1 Introduction

Crowdsourcing is a term for “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call” [4]. Crowdsourcing platforms are increasingly used to execute tasks that are hard for computers but easy for humans. This way of completing small human intelligence tasks through a large number of individuals has been applied in various domains and plays an increasingly important role. It has also been described as a style of future work [10] that can be crucial, for example, in the context of decision support [2]. Controlling the quality of the obtained data and identifying the workers who tend to give correct answers remain major problems in this environment. The absence of quality control over participants (and their responses) reduces the efficiency of these platforms [5].

A participant who gives exact and precise answers is often referred to as an expert [12]. Several methods [5, 8, 9, 11] have been proposed to identify experts in this context. These methods assume that if a worker accepts a task, he will give an answer, even if he is not sure about it; in other words, they assume that a worker never skips a question. Moreover, existing crowdsourcing platforms do not allow workers to give partial answers. For example, if a task involves a multiple-choice question with answers A, B, C and D, a worker cannot state that the correct answer is either \(A\) or \(B\) (he is not sure which), but certainly not \(C\) or \(D\).

Some works first use “gold” data for which the true answers are known [6]. In that case, a degree of exactitude (the percentage of answers that are not wrong) and a degree of precision (the percentage of answers that are not partial) can be learned to measure the expertise level. Here, we assume that we do not have such data.

In our work, we construct a model that allows a worker to skip some questions or to answer them only partially. Our model makes use of belief functions, a powerful framework for taking such imperfections of the data into account. We propose a novel expert identification technique that calculates a degree of exactitude (based on the proportion of answers that are not wrong) and a degree of precision (based on the proportion of answers that are not partial). The “ideal” worker has both a high degree of exactitude and a high degree of precision. For example, in the multiple-choice case, if the correct answer is A, then the answer A is clearly better than the answer \(A\) or \(B\) (higher degree of precision).

The degrees of exactitude and precision are complementary, so using them together can lead to better expert identification methods. The rest of the paper is organized as follows. Section 2 formulates the expert identification problem more precisely, together with relevant related work. We present our approach in Sect. 3. The experimental evaluation is presented in Sect. 4.

2 Expert Identification in the Context of Crowdsourcing

2.1 Notions of an Expert

An expert, in the context of crowdsourcing, is a person who provides a large number of correct, complete and reliable answers: a person who has acquired a set of knowledge and skills about a particular area and can extract knowledge and give relevant responses with minimal cognitive effort. An expert is identified on crowdsourcing platforms by the precision and exactitude of his responses, the capability to assess the tasks a priori, and his knowledge, skills and learning level.

2.2 Expert Identification Methods

Evaluating the quality of workers and identifying experts in crowdsourcing is a long-standing problem. Some authors found that selecting workers at random is a good choice [1], while others found that establishing a good strategy for selecting experts is more beneficial [5]. Several studies have explored this area, but there are essentially two basic approaches to identifying experts.

Use “gold” data: give participants questions for which the answers are already known, and identify the workers who give the correct responses as the experts.

Use multiple workers: give each participant a score which represents his qualities and skills. In this context, Ipeirotis et al. [5] improved the expectation-maximization (EM) algorithm to generate a scalar score representing the quality of each worker. The authors of [9] proposed an evaluation of the participants based on their sets of labels. The work in [8] relies on behavioral observation to define a typology of workers. The authors of [11] proposed a graph-based algorithm (SPEAR) to rank users and identify experts.

All these methods, however, produce results with a certain level of imprecision and inaccuracy. In order to ensure a reliable identification, we propose to model this imperfection. We identify experts using the theory of belief functions [3, 13], a mathematical theory for representing imperfect information that provides a complete framework for modeling the participants' answers.

3 Identification of the Experts

We would like to identify the experts in a crowdsourcing platform. We assume that the questions (tasks) and a list of answers from the crowd workers are available. However, we do not assume any access to “gold” data that would contain the correct answers. Such a ground truth would clearly simplify the identification of experts considerably. Therefore, we develop novel techniques, based on the theory of belief functions, to calculate exactitude and precision degrees.

We use the following formalism. Each participant \(U_{j}\) provides \(r_{U_{j}}\) responses, each modeled by a mass function \(m_{U_{j}}^{\varOmega _{k}}\). Each response is specific to a question \(Q_{k}\) (\(k \in \{1,\ldots ,K\}\)), which has its own frame of discernment \(\varOmega _{k}=\{\omega _{1}^{Q_{k}},\ldots ,\omega _{n_k}^{Q_{k}}\}\), the set of all possible responses to question \(Q_k\). We therefore obtain a matrix of mass functions with \(s\) rows (participants) and \(K\) columns (questions), given by:

$$\begin{aligned} \begin{array}{cc} & \begin{array}{ccccc} Q_1 & \ldots & Q_k & \ldots & Q_K \end{array} \\ \begin{array}{c} U_1 \\ \vdots \\ U_j \\ \vdots \\ U_s \end{array} & \left[ \begin{array}{ccccc} m_{U_{1}}^{\varOmega _{1}} & \ldots & m_{U_{1}}^{\varOmega _{k}} & \ldots & m_{U_{1}}^{\varOmega _{K}} \\ \vdots & & \vdots & & \vdots \\ m_{U_{j}}^{\varOmega _{1}} & \ldots & m_{U_{j}}^{\varOmega _{k}} & \ldots & m_{U_{j}}^{\varOmega _{K}} \\ \vdots & & \vdots & & \vdots \\ m_{U_{s}}^{\varOmega _{1}} & \ldots & m_{U_{s}}^{\varOmega _{k}} & \ldots & m_{U_{s}}^{\varOmega _{K}} \end{array} \right] \end{array} \end{aligned}$$
(1)
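As an illustration, the following minimal Python sketch shows one possible encoding of such a matrix of mass functions: each mass function is a dictionary mapping frozensets of answers (focal elements) to masses. This encoding and all names are ours, not prescribed by the model.

```python
# Frame of discernment for one multiple-choice question Q_k
Omega_k = frozenset({"A", "B", "C", "D"})

# One row per participant, one column per question; each cell is a mass
# function. U_1 is sure of A; U_2 hesitates between A and B, with some
# ignorance; U_3 skips the question (total ignorance).
responses = [
    [{frozenset({"A"}): 1.0}],                     # U_1
    [{frozenset({"A", "B"}): 0.8, Omega_k: 0.2}],  # U_2
    [{Omega_k: 1.0}],                              # U_3 (skip)
]
```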

3.1 Exactitude Degree

The exactitude degree is based on the average distance between the response \(m_{U_{j}}^{\varOmega _{k}}\) proposed by participant \(U_j\) and the responses of all the other participants, represented by \(m_{U_{\varepsilon _{s-1}}}^{\varOmega _{k}}\). This representation of the other participants is obtained by averaging the responses proposed by the \(s-1\) other participants for the \(k^{th}\) question:

$$\begin{aligned} m_{U_{\varepsilon _{s-1}}}^{\varOmega _{k}}(X) = \frac{1}{s-1} \sum _{\substack{i=1 \\ i\ne j}}^{s} m_{U_{i}}^{\varOmega _{k}}(X) \end{aligned}$$
(2)

The distance is then calculated with the distance of Jousselme et al. [7]: \(d_{U_{j}}^{\varOmega _{k}} = d_J(m_{U_{j}}^{\varOmega _{k}}, m_{U_{\varepsilon _{s-1}}}^{\varOmega _{k}})\). Using this distance, we calculate the exactitude degree of each participant \(U_{j}\) as follows:

$$\begin{aligned} IE_{U_{j}}=1-\frac{1}{r_{(U_{j})}}\displaystyle {\sum _{k=1}^{K}d_{U_{j}}^{\varOmega _{k}}} \end{aligned}$$
(3)

where \(r_{U_{j}}\) is the number of questions answered by participant \(U_{j}\). The assumption behind this method is that the majority of participants give correct answers. This assumption is commonly made in information fusion and crowdsourcing.
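The sketch below illustrates one way to compute Eqs. (2) and (3), assuming for simplicity a single frame shared by all questions and the dictionary encoding of mass functions used above. The Jousselme distance is \(d_J(m_1,m_2)=\sqrt{\tfrac{1}{2}(m_1-m_2)^{T}D(m_1-m_2)}\) with the Jaccard similarity matrix \(D(A,B)=|A\cap B|/|A\cup B|\); function names and the `None`-for-skip convention are our own choices.

```python
import numpy as np
from itertools import combinations

def powerset(omega):
    """All non-empty subsets of the frame, as frozensets."""
    elems = list(omega)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def jousselme_distance(m1, m2, omega):
    """Jousselme et al. distance between two mass functions
    (dicts mapping frozensets of answers to masses)."""
    subsets = powerset(omega)
    v1 = np.array([m1.get(A, 0.0) for A in subsets])
    v2 = np.array([m2.get(A, 0.0) for A in subsets])
    # Jaccard similarity matrix D(A, B) = |A & B| / |A | B|
    D = np.array([[len(A & B) / len(A | B) for B in subsets] for A in subsets])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

def average_mass(masses):
    """Eq. (2): pointwise average of a list of mass functions."""
    avg = {}
    for m in masses:
        for focal, value in m.items():
            avg[focal] = avg.get(focal, 0.0) + value / len(masses)
    return avg

def exactitude_degree(j, all_masses, omega):
    """Eq. (3): IE of participant j. all_masses[i][k] is participant i's
    mass function for question k, or None if the question was skipped."""
    answered = [k for k, m in enumerate(all_masses[j]) if m is not None]
    total = 0.0
    for k in answered:
        others = [all_masses[i][k] for i in range(len(all_masses))
                  if i != j and all_masses[i][k] is not None]
        total += jousselme_distance(all_masses[j][k],
                                    average_mass(others), omega)
    return 1.0 - total / len(answered)
```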

The exactitude degree can be used to identify the experts. For this purpose, we use the k-means algorithm (with \(k=2\), for expert/non-expert). The set of experts is given by the cluster with the highest average exactitude degree.
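A minimal sketch of this clustering step; the use of scikit-learn's `KMeans` is our own choice of library, not specified by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def identify_experts(degrees):
    """Cluster 1-D degrees into two groups (expert / non-expert) and
    return the indices of the cluster with the highest mean degree."""
    X = np.asarray(degrees, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    expert_label = max(set(labels), key=lambda c: X[labels == c].mean())
    return [i for i, lab in enumerate(labels) if lab == expert_label]

# e.g. identify_experts([0.95, 0.91, 0.42, 0.38, 0.89]) -> [0, 1, 4]
```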

3.2 Precision Degree

Based on the model of responses given by the mass functions \(m_{U_{j}}^{\varOmega _{k}}\), we can define a degree of precision.

We recall that we allow the participants to give partial answers, which is crucial for calculating the precision degree. With the usual model of responses (where the worker must give a complete answer), such a degree could not be defined.

We denote by \(\displaystyle {\delta _{U_{j}}^{\varOmega _{k}}}\) the specificity degree of the mass function \(m_{U_{j}}^{\varOmega _{k}}\), defined in [14] as follows:

$$\begin{aligned} \delta _{U_{j}}^{\varOmega _{k}} =1- \displaystyle {\sum _{X\in 2^{\varOmega _{k}}} m_{U_{j}}^{\varOmega _{k}}(X){\frac{\log _2 (|X|)}{\log _2 (|\varOmega _{k}|)}}} \end{aligned}$$
(4)

This specificity degree translates the precision level of each response, independently of the other participants' responses. To measure the precision degree \(IP_{U_{j}}\) of each participant, we propose to average the specificity degrees over all answered questions:

$$\begin{aligned} IP_{U_{j}}=\frac{1}{r_{(U_{j})}}\displaystyle {\sum _{k=1}^{K}\delta _{U_{j}}^{\varOmega _{k}}} \end{aligned}$$
(5)

We determine the experts by using k-means (with \(k=2\)), as before. Note that here we do not need the assumption that the majority of participants answer correctly.
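A short sketch of Eqs. (4) and (5) under the same encoding as before (a shared frame for all questions is again our simplifying assumption):

```python
import math

def specificity(m, omega):
    """Eq. (4): specificity degree of a mass function on frame omega."""
    n = len(omega)
    return 1.0 - sum(v * math.log2(len(A)) / math.log2(n)
                     for A, v in m.items())

def precision_degree(worker_masses, omega):
    """Eq. (5): average specificity over the questions actually answered
    (skipped questions are represented by None)."""
    answered = [m for m in worker_masses if m is not None]
    return sum(specificity(m, omega) for m in answered) / len(answered)

# A precise answer {A} has specificity 1; total ignorance has specificity 0:
# specificity({frozenset({"A"}): 1.0}, {"A", "B", "C", "D"})   -> 1.0
# specificity({frozenset("ABCD"): 1.0}, {"A", "B", "C", "D"})  -> 0.0
```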

3.3 Global Degree

In order to obtain a global degree, we combine the two degrees into a single one for each participant. The global degree is given by a weighted average as follows:

$$\begin{aligned} GD_{U_{j}}=\beta _{U_{j}} IE_{U_{j}} +(1- \beta _{U_{j}}) IP_{U_{j}} \end{aligned}$$
(6)

The weight \(\beta _{U_{j}}\) is introduced to give more or less importance to each degree. Hereafter, we do not make any difference between the participants in the crowd, i.e. we use the same weight for all of them.
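Equation (6) is a one-line computation; the sketch below only fixes the default \(\beta = 0.5\), which is our illustrative choice.

```python
def global_degree(ie, ip, beta=0.5):
    """Eq. (6): weighted average of exactitude (IE) and precision (IP)."""
    return beta * ie + (1.0 - beta) * ip

# global_degree(0.9, 0.6) -> 0.75
```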

4 Experimentation

In the following, we generate mass functions in order to evaluate our approach in a setting where no gold data is used. We generate three kinds of participants. The experts are those who provide precise and exact responses: in the generation of their masses, a singleton is expected on the correct answer; however, if the expert is not totally sure of himself, ignorance is also a focal element. The imprecise experts are those who provide exact but imprecise answers: the correct singleton can appear in a disjunction, and ignorance can also be a focal element. The ignorants (sometimes called spammers) are those who give random responses, with mass functions drawn at random. To verify the efficiency of our approach, we run several experiments with 100 participants and 100 questions, where each experiment is repeated 10 times.
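For concreteness, here is one possible way to generate the three kinds of participants described above. The paper does not specify the exact distributions, so the confidence range and the number of random focal elements below are illustrative assumptions.

```python
import random
from itertools import combinations

def _subsets(omega):
    """All non-empty subsets of the frame, as frozensets."""
    elems = list(omega)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def gen_expert(omega, correct):
    """Precise and exact: mass on the correct singleton, rest on ignorance."""
    a = random.uniform(0.6, 1.0)  # illustrative confidence range
    return {frozenset({correct}): a, frozenset(omega): 1.0 - a}

def gen_imprecise_expert(omega, correct):
    """Exact but imprecise: correct answer hidden in a disjunction."""
    other = random.choice([w for w in omega if w != correct])
    a = random.uniform(0.6, 1.0)
    return {frozenset({correct, other}): a, frozenset(omega): 1.0 - a}

def gen_ignorant(omega):
    """Spammer: random focal elements with random normalized masses."""
    focal = random.sample(_subsets(omega), k=3)
    w = [random.random() for _ in focal]
    return {A: x / sum(w) for A, x in zip(focal, w)}
```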

The precision or the exactitude degree alone is insufficient to identify the experts. The global degree of Eq. (6) allows precise and exact responses to be identified simultaneously. In a first experiment (with results illustrated in Fig. 1), we vary the number of experts from 10% to 90%, without generating imprecise experts, and use the global degree in order to demonstrate the ability of our method to identify precise and exact responses simultaneously. In order to assess the importance of each degree, we vary in each case the weight \(\beta _{U_{j}}\) from 0.1 to 0.9. A good classification rate of 100% with \(\beta _{U_{j}} = 0.5\) shows that the exactitude and precision degrees are equally important for identifying experts. Our algorithm correctly identifies the experts and puts all the other participants in the ignorant class.

Fig. 1. Variation of the good classification rate according to the percentage of experts

To verify the stability of the good classification rates, in the next experiment (with results illustrated in Fig. 2) we vary the number of questions, with 35% experts, 35% imprecise experts and 30% ignorants, over 10 iterations, and we calculate the three degrees. We measure this stability with a perturbation rate, given by the standard deviation of the good classification rates over the 10 iterations. This experiment shows that a certain number of questions is necessary to ensure a good identification.

Fig. 2. Variation of the perturbation rate according to the different degrees

We find that 30 questions provide a reliable good classification rate. All the previous experiments show the ability of our method to identify the experts in the context of uncertain and imprecise responses. The recourse to the theory of belief functions ensures a reliable identification: it addresses the problem of imperfection and provides a sound framework for characterization. With both degrees, we detect the exactitude and precision level of each participant and correctly identify the experts in the crowd. To confirm the interest of the theory of belief functions, we compare our belief approach with a probabilistic approach, in which the mass function \(m_{U_j}^{\varOmega _k}\) modeling the responses proposed by each participant \(U_{j}\) is replaced by the pignistic probability:

$$\begin{aligned} BetP_{m_{U_j}^{\varOmega _k}}(\omega _{k})= \displaystyle \sum _{X \subseteq \varOmega _k, \omega _k \in X} \frac{m_{U_j}^{\varOmega _k}(X)}{(1-m_{U_j}^{\varOmega _k}(\emptyset )) |X|} \end{aligned}$$
(7)
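A minimal sketch of this pignistic transform, under the same dictionary encoding of mass functions as in the earlier sketches (the function name is ours):

```python
def pignistic(m, omega):
    """Eq. (7): pignistic probability BetP induced by the mass function m
    (a dict mapping frozensets to masses)."""
    conflict = m.get(frozenset(), 0.0)
    return {w: sum(v / ((1.0 - conflict) * len(A))
                   for A, v in m.items() if w in A)
            for w in omega}

# Example: mass split between {A} and {A, B}.
m = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
print(pignistic(m, {"A", "B"}))  # BetP: A -> 0.8, B -> 0.2
```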

Following the same principle as in Sect. 3, we calculate the exactitude degree as follows:

$$\begin{aligned} EP(U_{j})=1-\frac{1}{r_{(U_{j})}}\displaystyle {\sum _{k=1}^{K}d_{U_{j}}^{\varOmega _{k}}} \end{aligned}$$
(8)

where \(d_{U_{j}}^{\varOmega _{k}}\) is the Euclidean distance between the probabilities. We have to make the same assumption about the majority of correct answers, and we use k-means to characterize the experts. In this way, we obtain a probabilistic approach able to detect experts. We limit the comparison to the exactitude degree, since the specificity degree cannot be determined from a probability. In this experiment, we vary the percentages of experts and imprecise experts at the same time. The results are illustrated in Fig. 3. This figure shows the interest of using the theory of belief functions to identify the experts and the imprecise experts. The probabilistic approach cannot separate the experts from the imprecise experts: it loses the exactitude information and cannot model imprecision. The drop of the good classification rate to 0% reflects this inability, whereas with the belief approach, the precise and imprecise experts are better discriminated across all the variations. In a complex environment like crowdsourcing, the theory of belief functions can take into account all the imperfections of the participants' responses.

Fig. 3. Comparison between the belief function and the probability function

5 Conclusion

We introduced a new technique for characterizing the experts in a crowdsourcing platform using the theory of belief functions, in order to improve the quality of the data that can be obtained from such platforms. We use a model where the crowd workers are allowed to skip a question or to provide partial answers. Based on a belief model of the participants' responses, we calculate two complementary degrees: an exactitude degree, which translates the knowledge level of the participants, and a precision degree, which reflects their reliability level. We showed the ability of these degrees to support expert identification, and we demonstrated the interest of the theory of belief functions in a comparison with probability theory.