1 Introduction

A classical problem in the decision-making field is how to obtain a collective ranking or a winning candidate from individual rankings of a set of candidates or alternatives. A usual way to tackle this problem is to consider scoring rules for aggregating the individual rankings. In scoring rules, fixed scores are assigned to the different ranks obtained by the candidates, and the candidates are then ordered according to the total number of points they receive. Notice that scoring rules are the best-known examples of positional voting systems (see Llamazares and Peña 2015a), where voters rank order the candidates from best to worst and a set of winners is selected using the positions of the candidates in the voters’ preference orders.

Due to their simplicity and good properties, scoring rules have received considerable attention in the literature (see, for instance, Llamazares and Peña 2015b and references therein). Nowadays, scoring rules are used in sports competitions such as the Formula One World Championship, the IndyCar Series Championship or the Motorcycle World Championship. Likewise, they are also used for awarding the FIFA Ballon d’Or Award, the Baseball Writers’ Most Valuable Player Award or the Most Valuable Player of the National Basketball Association (MVP of the NBA).

However, one of the most important issues in the field of scoring rules is the choice of the scoring vector to use, because a candidate who is not the winner under the scoring vector initially imposed could become the winner if a different one is used. For instance, the scoring vector used in the Formula One World Championship has changed several times. Thus, from 2003 to 2009, the scoring vector used for selecting the winner of the championship was (10, 8, 6, 5, 4, 3, 2, 1). In 2008, the winner was Lewis Hamilton, followed by Felipe Massa. However, from 1991 to 2002, only the first six positions were considered and the scoring vector used was (10, 6, 4, 3, 2, 1). If this scoring vector had been used in 2008, then the winner would have been Felipe Massa (see Llamazares and Peña 2013 for another example of this).

To avoid the previous problem, Cook and Kress (1990) introduced Data Envelopment Analysis (DEA) in this context in order to evaluate each candidate with the most favorable scoring vector for him/her. However, one important shortcoming of their model is that several candidates are often efficient, i.e., they achieve the maximum attainable score. For this reason, some procedures to discriminate among efficient candidates have appeared in the literature (see, for instance, Green et al. 1996; Hashimoto 1997; Noguchi et al. 2002; Obata and Ishii 2003). Nevertheless, as noted by Llamazares and Peña (2009), some of the previous models have a serious drawback from the point of view of Social Choice Theory: the relative order between two candidates may be altered when the number of first, second, ..., \(k\hbox {th}\) ranks obtained by other candidates changes, even though there is no variation in the number of first, second, ..., \(k\hbox {th}\) ranks obtained by these two candidates.

On the basis of the pioneering work of Cook and Kress (1990), several models have appeared in the literature to deal with this kind of problems (see, for instance, Hashimoto and Wu 2004; Contreras et al. 2005; Foroughi et al. 2005; Foroughi and Tamiz 2005; Wang and Chin 2007; Wang et al. 2007a, b, 2008; Wu et al. 2009; Amin and Sadeghi 2010; Soltanifar et al. 2010; Contreras 2011; Hosseinzadeh Lotfi and Fallahnejad 2011; Ebrahimnejad 2012; Foroughi and Aouni 2012; Hosseinzadeh Lotfi et al. 2013; Llamazares and Peña 2013; Hadi-Vencheh 2014).

Among this great variety of models, we would like to point out those of Hashimoto (1997) and Llamazares and Peña (2013). Although Hashimoto’s model has the above-described shortcoming (see Llamazares and Peña 2009), it is very interesting because it uses the DEA super-efficiency model (see Andersen and Petersen 1993) for breaking ties for first place. Moreover, this model also considers convex sequences of weights, which is a very natural condition in this context (see Stein et al. 1994). For their part, Llamazares and Peña (2013) avoid the above-described shortcoming by putting together in a single restriction the constraints of the candidates that are not being evaluated.

In this paper we combine both methodologies: we propose and analyze a model with convex sequences of weights where the constraints of the candidates that are not being evaluated are put together in a single restriction. In this way, in our model, the relative order between two candidates cannot be altered by variations in the number of first, second, ..., \(k\hbox {th}\) ranks obtained by the remaining candidates. Moreover, we also give a closed expression for the scores assigned to the candidates, and thus we can obtain the winning candidates without solving the proposed model.

The rest of the paper is organized as follows. In Sect. 2 we recall Cook and Kress’ model and Hashimoto’s model. In Sect. 3 we present our model and give a closed expression for the scores obtained by the candidates. Finally, Sect. 4 is devoted to conclusions. All proofs are in the “Appendix”.

2 Models of Cook and Kress (1990) and Hashimoto (1997)

Let \({\mathscr {A}}=\{A_1,\dots ,A_m\}\) be a set of candidates and suppose that each voter selects k candidates and ranks them from top to \(k\hbox {th}\) place. Under the scoring rule associated with the scoring vector \((w_1,\dots ,w_k)\), the candidate \(A_i\) receives \(Z_i=\sum _{j=1}^k v_{ij}w_j\) points, where \(v_{ij}\) is the number of \(j\hbox {th}\) place ranks that candidate \(A_i\) occupies, and the candidates are ordered according to the score obtained. Two of the best known scoring rules are the plurality rule, where \(w_1=1\) and \(w_j=0\) for all \(j\in \{2,\dots ,k\}\), and the Borda rule, where \(k=m\) and \(w_j=m-j\) for all \(j\in \{1,\dots ,m\}\).
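
As a small computational illustration, the following Python sketch computes these scores for a hypothetical profile with \(m=5\) candidates, \(k=3\) ranked places and \(n=20\) voters (the rank counts and candidate labels are ours and are not taken from any example cited in this paper); the same profile is reused in later sketches.

```python
# Minimal sketch (hypothetical data): computing the scores Z_i = sum_j v_ij * w_j.
# Rows of V are candidates A..E, columns are ranks 1..k (here k = 3); V[i][j] is the
# number of (j+1)-th place votes received by candidate i. Each column sums to n = 20.
V = [
    [8, 5, 4],   # A
    [6, 6, 3],   # B
    [3, 5, 6],   # C
    [2, 3, 5],   # D
    [1, 1, 2],   # E
]

def scores(V, w):
    """Total points of each candidate under the scoring vector w."""
    return [sum(v_ij * w_j for v_ij, w_j in zip(row, w)) for row in V]

print(scores(V, [1, 0, 0]))  # plurality: only first places count -> [8, 6, 3, 2, 1]
print(scores(V, [3, 2, 1]))  # a Borda-style vector for the truncated ballot -> [38, 33, 25, 17, 7]
```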

One of the most important questions in this topic is the choice of the scoring vector to use, given that this choice could determine the winning candidate. To avoid this problem, Cook and Kress (1990) suggested evaluating each candidate with the most favorable scoring vector for him/her. The DEA/AR (Data Envelopment Analysis/Assurance Region) model suggested by these authors was

$$\begin{aligned} \begin{aligned} Z_o^{*}(\varepsilon ) = \max&\,\, \sum _{j=1}^k v_{oj} w_j,\\ {\mathrm {s.t.}}&\,\, \sum _{j=1}^k v_{ij} w_j\le 1,\quad i=1,\dots ,m,\\&\,\, w_j-w_{j+1}\ge d(j,\varepsilon ),\quad j=1,\dots ,k-1,\\&\,\, w_k\ge d(k,\varepsilon ), \end{aligned} \end{aligned}$$
(1)

where \(\varepsilon \ge 0\) and the functions \(d(j,\varepsilon )\), called the discrimination intensity functions, are nonnegative and nondecreasing in \(\varepsilon \). Moreover, \(d(j,0)=0\) for all \(j\in \{1,\dots ,k\}\).
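
To make the model concrete, the next sketch solves Model (1) with a standard LP solver for one candidate of the hypothetical profile introduced above, taking \(d(j,\varepsilon )=\varepsilon \) with \(\varepsilon =0.01\); both the profile and this particular choice of discrimination intensity functions are illustrative assumptions, not data from the paper.

```python
# Sketch of Model (1) (Cook and Kress 1990) for one candidate, using scipy's LP solver.
# Hypothetical data: 5 candidates, k = 3 ranked places, n = 20 voters; d(j, eps) = eps.
from scipy.optimize import linprog

V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
k, eps = len(V[0]), 0.01
o = 0                                    # index of the evaluated candidate

c = [-x for x in V[o]]                   # maximize sum_j v_oj w_j (linprog minimizes)
A_ub, b_ub = [], []
for row in V:                            # sum_j v_ij w_j <= 1 for every candidate i
    A_ub.append(list(row))
    b_ub.append(1.0)
for j in range(k - 1):                   # w_j - w_{j+1} >= eps  ->  -w_j + w_{j+1} <= -eps
    A_ub.append([0] * j + [-1, 1] + [0] * (k - 2 - j))
    b_ub.append(-eps)
A_ub.append([0] * (k - 1) + [-1])        # w_k >= eps
b_ub.append(-eps)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(round(-res.fun, 4))                # Z_o^*(eps); a value of 1.0 means the candidate is efficient
```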

One important shortcoming of the previous model is that several candidates are often efficient, i.e., they achieve the maximum attainable score (\(Z_o^{*}(\varepsilon )=1\)). For this reason, Hashimoto (1997) proposed to apply the DEA super-efficiency model (see Andersen and Petersen 1993) to Cook and Kress’s model: by removing in Model (1) the constraint corresponding to the candidate that is being evaluated, efficient candidates can achieve scores greater than one and, in this way, ties for first place can be broken. Moreover, Hashimoto (1997) considered \(\,d(j,\varepsilon )=\varepsilon \,\) for all \(j\in \{1,\dots ,k\}\), with \(\varepsilon \) small enough to guarantee a decreasing sequence of weights and to prevent the solution of the model from depending on the discrimination intensity functions. On the other hand, he added new constraints to the model to ensure a convex sequence of weights (for more on this, see Stein et al. 1994); that is, \(w_j - w_{j+1}\ge w_{j+1} - w_{j+2}\) for \(j=1,\dots ,k-2\) (or equivalently, \(w_j-2w_{j+1}+w_{j+2}\ge 0\) for \(j=1,\dots ,k-2\)). So, the model proposed by this author was

$$\begin{aligned} \widetilde{Z}_o^{*}(\varepsilon ) = \max&\quad \sum _{j=1}^k v_{oj}w_j,\nonumber \\ {\mathrm {s.t.}}&\,\, \sum _{j=1}^kv_{ij}w_j\le 1,\quad i=1,\dots ,m,\quad \;i\ne o \end{aligned}$$
(2a)
$$\begin{aligned}&\,\, w_j-w_{j+1}\ge \varepsilon , \quad j=1,\dots ,k-1, \end{aligned}$$
(2b)
$$\begin{aligned}&\,\, w_k\ge \varepsilon , \end{aligned}$$
(2c)
$$\begin{aligned}&\,\, w_j-2w_{j+1}+w_{j+2}\ge 0, \quad j=1,\dots ,k-2, \end{aligned}$$
(2d)

where \(\varepsilon \) is a positive non-Archimedean infinitesimal.
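
Since a non-Archimedean infinitesimal cannot be handled by a numerical solver, the following sketch of Model (2) replaces it with a small positive value of \(\varepsilon \), again on the hypothetical profile used above; it is only meant to show how the super-efficiency constraint set differs from that of Model (1).

```python
# Sketch of Hashimoto's super-efficiency Model (2) for one candidate (hypothetical data,
# same profile as before). The non-Archimedean infinitesimal is replaced by a small
# numerical epsilon, which is only a practical stand-in.
from scipy.optimize import linprog

V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
m, k, eps = len(V), len(V[0]), 1e-6
o = 0                                                     # evaluated candidate

c = [-x for x in V[o]]                                    # maximize sum_j v_oj w_j
A_ub = [list(row) for i, row in enumerate(V) if i != o]   # (2a): drop the o-th constraint
b_ub = [1.0] * (m - 1)
for j in range(k - 1):                                    # (2b): w_j - w_{j+1} >= eps
    A_ub.append([0] * j + [-1, 1] + [0] * (k - 2 - j))
    b_ub.append(-eps)
A_ub.append([0] * (k - 1) + [-1])                         # (2c): w_k >= eps
b_ub.append(-eps)
for j in range(k - 2):                                    # (2d): w_j - 2w_{j+1} + w_{j+2} >= 0
    A_ub.append([0] * j + [-1, 2, -1] + [0] * (k - 3 - j))
    b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(-res.fun)   # > 1 here: the candidate that was efficient in Model (1) is now discriminated
```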

Although Hashimoto’s model makes it possible to discriminate among efficient candidates, it has an important drawback: the number of first, second, ..., \(k\hbox {th}\) ranks obtained by inefficient candidates may change the order of the efficient candidates [see an example of this shortcoming in Llamazares and Peña (2009)]. For this reason, in the next section we propose a model that avoids this drawback. This model is based on Hashimoto’s model and on the methodology suggested by Llamazares and Peña (2013).

3 Our Model

Consider Hashimoto’s model. Notice that constraints (2b) are equivalent to \(w_{k-1}-w_{k}\ge \varepsilon \) due to the convexity of the sequence of weights. Moreover, in this restriction, instead of using the parameter \(\varepsilon \) as the lower bound, we use the weight \(w_k\), an approach that has been considered by authors such as Stein et al. (1994) and Contreras et al. (2005).

Now, to prevent the positions obtained by inefficient candidates from changing the order of the efficient candidates, we put together in a single restriction the constraints of the candidates that are not being evaluated (see Llamazares and Peña 2013). So, we replace the constraints (2a) by their sum. Since

$$\begin{aligned} \sum _{\substack{i=1\\ i\ne o}}^{m} \sum _{j=1}^{k} v_{ij}w_{j} = \sum _{j=1}^{k} w_{j} \sum _{\substack{i=1\\ i\ne o}}^{m} v_{ij} = \sum _{j=1}^{k} w_{j} (n-v_{oj}), \end{aligned}$$

where n is the number of voters, we get the constraint

$$\begin{aligned} \sum _{j=1}^k (n-v_{oj}) w_j\le m-1. \end{aligned}$$

Therefore, our model can be expressed as follows:

$$\begin{aligned} \begin{aligned} \widehat{Z}_o^{*}(\varepsilon ) = \max&\,\, \sum _{j=1}^k v_{oj} w_j,\\ {\mathrm {s.t.}}&\,\, \sum _{j=1}^k (n-v_{oj}) w_j\le m-1,\\&\,\, w_{k-1}-w_{k}\ge w_{k},\\&\,\, w_k\ge \varepsilon ,\\&\,\, w_j-2w_{j+1}+w_{j+2}\ge 0, \quad j=1,\dots ,k-2, \end{aligned} \end{aligned}$$
(3)

where we maximize the score of each candidate under the assumption that the total score of the remaining candidates is less than or equal to the number of candidates minus 1. Moreover, we also suppose convex sequences of weights.

To ease the analysis of this model, in the following lemma we give an alternative representation of it. In this way, we get an equivalent model where the convexity restrictions on the variables are replaced by nonnegativity constraints.

Lemma 1

Model (3) can be expressed as

$$\begin{aligned} \begin{aligned} \widehat{Z}_o^{*}(\varepsilon ) = \max&\,\, \sum _{j=1}^k V_{oj} W_j + \varepsilon V_{ok},\\ {\mathrm {s.t.}}&\,\, \sum _{j=1}^k \left( \frac{nj(j+1)}{2}-V_{oj}\right) W_j \le \delta _{o}(\varepsilon ),\\&\,\, W_j\ge 0,\quad j=1,\dots ,k, \end{aligned} \end{aligned}$$
(4)

where

$$\begin{aligned}&\displaystyle W_{j} = w_{j}-2w_{j+1}+w_{j+2}, \quad \text {for all } j\in \{1,\dots , k-2\},\\&\displaystyle W_{k-1} = w_{k-1}-2w_{k},\\&\displaystyle W_{k} = w_{k}-\varepsilon ,\\&\displaystyle V_{oj} = \sum _{l=1}^{j}(j+1-l)v_{ol}, \quad \text {for all }\quad j\in \{1,\dots , k\},\\&\displaystyle \delta _{o}(\varepsilon ) = (m-1)- \left( \frac{nk(k+1)}{2}- V_{ok}\right) \varepsilon . \end{aligned}$$
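
The substitution of Lemma 1 can be checked numerically; the sketch below verifies, for the hypothetical profile used in Sect. 2 and an arbitrary weight vector (both illustrative), that the objective function and the aggregated constraint of Model (3) coincide with those of Model (4) after the change of variables.

```python
# Numerical sanity check of the change of variables in Lemma 1 (hypothetical data).
from fractions import Fraction as F

V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
m, k, n = 5, 3, 20
o, eps = 0, F(1, 100)
v_o = [F(x) for x in V[o]]

w = [F(3, 10), F(1, 10), F(1, 20)]        # an arbitrary weight vector (feasibility not needed)

w_pad = w + [F(0), F(0)]                  # convention w_{k+1} = w_{k+2} = 0
W = [w_pad[j] - 2 * w_pad[j + 1] + w_pad[j + 2] for j in range(k)]
W[k - 1] = w[k - 1] - eps                 # W_k = w_k - eps

V_o = [sum((j + 1 - l) * v_o[l - 1] for l in range(1, j + 1)) for j in range(1, k + 1)]
delta = (m - 1) - (F(n * k * (k + 1), 2) - V_o[k - 1]) * eps

# Objective of Model (3) equals the objective of Model (4):
assert sum(a * b for a, b in zip(v_o, w)) == sum(a * b for a, b in zip(V_o, W)) + eps * V_o[k - 1]

# The slack of the aggregated constraint is preserved as well:
slack3 = (m - 1) - sum((n - a) * b for a, b in zip(v_o, w))
slack4 = delta - sum((F(n * j * (j + 1), 2) - V_o[j - 1]) * W[j - 1] for j in range(1, k + 1))
assert slack3 == slack4
print("Lemma 1 change of variables verified on this example.")
```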

Notice that

$$\begin{aligned} V_{ok} = k v_{o1} + (k-1)v_{o2} + \cdots + v_{ok} \le k n \le \frac{nk(k+1)}{2}. \end{aligned}$$

Therefore, the function \(\delta _{o}(\varepsilon )\) is nonincreasing in \(\varepsilon \) and, consequently, the feasible set of Model (4) does not increase when the value of \(\varepsilon \) increases. Hence \(\widehat{Z}_o^{*}(\varepsilon )\) is a nonincreasing function; that is, if \(\varepsilon _1 > \varepsilon _2\), then \(\widehat{Z}_o^{*}(\varepsilon _1) \le \widehat{Z}_o^{*}(\varepsilon _2)\).

In order to ensure that Model (4) is feasible, we need to impose the condition \(\delta _{o}(\varepsilon ) \ge 0\). Given that the feasible set of the previous model depends on the evaluated candidate, the condition \(\min _{o=1,\dots ,m} \delta _{o}(\varepsilon ) \ge 0\) guarantees that all the feasible sets are non-empty.

On the other hand, it is worth noting that if a candidate \(A_{o}\) gets all the first ranks, then he/she is the winner. Given that \(V_{oj}=jv_{o1} + (j-1)v_{o2} + \cdots + v_{oj}\), when \(v_{o1}=n\) (and, consequently, \(v_{o2} = \cdots = v_{ok}=0\)), we have \(V_{oj}=jn\) for all \(j\in \{1,\dots ,k\}\). Therefore, for this candidate, the feasible set of Model (4) is

$$\begin{aligned} S=\left\{ (W_1,\dots ,W_k)\in {\mathbb {R}}_{+}^k \;\;\vert \;\; \sum _{j=2}^k \frac{nj(j-1)}{2}W_j\le \delta _{o}(\varepsilon )\right\} , \end{aligned}$$

and Model (4) is unbounded. Consequently, candidate \(A_{o}\) is the winner.

In the following theorem we give the optimal value of this program for the remaining cases.

Theorem 1

Consider Model (4) and let \(V_{o1}<n\). Then \(\widehat{Z}_o^{*}(\varepsilon )=\delta _{o}(\varepsilon ) V_{o}^{*} + \varepsilon V_{ok}\), where

$$\begin{aligned} V_{o}^{*}=\max _{j=1,\dots ,k} V_{o}^{j} \quad \text{ and }\quad V_{o}^{j} = \frac{V_{oj}}{\displaystyle \frac{nj(j+1)}{2}-V_{oj}}. \end{aligned}$$
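
As a consistency check, the following sketch computes the closed-form score of Theorem 1 for the hypothetical profile used earlier and compares it with the optimal value of Model (3), solved directly as a linear program in the original weights; the two values must coincide by Lemma 1 and Theorem 1. The profile and the value of \(\varepsilon \) are illustrative.

```python
# Closed-form score of Theorem 1 versus a direct LP solution of Model (3) (hypothetical data).
from scipy.optimize import linprog

V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
m, k, n = 5, 3, 20
o, eps = 0, 0.01                         # evaluated candidate (v_o1 < n) and a small positive eps
v_o = V[o]

# Closed form: Z_o^*(eps) = delta_o(eps) * V_o^* + eps * V_ok
V_cum = [sum((j + 1 - l) * v_o[l - 1] for l in range(1, j + 1)) for j in range(1, k + 1)]
V_star = max(V_cum[j - 1] / (n * j * (j + 1) / 2 - V_cum[j - 1]) for j in range(1, k + 1))
delta = (m - 1) - (n * k * (k + 1) / 2 - V_cum[k - 1]) * eps
closed = delta * V_star + eps * V_cum[k - 1]

# Model (3) in the original weights w_1, ..., w_k
c = [-x for x in v_o]                                           # linprog minimizes
A_ub = [[n - x for x in v_o],                                   # aggregated constraint <= m - 1
        [0] * (k - 2) + [-1, 2],                                # w_{k-1} - w_k >= w_k
        [0] * (k - 1) + [-1]]                                   # w_k >= eps
A_ub += [[0] * j + [-1, 2, -1] + [0] * (k - 3 - j) for j in range(k - 2)]   # convexity
b_ub = [m - 1, 0.0, -eps] + [0.0] * (k - 2)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))        # the constraints imply w >= 0 anyway

print(round(closed, 6), round(-res.fun, 6))                     # both 2.5 for these data
```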

In the following subsections we analyze the behavior of our model according to whether the parameter \(\varepsilon \) is zero or positive.

3.1 The Case \(\varepsilon = 0\)

When \(\varepsilon = 0\), Model (4) can be written as

$$\begin{aligned} \begin{aligned} \widehat{Z}_o^{*} = \max&\,\, \sum _{j=1}^k V_{oj} W_j,\\ \text{ s.t. }&\,\, \sum _{j=1}^k \left( \frac{nj(j+1)}{2}-V_{oj}\right) W_j \le m-1,\\&\,\, W_j\ge 0,\quad j=1,\dots ,k. \end{aligned} \end{aligned}$$
(5)

By Theorem 1, the score obtained by the candidate \(A_{o}\) when \(V_{o1}<n\) is

$$\begin{aligned} \widehat{Z}_o^{*}=(m-1)V_{o}^{*}=(m-1)\max _{j=1,\dots ,k}\frac{V_{oj}}{\displaystyle \frac{nj(j+1)}{2}-V_{oj}}. \end{aligned}$$

For instance, consider the example given in Table 1, taken from Cook and Kress (1990, p. 1309). There is a tie for first place between candidates C and D, and the order of the remaining candidates is B, A, F and E. A way to break the tie between C and D will be explained in the next subsection.

Table 1 Ranks obtained by each candidate and values of \(\widehat{Z}_o^{*}\)

It is worth noting that the order obtained through the scores \(\widehat{Z}_o^{*}\) is the same as that obtained by using the model proposed by Contreras et al. (2005, Prop. 3.4).

Proposition 1

The ranking given by Model (5) is the same as that obtained by using the expression

$$\begin{aligned} Z_{o}=\max _{j=1,\dots ,k} \frac{V_{oj}}{\displaystyle \frac{j(j+1)}{2}}. \end{aligned}$$

As Contreras et al. (2005) have pointed out, the score \(Z_{o}\) can be interpreted as the result of evaluating the candidates by using the normalized truncated Borda rules (on this, see Fishburn 1974) and choosing the maximum value. Moreover, the model proposed by these authors has some interesting properties such as monotonicity, Pareto-optimality and immunity to the absolute winner paradox (see Contreras et al. 2005; Llamazares and Peña 2015a). Therefore, Model (5) also satisfies these properties.
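
The following sketch checks this equivalence on the hypothetical profile used in the previous sketches: both scores are computed and every pairwise comparison between candidates coincides.

```python
# Sketch: the ranking induced by Model (5) (eps = 0) coincides with the one induced by the
# score Z_o of Proposition 1 on a hypothetical profile (m = 5, k = 3, n = 20).
V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
m, k, n = 5, 3, 20

def cumulative(v_o):
    """V_oj = sum_{l <= j} (j + 1 - l) v_ol for j = 1, ..., k."""
    return [sum((j + 1 - l) * v_o[l - 1] for l in range(1, j + 1)) for j in range(1, k + 1)]

Z_hat = []   # (m - 1) * max_j V_oj / (n j (j+1)/2 - V_oj)
Z_prop = []  # max_j V_oj / (j (j+1)/2), the expression of Proposition 1
for v_o in V:
    V_cum = cumulative(v_o)
    Z_hat.append((m - 1) * max(V_cum[j - 1] / (n * j * (j + 1) / 2 - V_cum[j - 1])
                               for j in range(1, k + 1)))
    Z_prop.append(max(V_cum[j - 1] / (j * (j + 1) / 2) for j in range(1, k + 1)))

# Every pairwise comparison agrees, hence the two rankings are identical.
agree = all((Z_hat[i] > Z_hat[p]) == (Z_prop[i] > Z_prop[p])
            for i in range(m) for p in range(m))
print(Z_hat, Z_prop, agree)
```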

3.2 The Case \(\varepsilon > 0\)

By Theorem 1, when \(V_{o1} < n\) we can express the score obtained by the candidate \(A_{o}\) as

$$\begin{aligned} \widehat{Z}_o^{*}(\varepsilon )&= (m-1)V_{o}^{*} + \varepsilon \left( V_{ok}-V_{o}^{*}\bigg (\displaystyle \frac{nk(k+1)}{2}-V_{ok}\bigg )\right) \nonumber \\&= (m-1)V_{o}^{*} + \varepsilon \left( V_{ok}(1+V_{o}^{*}) - V_{o}^{*}\displaystyle \frac{nk(k+1)}{2} \right) . \end{aligned}$$
(6)

As we can see, the graph of \(\widehat{Z}_o^{*}(\varepsilon )\) is a straight line. Moreover, given that \(\widehat{Z}_o^{*}(\varepsilon )\) is a nonincreasing function, the slope of this straight line is negative or null.

Notice that since

$$\begin{aligned} V_{o}^{k} = \frac{V_{ok}}{\displaystyle \frac{nk(k+1)}{2}-V_{ok}}, \end{aligned}$$

we get

$$\begin{aligned} V_{o}^{k} \frac{nk(k+1)}{2} = V_{ok} (1+V_{o}^{k}). \end{aligned}$$

Therefore, when \(V_{o}^{*} = V_{o}^{k}\) then

$$\begin{aligned} V_{ok}(1+V_{o}^{*}) - V_{o}^{*}\frac{nk(k+1)}{2} = 0 \end{aligned}$$

and, consequently,

$$\begin{aligned} \widehat{Z}_o^{*}(\varepsilon ) = (m-1)V_{o}^{k}, \end{aligned}$$

that is, the value \(\widehat{Z}_o^{*}(\varepsilon )\) does not depend on the choice of \(\varepsilon \).

Consider now the following example, taken from Obata and Ishii (2003) (see Table 2).

Table 2 First and second ranks obtained by each candidate

In this case \(m=7\), \(n=150\) and \(k=2\). Therefore

$$\begin{aligned} \widehat{Z}_o^{*}(\varepsilon )=6V_{o}^{*}+ \varepsilon \Big (V_{ok}(1+V_{o}^{*})-450V_{o}^{*}\Big ). \end{aligned}$$

When we focus on candidates A and B we have

$$\begin{aligned} \widehat{Z}_{{\mathrm {A}}}^{*}(\varepsilon ) =\frac{96}{59}-\frac{1650}{59}\varepsilon , \qquad \qquad \widehat{Z}_{{\mathrm {B}}}^{*}(\varepsilon ) =\frac{84}{61}-\frac{600}{61}\varepsilon . \end{aligned}$$

Both functions are drawn in Fig. 1 (note that we have considered different scales on the two axes; a hundredth on the x-axis is equal to one unit on the y-axis).

Fig. 1 Graphs of the functions \(\widehat{Z}_{{\mathrm {A}}}^{*}(\varepsilon )\) and \(\widehat{Z}_{{\mathrm {B}}}^{*}(\varepsilon )\)

As we can see in Fig. 1, when we take values of \(\varepsilon \) less than 2/145 we have \({\mathrm {A}} \succ {\mathrm {B}}\). However, if the values of \(\varepsilon \) are greater than 2/145, then \({\mathrm {B}} \succ {\mathrm {A}}\). In Table 3 we show this behavior for two specific values of \(\varepsilon \), \(\varepsilon =0.01\) and \(\varepsilon =1/70\). This last value is the maximum possible value for \(\varepsilon \), that is, the maximum value for which \(\min _{o=1,\dots ,m} \delta _{o}(\varepsilon )\ge 0\).

Table 3 Values of \(\delta _{o}(\varepsilon )\) and \(\widehat{Z}_o^{*}(\varepsilon )\) for the candidates of Table 2
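
The two affine expressions above can be reproduced from expression (6); the first- and second-place counts used below for A and B, namely (32, 10) and (28, 20), are those consistent with the expressions just displayed (the full profile of Table 2 is not reproduced here).

```python
# Reconstructing the affine score functions of candidates A and B of Table 2
# (m = 7, n = 150, k = 2) and their crossing point, using exact arithmetic.
from fractions import Fraction as F

m, n, k = 7, 150, 2

def line(v1, v2):
    """Return (intercept, slope) of Z_o^*(eps) = (m-1) V_o^* + eps (V_ok (1 + V_o^*) - 450 V_o^*)."""
    V1, V2 = F(v1), F(2 * v1 + v2)
    V_star = max(V1 / (n - V1), V2 / (F(n * k * (k + 1), 2) - V2))
    return (m - 1) * V_star, V2 * (1 + V_star) - V_star * F(n * k * (k + 1), 2)

a0, a1 = line(32, 10)                # candidate A
b0, b1 = line(28, 20)                # candidate B
print(a0, a1)                        # 96/59 and -1650/59
print(b0, b1)                        # 84/61 and -600/61
print((b0 - a0) / (a1 - b1))         # crossing point: eps = 2/145
```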

In the light of the previous example, it does not seem obvious how to choose a specific value of \(\varepsilon \). One possibility would be to consider a value of \(\varepsilon \) small enough that the winner does not depend on the choice of \(\varepsilon \) (this is the solution proposed by some authors in their models; see, for instance, Hashimoto 1997; Foroughi and Tamiz 2005). If we consider in our model an infinitesimal positive value of \(\varepsilon \), then, by expression (6), the candidates are ordered according to the score \((m-1)V_{o}^{*}\); that is, the score obtained by the candidates when \(\varepsilon = 0\). If for \(\varepsilon = 0\) there is a tie between two candidates, \(A_i\) and \(A_p\), then this means that \(V_{i}^{*}\) is equal to \(V_{p}^{*}\). Therefore,

$$\begin{aligned} 1+V_{i}^{*} = 1+V_{p}^{*} \quad \text{ and } \quad V_{i}^{*}\frac{nk(k+1)}{2} = V_{p}^{*}\frac{nk(k+1)}{2}, \end{aligned}$$

and, consequently,

$$\begin{aligned} \widehat{Z}_{i}^{*}(\varepsilon ) > \widehat{Z}_{p}^{*}(\varepsilon ) \;\Leftrightarrow \; V_{ik} > V_{pk}. \end{aligned}$$

Therefore, the use of an infinitesimal positive value of \(\varepsilon \) in our model is equivalent to ranking the candidates according to the score \(\widehat{Z}_o^{*}\) (that is, to considering \(\varepsilon = 0\)) and breaking the ties between the candidates according to the value of \(V_{ok}\). For instance, if we consider again Table 1, the tie between C and D can be broken by taking into account the values \(V_{{\mathrm {C}}4} = 38\) and \(V_{{\mathrm {D}}4} = 40\). So, in this case, \({\mathrm {D}} \succ {\mathrm {C}}\).
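
The sketch below illustrates this tie-breaking behavior on a small hypothetical profile with \(m=4\), \(k=2\) and \(n=20\) (the rank counts are ours): two candidates share the same value of \(V_{o}^{*}\), so they tie when \(\varepsilon =0\), and any small positive \(\varepsilon \) separates them according to \(V_{ok}\).

```python
# Hypothetical illustration of the tie-break: m = 4, k = 2, n = 20. Candidates P and Q
# have the same V_o^* (they tie when eps = 0), but V_Pk = 25 > V_Qk = 20, so any small
# positive eps ranks P above Q.
V = {'P': [10, 5], 'Q': [10, 0], 'R': [0, 10], 'S': [0, 5]}
m, k, n = 4, 2, 20

def score(v_o, eps):
    """Closed-form score of Theorem 1 for a candidate with v_o1 < n."""
    V_cum = [sum((j + 1 - l) * v_o[l - 1] for l in range(1, j + 1)) for j in range(1, k + 1)]
    V_star = max(V_cum[j - 1] / (n * j * (j + 1) / 2 - V_cum[j - 1]) for j in range(1, k + 1))
    delta = (m - 1) - (n * k * (k + 1) / 2 - V_cum[k - 1]) * eps
    return delta * V_star + eps * V_cum[k - 1]

print(score(V['P'], 0), score(V['Q'], 0))          # 3.0 and 3.0: a tie at eps = 0
print(score(V['P'], 1e-4), score(V['Q'], 1e-4))    # P now strictly above Q
```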

Another possibility would be to take the maximum possible value for \(\varepsilon \) (in the same spirit as in Cook and Kress’ model), although in the previous example (see Fig. 1) it does not seem the best choice. On the other hand, this solution has a serious shortcoming: the order between two candidates may depend on the ranks obtained by other candidates. For instance, suppose that candidates C, D and G obtain the ranks shown in Table 4, which are different from those shown in Table 2. Now, the maximum possible value for \(\varepsilon \) is 1/73, and, with this value, \({\mathrm {A}} \succ {\mathrm {B}}\) (\(\widehat{Z}_{{\mathrm {A}}}^{*}(\varepsilon )\) and \(\widehat{Z}_{{\mathrm {B}}}^{*}(\varepsilon )\) are still the functions of Fig. 1 and \(1/73 < 2/145\); see also Table 4).

Table 4 New ranks and values of \(\delta _{o}(\varepsilon )\) and \(\widehat{Z}_{o}^{*}(\varepsilon )\) for \(\varepsilon =1/73\)

To avoid taking a fixed value of \(\varepsilon \), Llamazares and Peña (2013) propose a model where the average value of the objective functions is considered (in our case, the average of the functions \(\widehat{Z}_o^{*}(\varepsilon )\)). In the sequel we follow this methodology. Moreover, for each candidate \(A_{o}\) we only consider the constraint \(\delta _{o}(\varepsilon )\ge 0\) instead of the restriction \(\min _{o=1,\dots ,m} \delta _{o}(\varepsilon )\ge 0\). This prevents the average of the function \(\widehat{Z}_o^{*}(\varepsilon )\) from depending on the results obtained by the remaining candidates.

The maximum value of \(\varepsilon \) for which the feasible set of Model (4) is non-empty is

$$\begin{aligned} \varepsilon _{o}^{*}=\sup \left\{ \varepsilon \ge 0 \;\vert \; \delta _{o}(\varepsilon )\ge 0\right\} . \end{aligned}$$

Since

$$\begin{aligned} \delta _{o}(\varepsilon ) = (m-1)- \left( \frac{nk(k+1)}{2}- V_{ok}\right) \varepsilon , \end{aligned}$$

we have

$$\begin{aligned} \varepsilon _{o}^{*}=\frac{m-1}{\displaystyle \frac{nk(k+1)}{2}-V_{ok}}. \end{aligned}$$

Once the value of \(\varepsilon _{o}^{*}\) is known, the score assigned to the candidate \(A_{o}\) is

$$\begin{aligned} \overline{Z}_o=\frac{1}{\varepsilon _{o}^{*}} \int _{0}^{\varepsilon _{o}^{*}} \widehat{Z}_o^{*}(\varepsilon )\, d\varepsilon , \end{aligned}$$

that is, the average of the function \(\widehat{Z}_o^{*}(\varepsilon )\). In the following theorem we show explicitly the value of \(\overline{Z}_o\).

Theorem 2

Consider Model (4). Then

$$\begin{aligned} \overline{Z}_o = (m-1)\frac{V_{o}^{*} + V_{o}^{k}}{2}. \end{aligned}$$

Notice that \(\overline{Z}_o\) is the average of \(\widehat{Z}_o^{*}\) and \((m-1)V_{o}^{k}\), and that, given two candidates \(A_i\) and \(A_p\),

$$\begin{aligned} V_{i}^{k} > V_{p}^{k}&\;\Leftrightarrow \; \frac{V_{ik}}{\displaystyle \frac{nk(k+1)}{2}-V_{ik}} > \frac{V_{pk}}{\displaystyle \frac{nk(k+1)}{2}-V_{pk}}\\&\;\Leftrightarrow \; \frac{V_{ik}}{nk(k+1)-2V_{ik}} > \frac{V_{pk}}{nk(k+1)-2V_{pk}}\\&\;\Leftrightarrow \; V_{ik}\Big (nk(k+1)-2V_{pk}\Big ) > V_{pk}\Big (nk(k+1)-2V_{ik}\Big )\\&\;\Leftrightarrow \; V_{ik} > V_{pk}. \end{aligned}$$

Therefore, if several candidates have the same score \(\widehat{Z}_o^{*}\), then ranking these candidates according to the values \(\overline{Z}_o\) gives the same result as breaking the ties with the values \(V_{ok}\).
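
The next sketch computes \(\varepsilon _{o}^{*}\) and the closed form of Theorem 2 for the hypothetical profile used in the earlier sketches and, since \(\widehat{Z}_o^{*}(\varepsilon )\) is affine on \([0,\varepsilon _{o}^{*}]\), checks that \(\overline{Z}_o\) equals the mean of the two endpoint values \(\widehat{Z}_o^{*}(0)\) and \(\widehat{Z}_o^{*}(\varepsilon _{o}^{*})\).

```python
# Closed form of Theorem 2 on a hypothetical profile, checked against the mean of the two
# endpoint values of the affine function Z_o^*(eps) on [0, eps_o^*].
from fractions import Fraction as F

V = [[8, 5, 4], [6, 6, 3], [3, 5, 6], [2, 3, 5], [1, 1, 2]]
m, k, n = 5, 3, 20
N_k = n * k * (k + 1) // 2                                    # n k (k + 1) / 2

for v_o in V:
    V_cum = [sum((j + 1 - l) * v_o[l - 1] for l in range(1, j + 1)) for j in range(1, k + 1)]
    ratios = [F(V_cum[j - 1], n * j * (j + 1) // 2 - V_cum[j - 1]) for j in range(1, k + 1)]
    V_star, V_k = max(ratios), ratios[-1]

    Z_bar = (m - 1) * (V_star + V_k) / 2                      # Theorem 2
    eps_max = F(m - 1, N_k - V_cum[-1])                       # eps_o^*

    # Z_o^*(eps) is affine on [0, eps_o^*], so its average equals the mean of the endpoints:
    # Z_o^*(0) = (m - 1) V_o^* and Z_o^*(eps_o^*) = eps_o^* V_ok = (m - 1) V_o^k.
    assert Z_bar == ((m - 1) * V_star + eps_max * V_cum[-1]) / 2

    print(v_o, float(Z_bar))
```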

It is also worth noting that, in general, \(\overline{Z}_o\) and \(\widehat{Z}_o^{*}\) provide different ranks. For instance, consider Table 5. The rank obtained with \(\widehat{Z}_o^{*}\) is

$$\begin{aligned} {\mathrm {A}} \succ {\mathrm {F}} \succ {\mathrm {B}} \succ {\mathrm {E}} \succ {\mathrm {D}} \succ {\mathrm {C}} \succ {\mathrm {G}}, \end{aligned}$$

while the rank obtained with \(\overline{Z}_o\) is

$$\begin{aligned} {\mathrm {A}} \succ {\mathrm {B}} \succ {\mathrm {F}} \succ {\mathrm {E}} \succ {\mathrm {D}} \succ {\mathrm {C}} \succ {\mathrm {G}}. \end{aligned}$$
Table 5 Values of \(\widehat{Z}_o^{*}\) and \(\overline{Z}_o\) for the candidates of Table 2

4 Concluding Remarks

In recent years, increasing attention has been devoted to the study of ranked voting systems where each candidate is evaluated with the most favorable scoring vector for him/her. However, some of them have an important shortcoming: the relative order between two candidates may be altered when the number of first, second, ..., \(k\hbox {th}\) ranks obtained by other candidates changes, even though there is no variation in the number of first, second, ..., \(k\hbox {th}\) ranks obtained by these two candidates. Likewise, some models are formulated using functions depending on a parameter \(\varepsilon \), and the order between two candidates may change according to the value of \(\varepsilon \).

In this paper we have proposed a model that avoids the above problems and that considers convex sequences of weights. Moreover, an important advantage of our model over other methods suggested in the literature is that we give a closed expression for the scores assigned to the candidates. Thus, we can obtain the order of the candidates without solving the proposed model and, in this way, it can be easily implemented in some contexts (for instance, in academic environments).