1 Introduction

The simplest and most popular application of Condorcet’s jury theorem (CJT)Footnote 1 is to the case of a group consisting of an odd number of homogeneous members, i.e., members who possess identical competence to identify the “better” alternative and who vote sincerely and independently, using simple majority rule (henceforth SMR) to aggregate the votes.Footnote 2

In many decision-making contexts, however, the group has an even number of members, and the literature on SMR rarely addresses committees with an even number of voters.Footnote 3 The first contribution of this paper is to apply SMR to collective decision making by a group with an even number of homogeneous voters.

In particular, we show that the probability that an odd-numbered set of homogeneous voters (in the sense that each is associated with the same probability of voting “correctly,” i.e., for the better alternative) reaches a correct decision under SMR is the same as the probability that an even-numbered set of homogeneous voters (one more) reaches a correct decision under a rule in which a majority (more than half) of the members decides and a uniformly random decision is taken when there is a tie (henceforth SMRE). This result also holds in a more general, asymmetric framework in which the voters are still homogeneous in their decision-making skills, but those skills depend on the state of nature; i.e., each member is associated with two probabilities of voting correctly, one for each of the two possible states of nature.Footnote 4

Several studies relax the assumption of identical competence and discuss heterogeneous groups of voters.Footnote 5 In many decision-making contexts, the competence structure of a group of decision makers is not common knowledge. Ben-Yashar and Paroush (2000) show that in such cases, a majority of an odd number of jurists is more likely to choose the better of two alternatives than a single jurist selected at random from among them.Footnote 6 Here, too, the literature does not address committees with an even number of voters.

The second contribution of this paper is a justification for using SMR, for a fixed-size committee with an even number of members, when the competence structure of the group of decision makers is not common knowledge. In particular, we show that the probability that an even-numbered set of voters reaches a correct decision under SMRE is the same as the expected probability that an odd-numbered set (one less) of voters reaches a correct decision under SMR. In other words, dropping one member at random from an even-numbered set of voters does not affect the expected probability of reaching a correct decision. This result also holds in a more general, asymmetric framework in which the voters are heterogeneous in their decision-making skills and those skills depend on the state of nature; i.e., each member is associated with two probabilities of voting correctly, corresponding to the two possible states of nature.

2 Homogeneous voters

2.1 The basic (symmetric) model

Let the decision-making group N consist of n = 2k + 1 members (henceforth voters), where n is an odd number. Let 1 and − 1 represent two equiprobable states of nature. The voters choose one of two alternatives, 1 or − 1, and the final decision is based on the members’ votes. There are therefore two possible correct decisions: 1 in state of nature 1 and − 1 in state of nature − 1. We assume that one alternative is “correct”; however, its identity is unknown to the voters. The voters are assumed to be homogeneous in their decision-making skills: each voter chooses the correct alternative with probability p, which reflects the voter’s decision-making skill (competency). Note that all voters share the same objective, namely, to select the correct alternative. We assume that voters vote independently and that ½ < p < 1. The vector \(\underline{p}_{h}^{n} = (p,...,p)\) refers to the n homogeneous members, each with probability p.

We use the following notation:

$$\pi \left( {p,i,n} \right) = \left( {\begin{array}{*{20}c} n \\ i \\ \end{array} } \right)p^{i} \left( {1 - p} \right)^{n - i} ,\;{\text{and}},$$
$$\pi_{{\overline{ \uparrow }}} \left( {p,k,n} \right) = \sum\limits_{i = k}^{n} {\left( {\begin{array}{*{20}c} n \\ i \\ \end{array} } \right)p^{i} \left( {1 - p} \right)^{n - i} } = \sum\limits_{i = k}^{n} {\pi \left( {p,i,n} \right)} .$$

Let \(\pi_{{{\text{SMR}}}} \left( {\underline{p}_{h}^{n} } \right)\) denote the probability that a group consisting of \(n\) homogeneous members chooses the correct alternative when employing SMR. Formally:

$$\pi_{{{\text{SMR}}}} \left( {\underline{{p_{h} }}^{n} } \right) = {\uppi }_{{\overline{ \uparrow }}} \left( {p,k + 1,2k + 1} \right)$$
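For concreteness, this probability is easy to evaluate numerically. The following minimal Python sketch (the helper names pi and pi_smr are ours, and the values p = 0.6 and k = 2 are purely illustrative) computes the binomial terms defined above:

```python
from math import comb

def pi(p, i, n):
    # pi(p, i, n): probability that exactly i of n independent voters vote correctly
    return comb(n, i) * p**i * (1 - p)**(n - i)

def pi_smr(p, k):
    # Probability that a simple majority (at least k + 1 of n = 2k + 1 voters) is correct
    n = 2 * k + 1
    return sum(pi(p, i, n) for i in range(k + 1, n + 1))

print(pi_smr(0.6, 2))  # five voters with p = 0.6 -> about 0.68256
```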

2.1.1 The result

We examine the case in which one member with decision-making skill x (i.e., probability x of voting correctly) is added to the set of n homogeneous decision makers. Let \(\pi_{{{\text{SMRE}}}} \left( {\left( {\underline{{p_{h}^{n} }} ,x} \right)} \right)\) denote the probability that this enlarged group chooses the correct alternative when employing SMRE. Formally:

$$\begin{aligned} \pi_{{{\text{SMRE}}}} \left( {\left( {\underline{{p_{h}^{n} }} ,x} \right)} \right) & = x{\uppi }_{{\overline{ \uparrow }}} \left( {p,k + 1,2k + 1} \right) + \left( {1 - x} \right){\uppi }_{{\overline{ \uparrow }}} \left( {p,k + 2,2k + 1} \right) \\ & \;\; + \frac{1}{2}\left( {x\pi \left( {p,k,2k + 1} \right) + \left( {1 - x} \right)\pi \left( {p,k + 1,2k + 1} \right)} \right) \\ \end{aligned}$$

Theorem 1:

$$\pi_{{{\text{SMRE}}}} \left( {\left( {\underline{{p_{h}^{n} }} ,x} \right)} \right) \ge \pi_{{{\text{SMR}}}} \left( {\underline{p}_{h}^{n} } \right) \Leftrightarrow x \ge p.$$

Proof:

$$\begin{aligned} x\pi_{{\overline{ \uparrow }}} & \left( {p,k + 1,2k + 1} \right) + \left( {1 - x} \right)\pi_{{\overline{ \uparrow }}} \left( {p,k + 2,2k + 1} \right) + \frac{1}{2}\left( {x\pi \left( {p,k,2k + 1} \right)~ + \left( {1 - x} \right)\pi \left( {p,k + 1,2k + 1} \right)} \right) \\ & \;\; \ge \pi_{{\overline{ \uparrow }}} \left( {p,k + 1,2k + 1} \right) \Leftrightarrow \left( {1 - x} \right)\left( {\pi_{{\overline{ \uparrow }}} \left( {p,k + 2,2k + 1} \right) - \pi_{{\overline{ \uparrow }}} \left( {p,k + 1,2k + 1} \right)} \right) \\ & \;\; + \frac{1}{2}\left( {x\pi \left( {p,k,2k + 1} \right) + \left( {1 - x} \right)\pi \left( {p,k + 1,2k + 1} \right)} \right) \ge 0 \\ & \;\; \Leftrightarrow \frac{1}{2}\left( {x\pi \left( {p,k,2k + 1} \right) - \left( {1 - x} \right)\pi \left( {p,k + 1,2k + 1} \right)} \right)~ \ge 0 \\ & \;\; \Leftrightarrow x\pi \left( {p,k,2k + 1} \right) \ge \left( {1 - x} \right)\pi \left( {p,k + 1,2k + 1} \right) \Leftrightarrow \frac{x}{{1 - x}} \ge \frac{{\pi \left( {p,k + 1,2k + 1} \right)}}{{\pi \left( {p,k,2k + 1} \right)}} \\ & \;\; \Leftrightarrow \frac{x}{{1 - x}} \ge \frac{{\left( {\begin{array}{*{20}c} {2k + 1} \\ {k + 1} \\ \end{array} } \right)p^{{k + 1}} \left( {1 - p} \right)^{k} }}{{\left( {\begin{array}{*{20}c} {2k + 1} \\ k \\ \end{array} } \right)p^{k} \left( {1 - p} \right)^{{k + 1}} }} \Leftrightarrow \frac{x}{{1 - x}} \ge \frac{p}{{1 - p}} \Leftrightarrow x \ge p. \\ & \;\;{\text{Q.E.D.}} \\ \end{aligned}$$

Note that, as the proof shows, adding one member with decision-making skill x to a set of n = 2k + 1 homogeneous decision makers changes the probability of a correct decision only through two events. The first occurs when exactly one more vote is needed for the correct decision to be adopted (k correct votes out of n): the new member’s correct vote then leaves exactly half of the voters correct, and the decision is made randomly. The second occurs when there is a bare majority without the new member (k + 1 correct votes out of n): the new member’s incorrect vote then leaves exactly half of the voters correct, and again the decision is made randomly.

Theorem 1 shows that it is not worth enlarging a homogeneous group with an odd number of decision makers by one member unless the new member is a more competent decision maker than the existing group members.Footnote 7
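The comparison in Theorem 1 can also be verified numerically. The sketch below is a minimal illustration (the function names and the values p = 0.6 and k = 2 are our own choices): it evaluates both rules and confirms that adding a member with x = p leaves the probability unchanged, while x > p raises it and x < p lowers it.

```python
from math import comb, isclose

def pi(p, i, n):
    # Probability of exactly i correct votes out of n
    return comb(n, i) * p**i * (1 - p)**(n - i)

def tail(p, j, n):
    # Probability of at least j correct votes out of n
    return sum(pi(p, i, n) for i in range(j, n + 1))

def pi_smr(p, k):
    # SMR with n = 2k + 1 homogeneous voters of competence p
    return tail(p, k + 1, 2 * k + 1)

def pi_smre(p, k, x):
    # SMRE with the same n voters plus one member of competence x; ties broken at random
    n = 2 * k + 1
    return (x * tail(p, k + 1, n) + (1 - x) * tail(p, k + 2, n)
            + 0.5 * (x * pi(p, k, n) + (1 - x) * pi(p, k + 1, n)))

p, k = 0.6, 2
assert isclose(pi_smre(p, k, p), pi_smr(p, k))                   # x = p: no change
assert pi_smre(p, k, 0.8) > pi_smr(p, k) > pi_smre(p, k, 0.55)   # Theorem 1: x > p helps, x < p hurts
```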

2.2 The general (asymmetric) model

In this section, the voters are again assumed to be homogeneous in their decision-making skills, but their skills are parameterized by two state-dependent probabilities.Footnote 8 A voter chooses the correct alternative 1 (− 1) with probability \(p^{\text I}\) (\(p^{\text {II}}\)) in state of nature 1 (− 1). We assume that the average of a voter’s probabilities in the two states of nature exceeds one half.Footnote 9

Let \(\pi_{{{\text{SMR}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} } \right)\) denote the probability that a group consisting of \(n\) homogeneous members with probabilities \(p^{I} ,p^{II}\) chooses the correct alternative when employing SMR. Formally:

$$\pi_{{{\text{SMR}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} } \right) = \frac{1}{2}{\uppi }_{{\overline{ \uparrow }}} \left( {p^{{\text{I}}} ,k + 1,2k + 1} \right) + \frac{1}{2}{\uppi }_{{\overline{ \uparrow }}} \left( {p^{{{\text{II}}}} ,k + 1,2k + 1} \right).$$

We examine the case where one member who is associated with decision-making skills x in state 1 and y in state − 1 is added to a set of n homogeneous decision makers.

Let \(\pi_{{{\text{SMRE}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} ,\left( {x,y} \right)} \right)\) denote the probability that a group consisting of \(n\) homogeneous members who are each associated with a competency \(\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)\) and one member who is associated with decision-making skills \(\left( {x,y} \right)\) chooses the correct alternative when employing SMRE. Formally:

$$\begin{aligned} \pi_{\text{SMRE}} \left( \underline{\left( p^{\text{I}} ,p^{\text{II}} \right)}_{h}^{n} ,\left( x,y \right) \right) & = \frac{1}{2}\left( x\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 1,2k + 1 \right) + \left( 1 - x \right)\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 2,2k + 1 \right) \right) \\ & \quad + \frac{1}{2}\left( y\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 1,2k + 1 \right) + \left( 1 - y \right)\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 2,2k + 1 \right) \right) \\ & \quad + \frac{1}{4}\left( x\pi \left( p^{\text{I}} ,k,2k + 1 \right) + \left( 1 - x \right)\pi \left( p^{\text{I}} ,k + 1,2k + 1 \right) \right) \\ & \quad + \frac{1}{4}\left( y\pi \left( p^{\text{II}} ,k,2k + 1 \right) + \left( 1 - y \right)\pi \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right). \end{aligned}$$

Note that if the number of supporters of an alternative is exactly half the number of voters, the tie is broken uniformly at random, so the correct alternative is then chosen with probability one-half. Since each state of nature itself occurs with probability one-half, each tie term enters with weight ½ × ½ = ¼; this is why the last two terms are multiplied by one-quarter.

We now examine whether adding one member with decision-making skills x and y to the set of n homogeneous decision makers is worthwhile when SMRE is employed. Theorem 2 characterizes the decision-making skills (x, y) for which it is.

Theorem 2:

$$\begin{gathered} \pi_{\text{SMRE}} \left( \underline{\left( p^{\text{I}} ,p^{\text{II}} \right)}_{h}^{n} ,\left( x,y \right) \right) \gtrless \pi_{\text{SMR}} \left( \underline{\left( p^{\text{I}} ,p^{\text{II}} \right)}_{h}^{n} \right) \Leftrightarrow \hfill \\ \left( p^{\text{I}} \left( 1 - p^{\text{I}} \right) \right)^{k} \left( x - p^{\text{I}} \right) \gtrless \left( p^{\text{II}} \left( 1 - p^{\text{II}} \right) \right)^{k} \left( p^{\text{II}} - y \right). \hfill \\ \end{gathered}$$

Proof:

$$\begin{aligned} & \frac{1}{2}\left( x\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 1,2k + 1 \right) + \left( 1 - x \right)\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 2,2k + 1 \right) \right) + \frac{1}{2}\left( y\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 1,2k + 1 \right) + \left( 1 - y \right)\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 2,2k + 1 \right) \right) \\ & \quad + \frac{1}{4}\left( x\pi \left( p^{\text{I}} ,k,2k + 1 \right) + \left( 1 - x \right)\pi \left( p^{\text{I}} ,k + 1,2k + 1 \right) \right) + \frac{1}{4}\left( y\pi \left( p^{\text{II}} ,k,2k + 1 \right) + \left( 1 - y \right)\pi \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right) \gtrless \frac{1}{2}\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 1,2k + 1 \right) + \frac{1}{2}\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 1,2k + 1 \right) \\ & \Leftrightarrow \frac{1}{2}\left( x - 1 \right)\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 1,2k + 1 \right) + \frac{1}{2}\left( 1 - x \right)\pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 2,2k + 1 \right) + \frac{1}{2}\left( y - 1 \right)\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 1,2k + 1 \right) + \frac{1}{2}\left( 1 - y \right)\pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 2,2k + 1 \right) \\ & \quad + \frac{1}{4}\left( x\pi \left( p^{\text{I}} ,k,2k + 1 \right) + \left( 1 - x \right)\pi \left( p^{\text{I}} ,k + 1,2k + 1 \right) \right) + \frac{1}{4}\left( y\pi \left( p^{\text{II}} ,k,2k + 1 \right) + \left( 1 - y \right)\pi \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right) \gtrless 0 \\ & \Leftrightarrow \frac{1}{2}\left( 1 - x \right)\left( \pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 2,2k + 1 \right) - \pi_{\overline{\uparrow}} \left( p^{\text{I}} ,k + 1,2k + 1 \right) \right) + \frac{1}{2}\left( 1 - y \right)\left( \pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 2,2k + 1 \right) - \pi_{\overline{\uparrow}} \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right) \\ & \quad + \frac{1}{4}\left( x\pi \left( p^{\text{I}} ,k,2k + 1 \right) + \left( 1 - x \right)\pi \left( p^{\text{I}} ,k + 1,2k + 1 \right) \right) + \frac{1}{4}\left( y\pi \left( p^{\text{II}} ,k,2k + 1 \right) + \left( 1 - y \right)\pi \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right) \gtrless 0 \\ & \Leftrightarrow - \frac{1}{4}\left( \left( 1 - x \right)\pi \left( p^{\text{I}} ,k + 1,2k + 1 \right) + \left( 1 - y \right)\pi \left( p^{\text{II}} ,k + 1,2k + 1 \right) \right) + \frac{1}{4}\left( x\pi \left( p^{\text{I}} ,k,2k + 1 \right) + y\pi \left( p^{\text{II}} ,k,2k + 1 \right) \right) \gtrless 0 \\ & \Leftrightarrow \left( p^{\text{I}} \left( 1 - p^{\text{I}} \right) \right)^{k} \left( - p^{\text{I}} \left( 1 - x \right) + x\left( 1 - p^{\text{I}} \right) \right) \gtrless \left( p^{\text{II}} \left( 1 - p^{\text{II}} \right) \right)^{k} \left( p^{\text{II}} \left( 1 - y \right) - y\left( 1 - p^{\text{II}} \right) \right) \\ & \Leftrightarrow \left( p^{\text{I}} \left( 1 - p^{\text{I}} \right) \right)^{k} \left( x - p^{\text{I}} \right) \gtrless \left( p^{\text{II}} \left( 1 - p^{\text{II}} \right) \right)^{k} \left( p^{\text{II}} - y \right). \end{aligned}$$

Q.E.D.
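The proof in fact shows that the difference \(\pi_{\text{SMRE}} - \pi_{\text{SMR}}\) is a positive multiple of \(\left( p^{\text I}(1-p^{\text I}) \right)^{k}(x-p^{\text I}) + \left( p^{\text{II}}(1-p^{\text{II}}) \right)^{k}(y-p^{\text{II}})\). The following sketch is a numerical check only (the function names and the random grid are ours); it confirms this relation, and hence the theorem, on randomly drawn parameter values:

```python
from math import comb, isclose
import random

def pi(p, i, n):
    # Probability of exactly i correct votes out of n
    return comb(n, i) * p**i * (1 - p)**(n - i)

def tail(p, j, n):
    # Probability of at least j correct votes out of n
    return sum(pi(p, i, n) for i in range(j, n + 1))

def smr(pI, pII, k):
    # SMR with n = 2k + 1 homogeneous voters and equiprobable states of nature
    n = 2 * k + 1
    return 0.5 * tail(pI, k + 1, n) + 0.5 * tail(pII, k + 1, n)

def smre(pI, pII, k, x, y):
    # SMRE after adding one member with state-dependent skills (x, y); ties broken at random
    n = 2 * k + 1
    return (0.5 * (x * tail(pI, k + 1, n) + (1 - x) * tail(pI, k + 2, n))
            + 0.5 * (y * tail(pII, k + 1, n) + (1 - y) * tail(pII, k + 2, n))
            + 0.25 * (x * pi(pI, k, n) + (1 - x) * pi(pI, k + 1, n))
            + 0.25 * (y * pi(pII, k, n) + (1 - y) * pi(pII, k + 1, n)))

random.seed(0)
for _ in range(200):
    k = random.randint(1, 6)
    pI, pII, x, y = (random.uniform(0.5, 1.0) for _ in range(4))
    diff = smre(pI, pII, k, x, y) - smr(pI, pII, k)
    cond = (pI * (1 - pI))**k * (x - pI) + (pII * (1 - pII))**k * (y - pII)
    # The difference equals the Theorem 2 expression times comb(2k+1, k) / 4
    assert isclose(diff, comb(2 * k + 1, k) * cond / 4, abs_tol=1e-12)
```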

Note that the special case of \(p^{{\text{I}}} = p^{{{\text{II}}}}\) and x = y is precisely the case of Theorem 1, i.e., the basic (symmetric) model.

Corollary 1:

If x = \(p^{{\text{I}}}\) and y = \(p^{{{\text{II}}}}\), then \(\pi_{{{\text{SMRE}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} ,\left( {x,y} \right)} \right) = \pi_{{{\text{SMR}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} } \right)\), which corresponds to the equality case x = p of Theorem 1 (the basic model).

Corollary 2:

If \(x > p^{{\text{I}}}\) and \(y > p^{{{\text{II}}}}\) then \(\pi_{{{\text{SMRE}}}} \left( {\underline{{\left( {p^{{\text{I}}} {,}p^{{{\text{II}}}} } \right)}}_{h}^{n} ,\left( {x,y} \right)} \right) > \pi_{{{\text{SMR}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} } \right)\).

If \(x < p^{{\text{I}}} { }\) and \(y < p^{{{\text{II}}}}\) then \(\pi_{{{\text{SMRE}}}} \left( {\underline{{\left( {p^{{\text{I}}}, p^{{{\text{II}}}} } \right)}}_{h}^{n} ,\left( {x,y} \right)} \right) < \pi_{{{\text{SMR}}}} \left( {\underline{{\left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)}}_{h}^{n} } \right)\)\(.\)

That is, if the new member is a more (less) competent decision maker than the existing group members in both states of nature, it is (not) worth increasing a homogeneous group with an odd number of decision-makers by one member.

However, it can be worth enlarging the group even if the new member is more competent than the existing group members in only one state of nature. That is, if \(x > p^{{\text{I}}}\) and \(y < p^{{{\text{II}}}}\), then when \(p^{{\text{I}}} < p^{{{\text{II}}}}\) there are cases in which it is worth enlarging the group: when \(x - p^{{\text{I}}}\) > \(p^{{{\text{II}}}} -\)y (i.e., the average competency of the additional voter exceeds the average competency of each voter in the original group), or even when \(x - p^{{\text{I}}}\) < \(p^{{{\text{II}}}} -\)y (i.e., the average competency of the additional voter is below that of each voter in the original group), provided the original group is big enough.Footnote 10 Similarly, if \(x < p^{{\text{I}}}\) and \({\text{y}} > p^{{{\text{II}}}}\), then when \(p^{{\text{I}}} > p^{{{\text{II}}}}\) there are cases in which it is worth enlarging the group: when \(p^{{\text{I}}} - x\) < \(y - p^{{{\text{II}}}}\), or even when \(p^{{\text{I}}} - x\) > \(y - p^{{{\text{II}}}}\), provided the original group is big enough.Footnote 11

In the same way, one can identify cases in which it is not worth enlarging the group. That is, if \(x > p^{{\text{I}}} { }\) and \(y < p^{{{\text{II}}}}\), then when \(p^{{\text{I}}} > p^{{{\text{II}}}}\) there are cases in which it is not worth enlarging the group: when \(x - p^{{\text{I}}}\) < \(p^{{{\text{II}}}} -\) y, or even when \(x - p^{{\text{I}}}\) > \(p^{{{\text{II}}}} - y\), provided the original group is big enough. Also, if \(x < p^{{\text{I}}}\) and \(y > p^{{{\text{II}}}}\), then when \(p^{{\text{I}}} < p^{{{\text{II}}}}\) there are cases in which it is not worth enlarging the group: when \(p^{{\text{I}}} - x\) > \(y - p^{{{\text{II}}}}\), or even when \(p^{{\text{I}}} - x\) < \({\text{y}} - p^{{{\text{II}}}}\), provided the original group is big enough.
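To illustrate the role of group size in these cases, consider the hypothetical values \(p^{\text I} = 0.55\), \(p^{\text{II}} = 0.7\), \(x = 0.6\) and \(y = 0.6\), so that \(x > p^{\text I}\), \(y < p^{\text{II}}\), \(p^{\text I} < p^{\text{II}}\) and \(x - p^{\text I} < p^{\text{II}} - y\). The short sketch below simply applies the condition of Theorem 2 to find the smallest group size for which adding such a member is nevertheless worthwhile:

```python
# Hypothetical values: x - pI = 0.05 < pII - y = 0.10, yet enlarging the group
# becomes worthwhile once the original (odd) group is large enough.
pI, pII, x, y = 0.55, 0.7, 0.6, 0.6

k = 1
while (pI * (1 - pI))**k * (x - pI) <= (pII * (1 - pII))**k * (pII - y):
    k += 1  # condition of Theorem 2 not yet satisfied; try a larger group

print(k, 2 * k + 1)  # prints 5 11: an original group of at least 11 members is needed
```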

Note that \(p^{{\text{I}}} { \gtreqless }p^{{{\text{II}}}} \Leftrightarrow p_{2}^{*} { \gtreqless }p_{1}^{*}\), where \(p_{1}^{*} = \frac{{p^{{\text{I}}} }}{{p^{{\text{I}}} + 1 - p^{{{\text{II}}}} }}\) (the probability of making a correct decision given that a voter decides 1) and \(p_{2}^{*} = \frac{{p^{{{\text{II}}}} }}{{p^{{{\text{II}}}} + 1 - p^{{\text{I}}} }}\) (the probability of making a correct decision given that a voter decides − 1). Hence, \(p^{{\text{I}}} > p^{{{\text{II}}}}\) means that a voter who decides 1 is less likely to be correct than a voter who decides − 1, and \(p^{{\text{I}}} < p^{{{\text{II}}}} { }\) means that a voter who decides − 1 is less likely to be correct than a voter who decides 1.Footnote 12 Thus, if the new member is more competent than the existing group members only in state of nature 1 (− 1), it is worth enlarging the group when \(p_{1}^{*} > p_{2}^{*}\) \((p_{2}^{*} > p_{1}^{*} )\), provided that the average competency of the additional voter exceeds the average competency of each voter in the original group or, failing that, provided that the original group is big enough.
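A quick check of this Bayesian reading, again with the illustrative values \(p^{\text I} = 0.55\) and \(p^{\text{II}} = 0.7\) and equiprobable states:

```python
# Posterior probabilities of a correct individual vote, by Bayes' rule with equiprobable states
pI, pII = 0.55, 0.7
p1 = pI / (pI + 1 - pII)    # P(correct | the voter decides 1)  ~ 0.647
p2 = pII / (pII + 1 - pI)   # P(correct | the voter decides -1) ~ 0.609
assert (pI < pII) and (p2 < p1)   # pI < pII goes with p2* < p1*, as stated above
```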

3 Heterogeneous voters

3.1 The basic (symmetric) model

A group N consists of \(n\) voters, where n = 2k + 1 (an odd number). The voters are assumed to be heterogeneous in their decision-making skills. We denote voter \(i\)’s decision by \(x_{i}\), where \(x_{i} \in \left\{ {1, - \;1} \right\}\), and the decision profile by \(x = (x_{1}, x_{2}, \ldots, x_{n})\). Voter \(i\) chooses the correct alternative with probability \(p_{i}\), which reflects his competence, \(p_{i} \in \left( {1/2,1} \right)\). The vector \(p = \left( {p_{1} , \ldots ,p_{n} } \right)\) refers to the n members.

For \(S \subseteq N\), let \(x^{S} \in \{ 1, - 1\}^{N}\) be the profile that satisfies \(x_{i}^{S} = 1\) for every \(i \in S\), and \(x_{j}^{S} = - 1\) for every \(j \in N\backslash S\). By the symmetry assumption, we can assume, without loss of generality, that 1 is the correct decision and − 1 is the incorrect one.

The probability of obtaining the decision profile \(x^{S}\) given the skill vector \(p\) (in other words, the probability that the voters in S decide correctly and the voters in \(N\backslash S\) decide incorrectly) is given by

$$\begin{aligned} g\left( {x^{S} } \right) & = {\text{Prob}}\left( {x_{j} = 1\;\forall j \in S{\text{ and }}x_{j} = - 1\;\forall j \in N\backslash S} \right) \\ & = \mathop \prod \limits_{j \in S} p_{j} \mathop \prod \limits_{j \in N\backslash S} \left( {1 - p_{j} } \right) \\ \end{aligned}$$

Let \(\overline{x}^{S} \in \{ 1, - 1\}^{N}\) be the profile where \(\overline{x}_{i}^{S} = - 1\) for every \(i \in S,\) and \(\overline{x}_{j}^{S} = 1\) for every \(j \in N\backslash S\).Footnote 13 The probability of obtaining a decision profile \(\overline{x}^{S}\) given the skill vector \(p\) is given by

$${\text{g}}\left( {\overline{x}^{S} } \right) = \mathop \prod \limits_{j \in N\backslash S} p_{j} \mathop \prod \limits_{j \in S} \left( {1 - p_{j} } \right)$$

We use the following notation:

$$S_{a}^{T} = \left\{ {S:S \subseteq T{ },\left| S \right| = a} \right\}$$
$$S_{a \uparrow }^{T} = \left\{ {S:S \subseteq T{ },\left| S \right| > a} \right\}.$$

Let \({\uppi }_{{{\text{SMR}}}}^{N} \left( p \right)\) denote the probability that a group chooses the correct alternative when employing SMR. Formally:

$$\pi_{{{\text{SMR}}}}^{N} \left( p \right) = \mathop \sum \limits_{{S \in S_{k \uparrow }^{N} }} g\left( {x^{S} } \right).$$

Consider now a committee with n + 1 members and denote the set of members by NN. Assume that the skill vector is unknown. Let \({\uppi }_{{{\text{SMRE}}}}^{NN} \left( p \right)\) denote the probability that a group consisting of \(n + 1\) members chooses the correct alternative when employing SMRE. Formally:

$$\pi_{{{\text{SMRE}}}}^{NN} \left( p \right) = \mathop \sum \limits_{{S \in S_{k + 1 \uparrow }^{NN} }} g\left( {x^{S} } \right) + \frac{1}{2}\mathop \sum \limits_{{S \in S_{k + 1}^{NN} }} g\left( {x^{S} } \right).$$

We can write this probability as

$$\begin{gathered} \frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ {\mathop \sum \limits_{{S \in S_{k + 1 \uparrow }^{{NN{ \setminus }\left\{ l \right\}}} }} \left( {p_{l} g\left( {x^{S} } \right) + (1 - p_{l} )g\left( {x^{S} } \right)} \right)} \right. \hfill \\ \left. {\;\;\; + \mathop \sum \limits_{{S \in S_{k + 1}^{{NN{ \setminus }\left\{ l \right\}}} }} \left( {p_{l} g\left( {x^{S} } \right) + \frac{1}{2}(1 - p_{l} )g\left( {x^{S} } \right)} \right) + { }\mathop \sum \limits_{{S \in S_{k}^{{NN{ \setminus }\left\{ l \right\}}} }} \frac{1}{2}p_{l} g\left( {x^{S} } \right)} \right\}. \hfill \\ \end{gathered}$$

Now assume that a set N of n members is chosen uniformly at random from \(NN\) (equivalently, one member of NN is dropped at random) and that the remaining members use SMR. Let \(E{\uppi }_{{{\text{SMR}}}}^{N} \left( p \right)\) denote the expected probability that this randomly drawn committee reaches a correct decision.

Theorem 3:

\(E{\uppi }_{{{\text{SMR}}}}^{N} \left( p \right) = {\uppi }_{{{\text{SMRE}}}}^{NN} \left( p \right)\).

Proof:

$$\begin{aligned} E\pi_{{{\text{SMR}}}}^{N} \left( p \right) & = \frac{1}{n + 1}\mathop \sum \limits_{{N \in S_{n}^{NN} }} \mathop \sum \limits_{{S \in S_{k \uparrow }^{N} }} g\left( {x^{S} } \right) \\ & = { }\frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ {\mathop \sum \limits_{{S \in S_{k + 1}^{{NN{ \setminus }\left\{ l \right\}}} }} g\left( {x^{S} } \right) + \mathop \sum \limits_{{S \in S_{k + 1 \uparrow }^{{NN{ \setminus }\left\{ l \right\}}} }} g\left( {x^{S} } \right)} \right\} \\ \end{aligned}$$
$$\begin{gathered} E{\uppi }_{{{\text{SMR}}}}^{N} \left( p \right) - {\uppi }_{{{\text{SMRE}}}}^{NN} \left( p \right) \hfill \\ = \;\frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ {\mathop \sum \limits_{{S \in S_{k + 1}^{{NN{ \setminus }\left\{ l \right\}}} }} g\left( {x^{S} } \right)\left( {1 - p_{l} - \frac{1}{2}\left( {1 - p_{l} } \right)} \right) - { }\mathop \sum \limits_{{S \in S_{k}^{{NN{ \setminus }\left\{ l \right\}}} }} \frac{1}{2}p_{l} g\left( {x^{S} } \right)} \right\} \hfill \\ = \;\frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ {\mathop \sum \limits_{{S \in S_{k + 1}^{{NN{ \setminus }\left\{ l \right\}}} }} \frac{1}{2}g\left( {x^{S} } \right)\left( {1 - p_{l} } \right) - \mathop \sum \limits_{{S \in S_{k}^{{NN{ \setminus }\left\{ l \right\}}} }} \frac{1}{2}p_{l} g\left( {x^{S} } \right)} \right\} = 0. \hfill \\ \end{gathered}$$

Q.E.D.Footnote 14
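Theorem 3 can also be verified by direct enumeration. The sketch below is an illustrative check only (the skill vector is randomly generated and all names are ours): it computes the exact probability of a correct decision for every committee obtained by dropping one member of NN, averages these probabilities, and compares the result with SMRE applied to the full even-sized committee.

```python
from itertools import combinations
from math import prod, isclose
import random

def g(correct, skills):
    # Probability that exactly the members in `correct` vote correctly
    return prod(p if i in correct else 1 - p for i, p in enumerate(skills))

def smr(skills):
    # Simple majority rule for an odd number of heterogeneous members
    n = len(skills)
    return sum(g(set(S), skills)
               for r in range(n // 2 + 1, n + 1)
               for S in combinations(range(n), r))

def smre(skills):
    # Majority rule with a uniformly random tie-break for an even number of members
    n = len(skills)
    total = 0.0
    for r in range(n // 2, n + 1):
        weight = 0.5 if r == n // 2 else 1.0   # a tie (exactly half correct) counts one-half
        total += weight * sum(g(set(S), skills) for S in combinations(range(n), r))
    return total

random.seed(1)
skills = [random.uniform(0.5, 1.0) for _ in range(6)]                   # NN: six heterogeneous members
committees = [skills[:l] + skills[l + 1:] for l in range(len(skills))]  # drop one member at a time
expected_smr = sum(smr(s) for s in committees) / len(skills)            # member dropped uniformly at random
assert isclose(expected_smr, smre(skills))                              # Theorem 3
```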

3.2 The general (asymmetric) model

In this section, the voters are again assumed to be heterogeneous in their decision-making skills, but their skills are parameterized by two state-dependent probabilities. Specifically, each voter i chooses the correct alternative 1 (− 1) with probability \(p_{i}^{{\text{I}}}\) (\(p_{i}^{{{\text{II}}}}\)) in state of nature 1 (− 1). We assume that the average of each voter’s probabilities in the two states of nature is greater than one-half.

The probability of obtaining a decision profile \(x^{S}\) given the skill vector \(p\) in state 1 is given by

$$\begin{aligned} g\left( {x^{S} :1} \right) & = {\text{Prob}}\left( {x_{j} = 1\;\forall j \in S{\text{ and }}x_{j} = - 1\;\forall j \in N\backslash S:1} \right) \\ & = \mathop \prod \limits_{j \in S} p_{j}^{{\text{I}}} \mathop \prod \limits_{j \in N\backslash S} \left( {1 - p_{j}^{{\text{I}}} } \right) \\ \end{aligned}$$

and in state − 1 by

$$\begin{aligned} g\left( {x^{S} : - 1} \right) & = {\text{Prob}}\left( {x_{j} = 1\;\forall j \in S{\text{ and }}x_{j} = - 1\;\forall j \in N\backslash S: - 1} \right) \\ & = \mathop \prod \limits_{j \in N\backslash S} p_{j}^{{{\text{II}}}} \mathop \prod \limits_{j \in S} \left( {1 - p_{j}^{{{\text{II}}}} } \right) \\ \end{aligned}$$

Let \({\uppi }_{{{\text{SMR}}}}^{N} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)\) denote the probability that a group N consisting of \(n\) members chooses the correct alternative when employing SMR. Formally:

$$\pi_{{{\text{SMR}}}}^{N} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right) = \frac{1}{2}\mathop \sum \limits_{{S \in S_{k \uparrow }^{N} }} \left( {g\left( {x^{S} :1} \right) + g\left( {\overline{x}^{S} : - 1} \right)} \right).$$

Let \({\uppi }_{{{\text{SMRE}}}}^{NN} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)\) denote the probability that a group NN consisting of \(n + 1\) members chooses the correct alternative when employing SMRE. Formally:

$$\begin{aligned} \pi_{{{\text{SMRE}}}}^{NN} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right) & = \mathop \sum \limits_{{S \in S_{k + 1 \uparrow }^{NN} }} \frac{1}{2}\left( {g\left( {x^{S} :1} \right) + g\left( {\overline{x}^{S} : - 1} \right)} \right) \\ & \;\;\;\; + \frac{1}{2}\mathop \sum \limits_{{S \in S_{k + 1}^{NN} }} \frac{1}{2}\left( {g\left( {x^{S} :1} \right) + g\left( {\overline{x}^{S} : - 1} \right)} \right). \\ \end{aligned}$$

We can express this probability as

$$\begin{aligned} \frac{1}{n + 1}\mathop \sum \limits_{l \in NN} & \left\{ \frac{1}{2}\mathop \sum \limits_{S \in S_{k + 1 \uparrow }^{NN \setminus \{ l \}}} \left( p_{l}^{\text{I}} g\left( x^{S} :1 \right) + \left( 1 - p_{l}^{\text{II}} \right)g\left( \overline{x}^{S} : - 1 \right) + \left( 1 - p_{l}^{\text{I}} \right)g\left( x^{S} :1 \right) + p_{l}^{\text{II}} g\left( \overline{x}^{S} : - 1 \right) \right) \right. \\ & \quad + \mathop \sum \limits_{S \in S_{k + 1}^{NN \setminus \{ l \}}} \frac{1}{2}\left( p_{l}^{\text{I}} g\left( x^{S} :1 \right) + p_{l}^{\text{II}} g\left( \overline{x}^{S} : - 1 \right) + \frac{1}{2}\left( \left( 1 - p_{l}^{\text{I}} \right)g\left( x^{S} :1 \right) + \left( 1 - p_{l}^{\text{II}} \right)g\left( \overline{x}^{S} : - 1 \right) \right) \right) \\ & \quad \left. + \mathop \sum \limits_{S \in S_{k}^{NN \setminus \{ l \}}} \frac{1}{4}\left( p_{l}^{\text{I}} g\left( x^{S} :1 \right) + p_{l}^{\text{II}} g\left( \overline{x}^{S} : - 1 \right) \right) \right\}. \\ \end{aligned}$$

Now assume that a set \(N \subseteq NN\) of n members is chosen uniformly at random from \(NN\) and that its members employ SMR. Let \(E{\uppi }_{{{\text{SMR}}}}^{N} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right)\) denote the expected probability.

Theorem 4:

$$E\pi_{{{\text{SMR}}}}^{N} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right) = \pi_{{{\text{SMRE}}}}^{NN} \left( {p^{{\text{I}}} ,p^{{{\text{II}}}} } \right).$$

Proof:

$$\begin{aligned} \pi_{\text{SMR}}^{N} \left( p^{\text{I}} ,p^{\text{II}} \right) & = \frac{1}{2}\mathop \sum \limits_{S \in S_{k \uparrow }^{N}} \left( g\left( x^{S} :1 \right) + g\left( \overline{x}^{S} : - 1 \right) \right) \\ E\pi_{\text{SMR}}^{N} \left( p^{\text{I}} ,p^{\text{II}} \right) & = \frac{1}{n + 1}\mathop \sum \limits_{N \in S_{n}^{NN}} \frac{1}{2}\mathop \sum \limits_{S \in S_{k \uparrow }^{N}} \left( g\left( x^{S} :1 \right) + g\left( \overline{x}^{S} : - 1 \right) \right) \\ & = \frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \frac{1}{2}\left\{ \mathop \sum \limits_{S \in S_{k + 1}^{NN \setminus \{ l \}}} \left( g\left( x^{S} :1 \right) + g\left( \overline{x}^{S} : - 1 \right) \right) + \mathop \sum \limits_{S \in S_{k + 1 \uparrow }^{NN \setminus \{ l \}}} \left( g\left( x^{S} :1 \right) + g\left( \overline{x}^{S} : - 1 \right) \right) \right\} \\ \end{aligned}$$
$$\begin{aligned} & E\pi_{\text{SMR}}^{N} \left( p^{\text{I}} ,p^{\text{II}} \right) - \pi_{\text{SMRE}}^{NN} \left( p^{\text{I}} ,p^{\text{II}} \right) \\ & = \frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ \mathop \sum \limits_{S \in S_{k + 1}^{NN \setminus \{ l \}}} \frac{1}{2}\left( g\left( x^{S} :1 \right)\left( 1 - p_{l}^{\text{I}} - \frac{1}{2}\left( 1 - p_{l}^{\text{I}} \right) \right) + g\left( \overline{x}^{S} : - 1 \right)\left( 1 - p_{l}^{\text{II}} - \frac{1}{2}\left( 1 - p_{l}^{\text{II}} \right) \right) \right) - \mathop \sum \limits_{S \in S_{k}^{NN \setminus \{ l \}}} \frac{1}{4}\left( p_{l}^{\text{I}} g\left( x^{S} :1 \right) + p_{l}^{\text{II}} g\left( \overline{x}^{S} : - 1 \right) \right) \right\} \\ & = \frac{1}{n + 1}\mathop \sum \limits_{l \in NN} \left\{ \mathop \sum \limits_{S \in S_{k + 1}^{NN \setminus \{ l \}}} \frac{1}{4}\left( g\left( x^{S} :1 \right)\left( 1 - p_{l}^{\text{I}} \right) + g\left( \overline{x}^{S} : - 1 \right)\left( 1 - p_{l}^{\text{II}} \right) \right) - \mathop \sum \limits_{S \in S_{k}^{NN \setminus \{ l \}}} \frac{1}{4}\left( p_{l}^{\text{I}} g\left( x^{S} :1 \right) + p_{l}^{\text{II}} g\left( \overline{x}^{S} : - 1 \right) \right) \right\} = 0. \\ \end{aligned}$$

Q.E.D.
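As with Theorem 3, the state-dependent result can be checked by enumeration. The following sketch is illustrative only (the randomly drawn skill vectors and all names are ours): it averages the SMR probability over the n + 1 committees obtained by dropping one member and compares it with SMRE for the full committee, state by state.

```python
from itertools import combinations
from math import prod, isclose
import random

def g(correct, skills):
    # Probability that exactly the members in `correct` vote correctly in the relevant state
    return prod(p if i in correct else 1 - p for i, p in enumerate(skills))

def majority_prob(skills, tie_weight=None):
    # Probability that more than half of the members vote correctly;
    # if tie_weight is given, a tie (exactly half correct) counts with that weight
    n = len(skills)
    total = sum(g(set(S), skills)
                for r in range(n // 2 + 1, n + 1)
                for S in combinations(range(n), r))
    if tie_weight is not None and n % 2 == 0:
        total += tie_weight * sum(g(set(S), skills) for S in combinations(range(n), n // 2))
    return total

def smr2(pI, pII):
    # SMR with equiprobable states: average of the two state-conditional probabilities
    return 0.5 * (majority_prob(pI) + majority_prob(pII))

def smre2(pI, pII):
    # SMRE: same average, with ties broken uniformly at random
    return 0.5 * (majority_prob(pI, 0.5) + majority_prob(pII, 0.5))

def drop(v, l):
    # Committee obtained by removing member l
    return v[:l] + v[l + 1:]

random.seed(2)
pI = [random.uniform(0.5, 1.0) for _ in range(6)]   # competence in state 1
pII = [random.uniform(0.5, 1.0) for _ in range(6)]  # competence in state -1

expected_smr = sum(smr2(drop(pI, l), drop(pII, l)) for l in range(6)) / 6
assert isclose(expected_smr, smre2(pI, pII))   # Theorem 4
```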

4 Summary

In this paper, we have shown that the probability that an odd-numbered set of voters who are homogeneous reaches a correct decision when employing SMR is the same as the probability that an even-numbered set (one more) of homogeneous voters reaches a correct decision when employing SMRE.

Second, we have shown that in the absence of information on voters’ heterogeneous skills, simple majority rule is the relevant rule even for a group with an even number of voters. We show that the probability that an even-numbered set of heterogeneous voters reaches a correct decision when employing SMRE is the same as the expected probability that an odd-numbered set (one less) of heterogeneous voters reaches a correct decision when employing SMR.

These two results hold not only for the basic (symmetric) framework but also in a more general (asymmetric) framework where the voters’ skills are dependent on the state of nature; i.e., each of the members is associated with two probabilities of voting correctly that correspond to the two possible states of nature.