9.1 Condorcet’s Jury Theorem

While modern social choice theory deals mostly with elections and other opinion aggregation contexts, the earlier results of the theory focus on a somewhat different setting, viz. jury decision making. Marquis de Condorcet dealt with the problem of amalgamating the opinions of several jurors into a just or correct collective decision or verdict (McLean and Urken 1995). More specifically, Condorcet was looking for an answer to the following question: assuming that each individual has a given probability of being right, what is the probability that the majority of a group consisting of such individuals is right? Although related to modern social choice theory, this question invokes considerations that are absent in modern theorizing, viz. the notion that there is a correct decision and that collective decision making procedures are to varying degrees capable of resulting in those correct decisions.

Condorcet’s starting point is, thus, that every individual has a fixed probability of being right on the issue to be decided. Whether this probability is determined on the basis of success rates in similar previous decision settings or on the basis of formal or practical training or some other factors is left open. The simplest situation would seem to be one in which each individual has an identical probability p of being right. The probability could be interpreted as the relative frequency of right “yes” or “no” answers in a long sequence of questions for which the correctness of the answers can be determined. Let us focus on a question that calls for either a “yes” or a “no” answer and assume that the number of individuals who have given the right answer is x. To simplify the setting even further, let us assume that the persons vote independently of each other. In other words, the voters make their decisions without consulting each other or knowing each other’s decisions. Under these assumptions the probability that a given set of exactly x out of the n individuals gives the right answer, while the remaining \(n-x\) individuals do not, is:

$$\begin{aligned} f(x) = p^{x}(1-p)^{n-x}. \end{aligned}$$

Let P denote the probability that the group using the simple majority rule gives the right answer. In other words, P is the probability that more than \(50\%\) of the group members will vote “yes” (“no”, respectively) when “yes” (“no”) is the right answer. For any given majority size x, the probability that exactly x individuals are right equals the number of different ways of picking x individuals out of n times the probability f(x) that a given set of x individuals is right while the others are wrong. P is then the sum of these products over all majority sizes. In symbols,

$$\begin{aligned} P = \sum _{x=n'}^{n}{n \atopwithdelims ()x} p^{x} (1-p)^{n-x}. \end{aligned}$$
(9.1)

Here \(n' = (n+1)/2\). With the exception of those values of p which are very close to 1 or 0, the distribution of the number of correct answers can be approximated by the normal distribution with mean np and variance \(np(1-p)\). Thus, we obtain

$$\begin{aligned} P = 1 - G \left( \frac{n/2 - np}{\sqrt{np(1-p)}} \right) = G \left( \frac{p - 0.5}{\sqrt{p(1-p)/n}} \right) . \end{aligned}$$

Here G(y) is the area under the density curve of the standard normal distribution from \(-\infty \) to y. Condorcet’s jury theorem can now be stated (Miller 1986).

Theorem 1

(Condorcet) The probability P of the majority being right depends on the individuals’ probability p of being right as follows:

  1. If \(0.5 < p < 1\) and \(n > 2\), then \(P > p\), P increases with n and, when n approaches infinity, P converges to 1.

  2. If \(0 < p < 0.5\) and \(n > 2\), then \(P < p\), P decreases with the increase of n and P approaches 0 when n approaches infinity.

  3. If \(p = 0.5\), then \(P = 0.5\), for all values of n.

Table 9.1 gives an idea of how fast P approaches 1 when n increases for various values of p.

Table 9.1 Probability of being right. Source Miller (1986)
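Entries of this kind are straightforward to reproduce. The following minimal sketch (in Python, not part of the original text) computes the majority competence both exactly from Eq. (9.1) and via the normal approximation; the function names and the chosen values of n and p are illustrative only.

```python
# Exact majority competence from Eq. (9.1) and its normal approximation.
# Assumes an odd group size n and independent voters with common competence p.
from math import comb, erf, sqrt

def majority_competence_exact(p: float, n: int) -> float:
    """Probability that more than half of n voters are right, Eq. (9.1)."""
    n_prime = (n + 1) // 2
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n_prime, n + 1))

def majority_competence_normal(p: float, n: int) -> float:
    """Normal approximation G((p - 0.5) / sqrt(p(1 - p)/n))."""
    standard_normal_cdf = lambda y: 0.5 * (1 + erf(y / sqrt(2)))
    return standard_normal_cdf((p - 0.5) / sqrt(p * (1 - p) / n))

for n in (3, 9, 25, 101):            # illustrative group sizes
    for p in (0.55, 0.6, 0.75):      # illustrative individual competences
        print(n, p,
              round(majority_competence_exact(p, n), 3),
              round(majority_competence_normal(p, n), 3))
```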

The first two parts of Condorcet’s jury theorem contain two statements: one pertaining to the probability of the majority being right vis-à-vis the individual probability of being right, and the other indicating the limiting value of the probability of the majority being right. The former is called the non-asymptotic and the latter the asymptotic part of the theorem. The non-asymptotic part can be proven by showing that

$$\begin{aligned} p < \sum _{i=n'}^{n}{n \atopwithdelims ()i} p^{i}(1-p)^{n-i} \end{aligned}$$

for groups of any size n. Here \(n'=(n+1)/2\) and by assumption n is odd. Similarly, the asymptotic part, which states that the right hand side of the preceding inequality approaches unity as the group size approaches infinity, follows from the observation that the limiting value of the sum is, indeed, unity (Ben-Yashar and Paroush 2000).
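As an illustrative check of the non-asymptotic part for the smallest admissible group size, take \(n = 3\), so that \(n' = 2\). The right hand side of the inequality then equals \(3p^{2}(1-p) + p^{3} = 3p^{2} - 2p^{3}\), and

$$\begin{aligned} 3p^{2} - 2p^{3} - p = -p(2p-1)(p-1) > 0 \end{aligned}$$

whenever \(0.5< p < 1\), so the majority competence indeed exceeds p in this case.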

The message of the theorem is clear: the majority is more reliable than the average citizen if the latter is more often right than wrong and if the probability of being right is the same for all citizens. Indeed, the majority becomes omniscient when the number of individuals increases. The assumption that the probability of each citizen’s being right is larger than 1/2 is essential: should the probability be strictly less than 1/2, then P approaches 0, i.e. it becomes certain that the majority is wrong. The applicability of Condorcet’s jury theorem is, however, seriously limited by the assumption that each individual has the same competence, i.e. the same probability of being right.

Various generalizations of the above theorem have been discussed in the modern literature. Of particular interest is one proven by Owen et al. (1989). Suppose that each individual i is characterized by probability \(p_i\) of being right. Let \(\bar{p}= \sum _{i} p_{i}/n\), i.e. \(\bar{p}\) is the average competence of the individuals or the average probability of their being right. If now \(1/2< \bar{p} < 1\) and \(n > 2\), then \(P > \bar{p}\) and P approaches 1 as n approaches infinity. In this theorem the individuals do not necessarily have identical competences. Furthermore, they are not all required to be more often right than wrong. What is assumed instead is that the arithmetic mean of the individual competences is larger than 1/2.

The non-asymptotic part of Condorcet’s theorem, thus, holds in the generalized setting in the sense that the competence of the majority always exceeds the average competence. In another sense, namely that the majority should be more competent than each of the individuals, the theorem does not always hold. Consider, for example, a group consisting of three individuals with \(p_1 = 0.6\), \(p_2 = 0.7\), and \(p_3 = 0.9\). Here we get:

$$\begin{aligned} P&= 0.6\times 0.7\times 0.1 + 0.6\times 0.3\times 0.9 \nonumber \\&\quad + 0.4\times 0.7\times 0.9 + 0.6\times 0.7\times 0.9 = 0.834. \end{aligned}$$
(9.2)

Thus, the majority competence exceeds the average competence (0.73), but falls short of that of one of the individuals. On the other hand, in a three-person setting where \(p_1=0.6\), \(p_2 = 0.7\) and \(p_3 = 0.7\), the majority competence exceeds that of the most competent individual since \(P = 0.742\). In other words, the majority can be, but is not always, more competent than every individual when the average competence exceeds 1/2.

This result significantly qualifies Dahl’s contention (Dahl 1970, 34):

... whenever you believe that 1 is significantly more competent than 2 or 3 to make a decision that will seriously affect you, you will want the decision to be made by 1. You will not want it to be made by 2 or 3, nor by any majority of 1, 2, and 3.

Suppose that person 1’s competence is 0.8, person 2’s 0.7 and person 3’s 0.7 (Ben-Yashar and Paroush 2000, 192). Person 1 is, thus, significantly more competent than 2 and 3. Yet, \(P=0.826\) which exceeds person 1’s competence. Hence, pace Dahl, one might well prefer the decision to be made by a majority of the three persons rather than by the most competent person 1.

The generalized Condorcet theorem demonstrates that one should not perhaps be overly concerned about the use of referenda in matters which in other times and places may have been decided by experts, e.g. joining military or economic alliances. Adding a sufficient number of minimally competent decision makers improves the quality of decision making in the sense that the competence of the majority exceeds the average individual level of competence. One should observe, though, that if the added decision makers are just barely competent, they may lower the prevailing average competence. In any case, the non-asymptotic part of Condorcet’s jury theorem is not always valid in the sense that the majority would be more competent than any individual. In fact, there is a result which states under which conditions this non-asymptotic part fails (see Nitzan and Paroush 1982 as well as Shapley and Grofman 1984).

Theorem 2

Let there be an odd number n of voters who vote independently of each other. Assume that \(p_{i} > 0.5\) for all voters and that the voters are labeled in non-increasing order of competence, i.e. \(p_{i} \ge p_{j}\) if \(i<j\). The non-asymptotic part of Condorcet’s jury theorem does not hold if

$$\begin{aligned} \frac{p_{1}}{1-p_{1}} > \prod _{i=2}^{n} \frac{p_{i}}{1-p_{i}}. \end{aligned}$$

The expression \(p_{i}/(1-p_{i})\) indicates the odds regarding voter i’s competence. By assumption it is larger than unity for all voters, with values increasing with the competence of the voter. The result thus states that if the most competent voter has higher odds than the product of the odds of all the other voters, then voter 1 alone is more competent than the group using the majority rule.
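For illustration, consider a hypothetical three-member group with \(p_1 = 0.9\) and \(p_2 = p_3 = 0.6\). The odds of voter 1 are 9, which exceeds the product \(1.5 \times 1.5 = 2.25\) of the other voters’ odds. Indeed, the majority competence in this case is \(0.9\times 0.6\times 0.4 + 0.9\times 0.4\times 0.6 + 0.1\times 0.6\times 0.6 + 0.9\times 0.6\times 0.6 = 0.792\), which falls short of \(p_1 = 0.9\).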

9.2 Relaxing the Independence Assumption

One of the assumptions underlying Condorcet’s jury theorem is that the voters act independently of each other. Intuitively this is somewhat implausible. More often than not in politics people take their cues from other people’s actions and plans. It is, however, difficult to find an alternative modeling assumption that would at the same time be more plausible in taking into account the intuitively frequent interdependencies of people’s behaviors and be general enough to cover a wide variety of voting situations. Nevertheless, it is important to get even a rough idea of the importance of the independence assumption. Some results achieved in system reliability theory are pertinent here.

This theory aims at estimating the probabilities for the proper functioning of systems under the assumption that a certain portion of their components breaks down or otherwise fails. The majority systems model is constructed assuming that the system is composed of several components so that it works if and only if the majority of its components works. Given that each component has a fixed probability of working properly, we can analyze the reliability of majority systems under various assumptions concerning the interdependence of components (Boland 1989; Boland et al. 1989). The components can be viewed as voters or jurors and the proper working of a component as the event that the juror is right.

Let us assume that there is an odd number \(2m + 1\) of components. We label them \(Y, X_1, \ldots , X_{2m}\). For our purposes it is convenient to interpret Y as a prominent individual or opinion leader whose lead is followed by several other individuals \(X_i\). Each component is interpreted as a dichotomous variable so that e.g. \(X_i = 1\) means that the component \(X_i\) works properly, while \(X_i = 0\) means that it fails. We assume that \(p(Y=1) = p(X_i=1) = p\), for all \(i=1,\ldots ,2m\). In other words, every component has the same probability p of working properly or every juror has the same probability of being right. Let \(q=1-p\). The conditional probability of each \(X_i\) working properly, given that Y does, is:

$$ p(X_i=1|Y=1) = p + rq $$

and working properly, given that Y fails, is:

$$ p(X_i=1|Y=0) = p - rp, $$

where \(i = 1, \ldots , 2m.\)

The conditional probabilities are, thus, assumed to be identical for each \(X_i\). The parameter r measures the interdependence or correlation between \(X_i\), on the one hand, and Y, on the other. Obviously, with \(r = 1\), the probability that \(X_i\) gets the value 1 when Y gets the value 1, is 1. On the other hand, when \(r = 0\), the conditional probabilities of \(X_i\) equal their absolute probabilities, i.e. they are independent of Y. The parameter r thus allows us to describe positive association between \(X_i\) and Y. It is noteworthy, however, that this model cannot accommodate negative dependence. Thus, we can deal with voters who imitate each other, but not with voters who wish to “cancel” each other’s votes.

One of Boland’s results states that the probability of the majority of components working properly decreases with the increase of correlation. In other words, the larger r, the larger the probability of the majority system failure. Applying this result to voting contexts we can argue that the probability that the majority is right decreases when the dependence of voters on one “leader” (variable Y) increases. However, as long as the correlation between the voters and the leader is less than 1, the probability that the majority is right exceeds that of a single voter. Hence, in Boland’s model the interdependence between voters does not affect the essence of Condorcet’s theorem.
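A rough numerical feel for this result can be obtained from the following minimal sketch (in Python, my own construction rather than Boland’s). It conditions on the leader Y being right or wrong and uses the binomial distribution for the 2m followers; the function names and parameter values are illustrative assumptions.

```python
# Majority reliability in the opinion-leader model: a leader Y and 2m followers,
# each with competence p; follower competence is p + r(1-p) when Y is right and
# p - r*p when Y is wrong.
from math import comb

def binomial_tail(k_min: int, n: int, q: float) -> float:
    """Probability of at least k_min successes in n independent trials."""
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(k_min, n + 1))

def majority_reliability(p: float, r: float, m: int) -> float:
    """Probability that a majority of the 2m+1 components works properly."""
    p_given_right = p + r * (1 - p)   # follower competence when Y is right
    p_given_wrong = p - r * p         # follower competence when Y is wrong
    # If Y is right, at least m of the 2m followers must also be right;
    # if Y is wrong, at least m + 1 of them must be right.
    return (p * binomial_tail(m, 2 * m, p_given_right)
            + (1 - p) * binomial_tail(m + 1, 2 * m, p_given_wrong))

for r in (0.0, 0.2, 0.5, 0.9):
    print(r, round(majority_reliability(p=0.6, r=r, m=2), 3))
```

With \(r = 0\) the value coincides with the independent-voter case, and it decreases towards p as r grows, in line with Boland’s result.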

A more general approach to modelling interdependence is developed by Berg, who replaces the binomial distribution with the Pólya-Eggenberger or beta-binomial distribution (Berg 1993). This distribution is a generalization of the binomial one. In the model a parameter h is introduced so that \(h/(h+1)\) is the correlation between any two voters. Thus, h can be interpreted as a dependence parameter.

Table 9.2, based on Berg (1993), reports the variation of the majority competence for small values of h. We see that, at small absolute values of the interdependence parameter, the majority competence increases if the interdependence is negative, whereas it decreases if the dependence is positive. Berg shows that this is the case whenever \(p > 1/2\) (Berg 1993, 92–93). Thus, we may conclude that positive interdependence between voters lowers the majority competence from its value under the independence assumption.

Table 9.2 The majority competence (mc) for individual competence value \(p = 0.6\) for varying group sizes and dependence values (Berg 1993)

Despite this observation the main content of Condorcet’s jury theorem remains intact also under beta-binomial distributions. Thus, with \(p > 1/2\) and for a fixed value of h, the probability that the majority decision is right increases with the number of voters. Moreover, whenever \(1/2< p < 1\) the majority competence always exceeds the individual competence p.
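The following sketch indicates how majority competence values of this kind can be computed for positive dependence. The parameterization \(\alpha = p/h\) and \(\beta = (1-p)/h\) is an assumption chosen so that the mean individual competence is p and the pairwise correlation between votes is \(h/(h+1)\), as stated above; Berg’s own notation may differ, and negative dependence is not covered by this simple version.

```python
# Majority competence under a beta-binomial (Polya-Eggenberger) vote distribution.
# Assumes the parameterization alpha = p/h, beta = (1-p)/h with h > 0.
from math import comb, exp, lgamma

def log_beta(a: float, b: float) -> float:
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k: int, n: int, a: float, b: float) -> float:
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def majority_competence(p: float, h: float, n: int) -> float:
    """Probability that more than half of n voters are right, dependence h > 0."""
    a, b = p / h, (1 - p) / h
    return sum(beta_binomial_pmf(k, n, a, b) for k in range(n // 2 + 1, n + 1))

for h in (0.01, 0.05, 0.1):       # illustrative small dependence values
    print(h, round(majority_competence(p=0.6, h=h, n=5), 3))
```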

The preceding discussion on the variations of Condorcet’s jury theorem reveals that even in contexts where one can meaningfully speak about correct and incorrect decisions the group choice using majority rule is not necessarily inferior to expert choice, unless the expert is perfect and the group consists of individuals who are not even minimally competent. The main conclusion, however, is that Condorcet’s jury theorem is relatively robust under modifications regarding the independence of voters. What is perhaps of more interest is that positive association between voters does not increase the majority competence, but rather diminishes it from the level that is achieved by independent voters.

9.3 Optimal Jury Decision Making

Although the setting analyzed in the preceding sections pertains to making correct decisions and thus seems somewhat distant from political decision making, where subjective values play a major role, it is well worth studying. If significant results with regard to optimal decision making principles can be found in these settings, we might then try to introduce additional political realism into the model and possibly end up with feasible solutions to the design of political institutions. One of the potentially significant results deals with principles of designing optimal jury decision procedures under the assumption that jurors have different degrees of expertise in matters to be decided.

In Theorem 2 we have already touched upon a corollary of the most important result in this genre. This corollary states the conditions under which the most competent individual is more competent than the majority of voters. In other words, the result tells us in an abstract manner when it is advisable - from a consequentialist point of view - to bestow the decision making authority upon a single individual rather than the group, provided that the latter makes decisions using the majority rule. The theorem follows from a deeper result which pertains to maximizing the probability of making correct decisions by a group of voters. The result is due to Nitzan and Paroush (1982). Before spelling it out, let us consider an example.

Suppose we have a group of five individuals with individual competences: 0.9, 0.8, 0.8, 0.6, 0.6. The average competence then is 0.74. The majority competence, in turn, is 0.896, which clearly exceeds the average, but falls slightly short of the most competent individual. What happens when we increase the weight of the most competent individual? In weighted voting each voter is assigned a weight that reflects his relative influence on the voting outcomes. Typically weights are normalized so that each voter i gets the weight \(w_i\) which behaves like a probability, i.e. \(\sum _{i} w_{i} = 1\) and \(0 \le w_{i} \le 1\). In order for a motion to pass, it has to be supported by voters whose weights sum to a number that exceeds a given quota of weights, e.g. \(50\%\) of total weights. If the quota is set at \(50\% \), as often is the case, then we are dealing with weighted majority rule.

To continue our example, let the first individual with competence value of 0.9 be assigned the weight of 0.4, while the other voters have equal weights of 0.15 each. Suppose that the required quota is \(50\%\) of the total weight. We notice that now any pair that the most competent individual forms with some other individual exceeds the quota. On the other hand, not all groups consisting of three individuals exceed the weight quota. Computing the competence of the weighted majority voting results in the value 0.917, which exceeds that of the most competent individual. So, it seems that increasing the weight of the most competent individual increases the group’s competence if the group makes its decisions using the weighted majority rule. This is intuitively plausible. But is there a general method for assigning weights to individuals that results in the best achievable group competence? There is, and it is provided by a theorem of Nitzan and Paroush (1982) (see also Grofman et al. 1983 as well as Shapley and Grofman 1984).

Theorem 3

Given a group of minimally competent individuals (i.e. \(p_i > 0.5\), for all i), the decision procedure that maximizes the probability that the group decision is right is weighted majority rule where each individual i is assigned a weight

$$\begin{aligned} w_i = \log \left( \frac{p_i}{1-p_i}\right) . \end{aligned}$$

In other words, weighted majority voting with weights assigned to individuals in proportion to the logarithm of their competence odds is the answer to the above question. Since the odds are larger than unity for all voters by assumption, the logarithms in question are positive real numbers.

In our example, the odds of the voters are: \(0.9/0.1= 9\), \(0.8/0.2= 4\), \(0.8/0.2 = 4\), \(0.6/0.4 = 1.5 \) and \(0.6/0.4= 1.5\). Since the Briggs (base-10) logarithms of these numbers are: 0.954, 0.602, 0.602, 0.176 and 0.176, the normalized optimal weights are 0.380 for individual 1, 0.240 for individuals 2 and 3 and 0.070 for individuals 4 and 5. Computing the group competence under the assumption that this weighted majority rule is being used in decision making, we get the group competence value 0.931, above all the values discussed above.
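The group competences used in this section can be reproduced by brute-force enumeration over all configurations of right and wrong voters. The following sketch (in Python, not from Nitzan and Paroush) evaluates an arbitrary weighted majority rule in this way; the helper name is illustrative.

```python
# Competence of a weighted majority rule, computed by enumerating all 2^n
# configurations of right and wrong voters.
from itertools import product
from math import log10

def weighted_majority_competence(probs, weights):
    """Probability that the voters who are right carry more than half of the weight."""
    half = sum(weights) / 2
    total = 0.0
    for outcome in product((0, 1), repeat=len(probs)):      # 1 = voter is right
        prob = 1.0
        for p, right in zip(probs, outcome):
            prob *= p if right else (1 - p)
        if sum(w for w, right in zip(weights, outcome) if right) > half:
            total += prob
    return total

probs = [0.9, 0.8, 0.8, 0.6, 0.6]
print(weighted_majority_competence(probs, [1, 1, 1, 1, 1]))                # simple majority
print(weighted_majority_competence(probs, [0.4, 0.15, 0.15, 0.15, 0.15]))  # ad hoc weights
optimal = [log10(p / (1 - p)) for p in probs]                              # Theorem 3 weights
print(weighted_majority_competence(probs, optimal))
```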

So, there is an apparently plausible method of making decisions in a way that not only improves upon the competence of the average group member, but even upon that of the most competent member. Now, the natural question to ask is how one goes about applying this apparently useful result. The main restriction to its applicability in business and politics is, of course, the fact that very few relevant issues pertain to competence in the sense of knowing true answers to questions. Rather, the bulk of business and political decision making deals with values, goals and other desiderata. But even in those hypothetical situations where competence in the sense of the probability of being right is a reasonably meaningful notion, one faces a severe application problem, to wit, how to find out the competence values of individuals. A remarkable result of Feld provides one plausible way of proceeding (Grofman et al. 1983, p. 275).

Theorem 4

The optimal individual weights can be approximated by assigning each individual i the weight \(r_{i} - 0.5\) where \(r_{i}\) is the proportion of times that i has been in agreement with the majority decision in the past.

This theorem enables us to sidestep the issue of determining what is the right decision in any given situation. Instead we can determine the optimal weights by counting the relative number of times the individual has been in agreement with the majority. This theorem should, however, not be read as a solution to the philosophical problem of induction. What it states is that, assuming that the future decision settings do not essentially differ from those of the past, the agreement with the majority works well as a determinant of the optimal weight.
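As a minimal sketch of how this might be put to use, suppose we have a record of past decisions indicating, for each individual and each question, whether the individual sided with the eventual majority. The data below are purely hypothetical.

```python
# Approximate optimal weights from past agreement with the majority (Theorem 4):
# each individual i receives the weight r_i - 0.5, where r_i is i's agreement rate.
def feld_weights(history):
    return [sum(record) / len(record) - 0.5 for record in history]

# Hypothetical record of ten past decisions for three individuals;
# 1 = the individual voted with the eventual majority, 0 = against it.
history = [
    [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],   # agrees with the majority 80% of the time
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],   # 70%
    [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],   # 60%
]
print(feld_weights(history))          # approximately [0.3, 0.2, 0.1]
```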

The above theorem can be utilized in designing institutions which provide incentives for consensus. To wit, by assigning each decision maker a weight in accordance with the theorem, i.e. \(r_{i} - 0.5\), one gives larger weights to persons with greater conformity to majority decisions. If the individuals want to maximize their weight, then the way to proceed is to stick with the majority. Not a recipe for innovation, a critic could say.

9.4 Epistemic Paradoxes and Their Relevance

In the same way as social choice theory deals with the aggregation of individual preferences, we can study the rules used in aggregating judgments. This is the case, for example, in jury decision making or any situation involving arguments developed to justify conclusions. Similarly, in expert groups one often aggregates judgments, not opinions of the experts. The judgments may concern various states of affairs, e.g. whether a given occurrence has taken place, whether a certain assessment is reliable, whether a given applicant has sufficient skills for a given task, etc. So, it is not the values of the experts that count, but their judgments regarding facts. Furthermore, many expert views involve not only the statement regarding the facts, but also an argument relating those facts to each other so that – together with some logical statements – they form a sequence where some sentences are premises leading to other statements, viz. the conclusions. For example, in an economic policy advisory group, an expert might suggest that since the inflation rate, unemployment, foreign trade balance and immigration have reached a given level, certain economic policies ought to be resorted to by the government. The suggestion thus lists specific facts and reaches its policy recommendation or conclusion resting on the premises and some general principles reflecting the views of the expert about the causal relationships prevailing in the economy. So, when a group of experts is drafting a policy recommendation, it basically aggregates the judgments of its members regarding the facts and the general principles that hold in the economy. This differs from aggregating opinions, simpliciter.

The classic example in this literature is the doctrinal paradox introduced by Kornhauser (1992) (an early precursor is Vacca 1921). This paradox involves a three-member jury and a case where the issue is whether the defendant has breached a contract with another party. The legal doctrine has it that a breach of contract has occurred if and only if there is an act A such that the defendant is contractually obliged not to do A and, yet, the defendant did A. Otherwise, no breach has occurred.

For the jury decision three propositions are relevant:

  1. p: the defendant was contractually obliged not to do A

  2. q: defendant did A

  3. r: the defendant breached the contract

All judges adhere to the prevailing legal doctrine, i.e. r is true if and only if both p and q are true. Even though they agree on the doctrine, they may disagree on the truth values of the three propositions. Suppose that their truth value assignments are those presented in Table 9.3. Thus, for example, judge 2 sees that the defendant was, indeed, contractually obliged to refrain from doing A, but that he/she did not do A. Since judge 2 acts in accordance with the legal doctrine, his/her view is that the defendant was not in breach of the contract. Similarly, the other two jurors can be seen to adhere to the prevailing legal doctrine. When looking at the judgments of the majority of jurors it turns out, however, that the doctrine no longer holds: the majority deems propositions p and q true, but – in contrast to what the doctrine dictates – judges r to be false.

Table 9.3 Doctrinal paradox

It is easy to see the similarity of the doctrinal paradox with the Condorcet one: a principle characterizing each individual does not extend to the majority of those individuals. In the case of the doctrinal paradox the principle is the adherence to the legal doctrine, while in Condorcet’s paradox it is the completeness and transitivity of preferences.
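To make the aggregation step concrete, the following sketch derives each judge’s verdict on r from the doctrine and then aggregates proposition-wise by majority. Since Table 9.3 is not reproduced here, the individual truth value assignments are an assumption consistent with the description above (judge 2 accepts p but not q, and the majority accepts p and q while rejecting r).

```python
# Doctrinal paradox: each judge respects r <-> (p and q), yet the
# proposition-wise majority does not.
judges = {                       # (p, q) judgments; r is derived via the doctrine
    "judge 1": (True, True),
    "judge 2": (True, False),
    "judge 3": (False, True),
}

def majority(values):
    return sum(values) > len(values) / 2

p_votes = [p for p, q in judges.values()]
q_votes = [q for p, q in judges.values()]
r_votes = [p and q for p, q in judges.values()]   # each judge follows the doctrine

maj_p, maj_q, maj_r = majority(p_votes), majority(q_votes), majority(r_votes)
print(maj_p, maj_q, maj_r)                                             # True True False
print("doctrine holds for the majority:", maj_r == (maj_p and maj_q))  # False
```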

What is called legal doctrine above is basically a specification of admissible ways of combining propositions, i.e. a rule guiding allowable inferences. One might say that the doctrine here is a kind of constraint that any legitimate reasoning has to satisfy. Thus, we can generalize the setting of Table 9.3 to any situation where certain types of logical constraints are imposed on individual judgments and, yet, these constraints are not satisfied by the reasoning of the majority. This more general setting, in which issue-wise majority voting leads to an inconsistent collective outcome, is known as the discursive dilemma (Pettit 2001; List and Pettit 2002).

The dilemma basically undermines the possibility of making consistent arguments by aggregating proposition-wise judgments using the majority rule. This leaves open two possibilities for handling judgment aggregation in group choice: (i) to impose restrictions on the distributions of inputs, i.e. the individual judgments, or (ii) to accept either the premise-based or the conclusion-based majority as the decisive one. The former possibility could involve ruling out inputs that lead to inconsistent majority arguments, while the latter would essentially rule out paradoxes by assuming that the majority decision on premises or conclusions is paradox-free. Although the latter might seem impossible to accept, it is in fact common practice in preference aggregation settings where the successive elimination method is resorted to (e.g. in the U.S. Congress). This method conducts \(k - 1\) pairwise majority votes if the alternative set consists of k elements. In each vote, the losing alternative is eliminated and the winner is confronted with the next one until all alternatives have been present in at least one pairwise comparison. The winner of the final pair is the overall winner. This method is based on the incorrect assumption that the group preference relation formed through pairwise majority votes is transitive, e.g. that if z defeats the winner of the x versus y comparison, it also defeats the loser of that comparison. As Condorcet’s paradox shows, this is not guaranteed by the majority rule. Indeed, it may well be that x defeats y and z defeats x, and yet y defeats z. The successive elimination method resolves the Condorcet paradox by fiat. Hence, it is in general impossible to find out on the basis of pairwise voting records whether the successive elimination system results in a robust (i.e. Condorcet) winner or in one whose victory is merely due to the order of voting because the underlying majority preference relation is cyclic.
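The order dependence of successive elimination under a cyclic majority relation can be illustrated with a small sketch; the majority relation below is an illustrative construction corresponding to the cycle just mentioned (x defeats y, y defeats z, z defeats x).

```python
# Successive elimination over a cyclic majority relation: the overall winner
# depends entirely on the order in which the alternatives are taken up.
beats = {("x", "y"), ("y", "z"), ("z", "x")}   # pairwise majority results

def successive_elimination(agenda):
    """Pairwise majority votes along the agenda; the loser of each vote is dropped."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = winner if (winner, challenger) in beats else challenger
    return winner

print(successive_elimination(["x", "y", "z"]))   # z wins
print(successive_elimination(["y", "z", "x"]))   # x wins
print(successive_elimination(["z", "x", "y"]))   # y wins
```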

The relevance of doctrinal paradoxes or discursive dilemmas lies in settings where one is not simply aggregating opinions regarding the desirability of policies or candidates, but where the voters are expected to be able and willing to formulate or accept arguments in support of certain conclusions. It is, of course, possible and, indeed, likely that the voters think in terms of arguments also in those settings where they are not expected to present them. This possibility essentially widens the domain of relevance of these paradoxes. We simply do not know what kind of arguments underlie conclusion-based aggregations. An analogous observation can be made regarding the likelihood of the Condorcet paradox or related anomalies in preference aggregation. What, on the other hand, restricts the relevance of judgment aggregation paradoxes is the intuitive observation that the arguments underlying the choice of a policy alternative or candidate have a wide variety of factual statements built into them. For example, in political competitions one voter may regard economic self-interest as the primary consideration, another might have social justice considerations in mind, while a third voter could deem religious variables the most important ones. It is in fact not common to encounter settings where all voters would base their conclusions on the same propositions and their truth-value combinations. The same holds for expert bodies composed of representatives from different areas of knowledge, say, finance, customer relations, technical expertise, marketing. It is quite natural to expect that these experts build their arguments on different kinds of propositions.

9.5 Topics for Further Reflection

  1. Bovens and Rabinowicz (2006) discuss the following example: an item of technical equipment is to be purchased for a specific purpose. Consider three propositions:

     • p: the item meets the safety standards

     • q: the item is economically feasible

     • r: the item should be purchased

     Three persons are in charge of the purchasing decision. Each thinks that the item should be purchased if and only if both p and q are true. Construct a table similar to Table 9.3 that does not exhibit the doctrinal paradox.

  2. Consider a country that has accumulated a huge amount of foreign debt. It turns to a coalition of international actors for an economic aid package in the form of additional loans. The coalition consists of three equal-sized groups: A, B and C. Within each group there is unanimity that the country ought to be given the requested aid if and only if propositions p and q are true. Here p: the government of the country is able to execute economic policies that enable it to pay back – with interest – the borrowed funds within a 30-year period, and q: the government’s policies are acceptable enough to the population for the government to stay in power for an adequate period of time to launch the policies. Construct a table similar to Table 9.3 so that the doctrinal paradox occurs. Then construct another table where it doesn’t occur.

9.6 Suggestions for Reading

Very useful accounts of the epistemic paradoxes are Bovens and Rabinowicz (2006), Dietrich and List (2013) and List (2011). The earlier contribution by Kornhauser and Sager (1986) set the stage for later developments in this rapidly expanding field.

Answers to Selected Problems

  1. Suppose that two individuals think that both p and q are true and therefore, following the doctrine, that r is true as well. Since these two persons constitute a majority, their opinion coincides with the collective opinion. Hence, no paradox ensues.

  2. A situation where the doctrinal paradox appears (A believes that p is true, but q is not, B believes that both p and q are true and C believes that p is false, but q is true):

     Group      Prop. p   Prop. q   Prop. r
     A          True      False     False
     B          True      True      True
     C          False     True      False
     Majority   True      True      False

  3. A situation where the doctrinal paradox does not appear (group B changes its mind regarding the truth value of q with respect to the preceding table):

     Group      Prop. p   Prop. q   Prop. r
     A          True      False     False
     B          True      False     False
     C          False     True      False
     Majority   True      False     False