1 An Introduction to Power, Voting, and Voting Power: 30 Years After

Power is a fundamental concept in the social sciences. It is, however, a theoretical one, i.e., it cannot be directly observed. It is also dispositional: if a person or institution has power, it has an ability or propensity to bring about certain types of events or other outcomes. From a formal point of view, power can be represented as a unary predicate (“A has power”), a binary relation (“A has power over B”), or a ternary relation (“A has power over B with regard to X”). Nothing has changed in these fundamental relations since the publication of the volume Power, Voting and Voting Power (PVVP) in 1982. Since then, however, many articles and studies derived from the material presented in that volume have been published. Some of those publications directly refer to contributions that can be found in the 1982 collection of chapters. However, this is not the primary argument for publishing a second volume Power, Voting and Voting Power thirty years later. More convincing to us is that a great deal of new material has been developed during the last thirty years in the fields of PVVP. We think it is high time to reflect on what has been accomplished during these years and on the main issues of ongoing and future research. PVVP 2012 should be of help in answering these questions. Of course, the selection of material is highly subjective. We find the selected chapters very important contributions. Some are original material written for PVVP 2012. However, most of the chapters are more or less revised material published in the quarterly journal Homo Oeconomicus and thus accessible only to a small readership. We did not select articles that are available in leading and widely distributed journals of the social sciences, economics, game theory and mathematics. This, of course, gives an additional bias to our volume. However, we think that we can leave it to the reader to find the “easy to access” articles in the library—whether in paper or in electronic form.

This is not the only bias that characterizes PVVP 2012. The volume is the result of our ideas about what research work and which results are important (or interesting) and what will be important for the future. This selection, of course, has to do with our own work in this field. However, we will follow, in this introduction and in the selection of the contributions to this volume, the two paths that Anatol Rapoport outlined in his Foreword to the 1982 volume: game theory is the one, and social choice theory the other.

Given these two foci and our personal biases, the contributions to this volume reflect the main issues in the discussion of power, voting and voting power over the last thirty years.

2 Power and Preferences

There is an ongoing debate on whether power measures should take the preferences of the agents into account, and if so, to what degree. For instance, the Journal of Theoretical Politics (JTP) dedicated many pages of its volume 11 (1999) to this issue. Those authors who wanted to see preferences taken into consideration even declared that power indices are useless—at least when it comes to measuring power in the EU—while others argued that power indices are valuable instruments precisely because they do not refer to preferences, which might be unknown or irrelevant for the questions under scrutiny. The latter position was defended with reference to institutional design: future agents and their priorities are—or at least often should be—irrelevant and, in any case, unknown to present-day deliberations. When on March 25th, 1957, the Treaty of Rome was signed, creating the European Economic Community (EEC) of The Six, and the seats in the Council of Ministers were allocated to the participating countries, the signing partners could not foresee the political preferences of the governments that were to be represented in the coming years. (See Holler and Widgrén 1999a.)

In the course of the scholarly debate that took place, e.g., in JTP, a consensus seemed to emerge suggesting that political preferences are to be considered when power measures are used to forecast or to analyse specific outcomes or events defined by specific historical settings, just like other factors that affect the outcome. Thus, for example, election outcomes are sometimes interpreted as depending on whether it rained or not. However, despite the apparent consensus, the discussion about power and preferences has been popping up time and again. Napel and Widgrén (2005) argue for the “possibility of a preference-based power index,” this being the title of their article, while Braham and Holler’s (2005a, b) retort is the “impossibility of a preference-based power index.”

As Napel and Widgrén use all possible single-peaked preferences, one could argue that they use the assumed preferences as an analytical device to measure power, and not as an ingredient of power. In fact, it seems that they apply the preference profiles in order to defend their choice of the Shapley-Shubik index, which is based on permutations of agents instead of unordered sets of agents, i.e., coalitions. However, even Shapley and Shubik (1954) doubted the plausibility of applying the Shapley value to weighted voting. Undoubtedly, information about preferences, whether fully hypothetical or with some empirical substance, can be useful for a better understanding of power measures. Pouring water into a bucket will show us whether the bucket has a hole or not. However, water is not part of the bucket, and in many applications we may use the bucket without filling it with water.

This volume opens with two contributions, the first authored by Ian Carter (2013) and the second by Matthew Braham (2013), that discuss the nature of power. Conceptual issues are also discussed by Laruelle and Valenciano (2013). For many of the contributions that follow, Max Weber’s definition of power is a good starting point. Unfortunately, there are somewhat incompatible alternative translations of Max Weber’s concept of power. Parsons translated “Macht bedeutet die Chance, …” as “the probability that one actor within a social relationship will be in a position to carry out his own will despite resistance” (Weber 1947[1922]: 152, italics added). This is the translation of Weber’s definition of power on page 38 of Wirtschaft und Gesellschaft, published posthumously (Weber 2005[1922]: 38). In the Essays from Max Weber, edited by Gerth and Mills, we read: “In general, we understand by ‘power’ the chance of a man or of a number of men to realize their own will in a communal action even against the resistance of others who are participating in the action” (Weber 1948[1924]: 180). This is the translation of Weber’s definition given on page 678 of Wirtschaft und Gesellschaft (Weber 2005[1922]: 678). There are also differences between the two definitions in their original German versions. For instance, the second definition extends the definition of power to “a number of men” and “communal action.” Another obvious difference lies in the translation of the German word “die Chance” (which, of course, the Germans borrowed from French). Parsons used “probability” as its English translation, while in the edition of Gerth and Mills we read “chance.” Quite similarly to English, in German “die Chance” expresses either a possibility or is a synonym for probability. It depends on the context whether the former or the latter interpretation applies. This also holds for Weber’s definition of power and the use of “die Chance” in it.

There is a widely shared notion of probability which relates this concept to a random mechanism as, for example, in the expression “chance setup.” The outcome of the setup or mechanism is determined by “nature.” Chance presupposes a lack of control due, e.g., to decisions or actions of others or to unpredictable natural events. However, if somebody asks “what is the chance of seeing you tomorrow,” the answer “with probability 1/3” does not make sense if the outcome depends solely on your choice. It would make perfect sense, though, if you cannot leave the house when it rains and the probability of rain is 2/3.

Experts on Weber claim that his use of “die Chance” concurs with possibility or potential. On the other hand, the fact that Parsons used “probability” for the translation of “die Chance” cannot be neglected. Talcott Parsons received a doctorate from the University of Heidelberg in 1927; the title of his doctoral dissertation was “‘Capitalism’ in recent German literature: Sombart and Weber.” In this volume, we find both interpretations. The idea of power as a potential was emphasized in Holler and Widgrén (1999b), where the value of the characteristic function in a coalitional game is interpreted as power. (See also Napel et al. 2013.)

3 The Right Index

The ambiguity in the interpretation of power carries over to the question of the “right index.” Over many pages and years, the question of the right index focused on a comparison of, or should we say a competition between, the Shapley-Shubik index and the Banzhaf index—the latter also labelled the Penrose-Banzhaf or Banzhaf-Coleman index—ignoring other candidates like the measures suggested by Johnston (1978), Deegan and Packel (1979) and Holler (1982c). The Banzhaf faction was spearheaded by Dan Felsenthal and Moshé Machover, while the Shapley-Shubik index had, e.g., Stefan Napel and the late Mika Widgrén as eminent supporters, despite the fact that the latter two themselves introduced a power measure based on the “inferior player concept” (Napel and Widgrén 2001, 2013). On the other hand, Laruelle and Valenciano gave a new axiomatization of the two indices with axioms that “are remarkably close,” such that “both indices appear on the same footing when they are interpreted as measures of power in collective decision-making procedures” (Laruelle and Valenciano 2001: 103). However, as Aumann (1977: 471) observes: “…axiomatics underscores the fact that a ‘perfect’ solution concept is an unattainable goal, a fata morgana; there is something ‘wrong’, some quirk with every one.” Still, axiomatizations “serve a number of useful purposes. First, like any other alternative characterization, they shed additional light on a concept and enable us to ‘understand’ it better. Second, they underscore and clarify important similarities between concepts, as well as differences between them.”

Felsenthal et al. (1998) and Felsenthal and Machover (1998) suggested a compromise, but also a differentiation, through the claim that the Banzhaf index describes I-Power, an agent’s potential influence over the outcome, whereas the Shapley-Shubik index represents P-Power, an agent’s expected share in a fixed prize. However, Turnovec (2004) demonstrated that the distinction does not hold: both measures can be interpreted as expressing I-Power or, alternatively, P-Power. Indeed, these measures can be modeled as values of cooperative games and as probabilities of being ‘decisive’ without any reference to game theory at all. The basic point is that ‘pivots’ (Shapley-Shubik index) and ‘swings’ (Banzhaf index) can be taken as special cases of a more general concept of ‘decisiveness’ (see Turnovec et al. 2008; see also Laruelle and Valenciano 2013 and König and Bräuninger 2013).

Still, the distinction between I-Power and P-Power contributes to the discussion of power measures and often serves as a valuable instrument to structure our intuition. Yet, in the light of Turnovec’s results, it is perhaps not a major flaw for the Public Goods Index (PGI), introduced by Holler (1982c, 1984), that Felsenthal and Machover (1998) classify it among the P-Power measures. From Paul Samuelson we learn that there is nothing to share in the case of pure public goods. It is difficult to see why the PGI should not qualify as an I-Power measure just as the Banzhaf index does. Loosely speaking, the difference between the PGI and the normalized Banzhaf index boils down to those winning coalitions that are not minimal. Holler (1982c, 1998) argues that these coalitions should not be considered because they imply a potential to freeride if the decisions concern public goods—as is often the case in policy making. This does not mean that surplus coalitions do not form, but they should not be considered when measuring power.

There is another, more critical remark in Felsenthal and Machover that relates to the PGI. They argue that any a priori measure of power that violates local monotonicity is ‘pathological’ and should be disqualified from serving as a valid yardstick for measuring power (Felsenthal and Machover 1998: 221ff)—and they correctly point out that the PGI and the Deegan-Packel index violate this property. Holler and Napel (2004a, b) hypothesize that the PGI exhibits nonmonotonicity (thus confirming that the measure does not satisfy local monotonicity) only if the game is not decisive—as the weighted voting game (51; 35, 20, 15, 15, 15) with PGI (4/15, 2/15, 3/15, 3/15, 3/15) demonstrates—or is improper, and that nonmonotonicity therefore indicates that perhaps we should worry about the design of the decision situation. The more popular power measures, i.e., the Shapley-Shubik index and the Banzhaf index, satisfy local monotonicity and thus do not exhibit any peculiarities if the game is not decisive or is improper. To what extent the PGI can serve as an indicator, revealing certain peculiarities of a game, has been discussed in Holler and Nurmi (2012b).
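To make the example reproducible, here is a minimal sketch (in Python; our own illustration, not taken from any of the cited chapters) that enumerates minimal winning coalitions and computes the PGI for the game above:

```python
from itertools import combinations

def minimal_winning_coalitions(quota, weights):
    """All coalitions that meet the quota but fail it if any single member leaves."""
    n = len(weights)
    mwcs = []
    for size in range(1, n + 1):
        for coal in combinations(range(n), size):
            total = sum(weights[i] for i in coal)
            if total >= quota and all(total - weights[i] < quota for i in coal):
                mwcs.append(coal)
    return mwcs

def public_goods_index(quota, weights):
    """PGI: each player's share of memberships in minimal winning coalitions."""
    mwcs = minimal_winning_coalitions(quota, weights)
    counts = [sum(1 for c in mwcs if i in c) for i in range(len(weights))]
    return [c / sum(counts) for c in counts]

# The game (51; 35, 20, 15, 15, 15) discussed above:
print(public_goods_index(51, [35, 20, 15, 15, 15]))
# -> [4/15, 2/15, 3/15, 3/15, 3/15]: the player with weight 20 gets less
#    than the players with weight 15, so local monotonicity is violated.
```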

Interestingly, the Shapley-Shubik and Banzhaf indices also violate local monotonicity if we consider a priori unions, so that the equal probability of permutations and coalitions, respectively, no longer applies. The concept of a priori unions or pre-coalitions is rather crude because it implies that certain coalitions will not form at all, i.e., they have a zero probability of forming. Note that since the PGI considers minimum winning coalitions (MWCs) only, this is formally equivalent to putting a zero weight on coalitions that contain surplus players. Is this the (“technical”) reason why the PGI may show nonmonotonicity?

Instead of accepting the violation of monotonicity, we may ask under which circumstances or decision situations the PGI guarantees monotonic results—this may help to design adequate voting bodies. In Holler et al. (2001), the authors analyze alternative constraints on the number of players and other properties of the decision situations. For example, it is obvious that local monotonicity will not be violated by any of the known power measures, including the PGI, if there are n voters and n−2 of these are dummies. It is, however, less obvious that local monotonicity is also satisfied for the PGI if one constrains the set of games so that there are only n−4 dummies. A hypothesis that needs further research is that the PGI does not show nonmonotonicity if the voting game is decisive and proper and the number of decision makers is smaller than 6 (see the sampling sketch below).
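Such a hypothesis can at least be probed numerically. The following rough sampling sketch (Python; a heuristic illustration, not a proof) searches small random weighted games for counterexamples, assuming the usual definitions of decisive (of each coalition and its complement, exactly one wins) and proper (no two disjoint winning coalitions):

```python
import random
from itertools import combinations

def winning(coal, quota, w):
    return sum(w[i] for i in coal) >= quota

def pgi(quota, w):
    n = len(w)
    mwcs = [c for k in range(1, n + 1) for c in combinations(range(n), k)
            if winning(c, quota, w)
            and all(not winning(tuple(j for j in c if j != i), quota, w) for i in c)]
    counts = [sum(i in c for c in mwcs) for i in range(n)]
    return [x / sum(counts) for x in counts]

def decisive_and_proper(quota, w):
    """For monotone games it suffices to compare each coalition with its
    complement: if both win, the game is improper; if both lose, it is
    not decisive."""
    full = set(range(len(w)))
    for k in range(len(w) + 1):
        for c in combinations(range(len(w)), k):
            if winning(c, quota, w) == winning(full - set(c), quota, w):
                return False
    return True

def locally_monotonic(p, w):
    return all(not (w[i] > w[j] and p[i] < p[j])
               for i in range(len(w)) for j in range(len(w)))

random.seed(0)
for _ in range(20000):
    n = random.randint(3, 5)
    w = sorted((random.randint(1, 9) for _ in range(n)), reverse=True)
    quota = random.randint(sum(w) // 2 + 1, sum(w))
    if decisive_and_proper(quota, w) and not locally_monotonic(pgi(quota, w), w):
        print("counterexample:", quota, w)  # any hit would refute the hypothesis
```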

Which index is the right one? Many contributions to this volume shed light on this question, e.g. Felsenthal and Machover (2013a, b); Laruelle and Valenciano (2013); König and Bräuninger (2013); Alonso-Meijide et al. (2013a, b); Amer and Carreras (2013); Widgrén and Napel (2013); Montero (2013) and Freixas and Pons (2013). A possible answer is due to Aumann (1977: 464): “None of them; they are all indicators, not predictors. Different solution concepts are like different indicators of an economy; different methods for calculating a price index; different maps (road, topo, political, geologic, etc., not to speak of scale, projection, etc.); different stock indices (Dow Jones)… They depict or illuminate the situation from different angles; each one stresses certain aspects at the expense of others.” We subscribe to this perspective. “Different solution concepts can…be thought of as results of choosing not only which properties one likes, but also which examples one wishes to avoid” (Aumann 1977: 471).

4 Cooperative Games, Bargaining Models and Optimal Strategies

Power indices can be distinguished by their underlying assumptions on coalition formation as well as by the weights they give to these coalitions. The weights may reflect the probabilities that particular coalitions form. Inasmuch as these weights are exogenously given by the rules implicit in the power measure, we are in the realm of cooperative game theory. Recently, a series of chapters has been published taking into account a priori unions, building on the pioneering articles of Owen (1977, 1982). See the contributions of Alonso-Meijide et al. (2013a, b).

Once we ask the question why coalition A forms and coalition B does not, we enter the domain of noncooperative game theory. A lot of work has been done to derive the standard power indices from bargaining games or to interpret solution concepts that are based on notions of bargaining as power indices—also in order to understand the problem of implementing a given (possibly “fair”) power distribution. Maria Montero’s (2013) contribution to this volume, proposing the nucleolus as a power index, is an example of the latter. Another chapter of hers, dealing with the “noncooperative foundations of the nucleolus in majority games” (Montero 2006), represents the same approach, as does Yan’s (2002) modelling of a “noncooperative selection of the core.” The contributions by Andreas Nohn (2013) as well as by Francesc Carreras and Guillermo Owen (2013) are examples that fall into the first category. The search for noncooperative foundations of bargaining power and their relationship to the Shapley-Shubik index in Laruelle and Valenciano (2008), as well as the bidding models in Pérez-Castrillo and Wettstein (2001) and, with some reservation, in Hart and Mas-Colell (1996), supporting the Shapley value, also fall into this category.

The main result in Nohn (2013) is that veto players either hold all of the overall power of 1 or hold no power at all. This somehow reflects the preventive power measure (“power to block”) suggested in Coleman (1971). However, power indices also deal with the power to initiate and therefore will, in general, not allocate all the power to veto players. The difference is that in Coleman, as in any other classical power measure, the focus is on winning coalitions, i.e., sets of agents that have the means to accomplish something. To be a potential member of such a coalition represents the chance that the corresponding agent “within a social relationship will be in a position to carry out his own will despite resistance,” borrowing from Weber’s definition given above, and thus power. In bargaining models the forming of coalitions is a possible, but not a necessary, result. Of course, by definition, a veto player has the potential to block any winning coalition, but other arrangements may also lead to a breakdown of bargaining and to a zero outcome, which represents zero power. Veto power is important in bargaining games because the standard requirement for agreement is unanimity, but in general not all players are active all the time.

The n-person bargaining models in the tradition of Rubinstein or Baron and Ferejohn do not consider binding coalitions as the point of departure, but the power indices do so. No wonder that bargaining models and power measures are difficult to reconcile. Without being more explicit about coalition formation, the bargaining models are not likely to be successful in giving a noncooperative underpinning to the power indices.

Similar problems are relevant for those approaches that do not apply power measures to express a priori (voting) power but model the interaction of the agents as a game and look for possible equilibria. They substitute the potential of a coalition with a game form and preferences that allow the specification of a Nash equilibrium (or a refinement of it) that describes the allocation of payoffs and thereby specifies the power of the players in this game. The analysis of EU codecision-making in Napel et al. (2013) is an example of this approach. Of course, the results depend on the assumed payoff functions. But whether we can generalize the outcome also depends on the structure of decision-making and on the information that the voters have. The assumption that the policy space is one where the voters have single-peaked preferences, face only binary agendas and are endowed with complete information is convenient but hardly descriptive of real-world voting bodies. A rather extensive literature shows that, for a given preference profile, the voting outcome may strictly depend on whether we apply plurality voting, the Borda count, amendment voting, approval voting or some other voting procedure. Moreover, a slight perturbation of the preferences may turn the winning platform, and thus the winning coalition, into its opposite. (See, e.g., Holler and Nurmi 2012a, b.) It has been said that power index analysis hardly ever deals with more complex voting rules and the information available to the agents. But at least it does not suffer from vulnerability to perturbations of preferences, as long as the working of the rules does not depend on particular properties that the preferences have to satisfy for a voting outcome to exist at all. The contributions on the aggregation of preferences (Part VI of this volume) clarify some of these problems. We will come back to this issue below.

5 Fair Representation and Mechanism Design

Despite—or perhaps because of—the multitude of indices, on the one hand, and the implementation problems that we just outlined, on the other, the issue of fair representation has been much discussed during the last three decades. One reason is the emergence of and important advances in the field of theoretical mechanism design (see also Saari (2013) and Vartiainen (2013)). Another is the ongoing discussion of adequate institutions for international organizations, such as the European Union (see König and Bräuninger (2013) as well as the contributions in Part V of this volume), the European Central Bank, the IMF (see Leech and Leech (2013)) and the World Bank, and various arrangements (frameworks like the UNFCCC) that deal with climate change and environmental policy. (See, e.g., Holler and Wegner (2011) for the latter.) A parallel discussion can be found in the business world: the issue of an adequate representation of the stakeholders on the various boards of a firm which, in the case of conflict, make use of voting. [See, e.g., Leech (2013); Gambarelli and Owen (1994, 2002).]

However, the discussion is most vigorous in the political arena. In modern democracies, fair representation is, at least, a two-stage problem that relates votes to seats and thus the vote distribution to the power distribution in the representative voting body. One of the central issues addressed has been whether the influence over the outcomes (e.g., legislation) can be distributed precisely according to the resources (e.g., voting weights) when the rules of decision-making are taken into account. In proportional representation systems this issue has been dealt with by aiming at a reasonably close resemblance between the distribution of support for parties and the distribution of the party seats in the legislature. Upon closer inspection, however, the aim of proportionality turns out to be both ambiguous and vague. It is ambiguous in the sense that proportionality may refer to different things: an outcome that is proportional in one sense may not be proportional in another. The aim of proportionality is vague in the sense that—given a precise interpretation of the concept—the outcomes may exhibit different degrees of proportionality. Thus, for example, Jefferson’s (d’Hondt’s) method of proportional representation tends to be biased towards larger parties when compared with Webster’s (Sainte-Laguë’s).
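The bias is easy to see in a small sketch (Python; the vote totals are purely illustrative). Both are divisor methods: Jefferson/d’Hondt uses divisors 1, 2, 3, …, Webster/Sainte-Laguë uses 1, 3, 5, …:

```python
def divisor_apportionment(votes, seats, divisor):
    """Assign seats one at a time to the party with the largest quotient."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / divisor(alloc[p]))
        alloc[best] += 1
    return alloc

dhondt = lambda s: s + 1            # divisors 1, 2, 3, ...
sainte_lague = lambda s: 2 * s + 1  # divisors 1, 3, 5, ...

votes = {"A": 750, "B": 310, "C": 160}  # illustrative vote totals
print(divisor_apportionment(votes, 10, dhondt))        # {'A': 7, 'B': 2, 'C': 1}
print(divisor_apportionment(votes, 10, sainte_lague))  # {'A': 6, 'B': 3, 'C': 1}
# A's vote share is about 61.5 %; Jefferson/d'Hondt rounds its seat share
# up to 70 %, while Webster/Sainte-Laguë keeps it at 60 %.
```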

The ambiguity of proportionality, in turn, can be illustrated by an example that refers to the preference profile in Table 1. Suppose that two candidates out of four (A, B, C and D) are to be elected. If the preferences given in the table are those reported by the voters, the plurality outcome is {A, B}, whereas proportionality viewed from the perspective of the Borda count yields {C, D}; i.e., depending on the interpretation of proportionality we may get mutually exclusive choice sets. As proportionality is by and large identified with fair allocation, this result is quite challenging.

Table 1 Preference profile
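The entries of Table 1 are not reproduced here; the following sketch (Python) therefore uses an illustrative profile of our own that has the stated property—plurality elects {A, B} while the Borda count elects {C, D}:

```python
from collections import Counter

# Illustrative profile (not the original Table 1): each tuple is a
# preference order, best candidate first, with its number of voters.
profile = [
    (("A", "C", "D", "B"), 3),
    (("B", "D", "C", "A"), 3),
    (("C", "D", "B", "A"), 2),
]

# Plurality scores: count first places only.
plurality = Counter()
for order, k in profile:
    plurality[order[0]] += k

# Borda scores: m-1 points for first place, m-2 for second, ..., 0 for last.
borda = Counter()
for order, k in profile:
    m = len(order)
    for rank, cand in enumerate(order):
        borda[cand] += k * (m - 1 - rank)

print(plurality.most_common(2))  # [('A', 3), ('B', 3)] -> {A, B} elected
print(borda.most_common(2))      # [('C', 15), ('D', 13)] -> {C, D} elected
```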

Many contributions in The Logic of Multiparty Systems, edited by Holler (1987a), analyze the assignment of votes to seats in the case of two or more “criteria of proportionality.” In the present volume, Gambarelli and Palestini (2013) discuss a multi-district apportionment model that relies on a minimax method which, in the case of “unavoidable distortions,” minimizes the “negative effects.” However, voting power is not dealt with in this model.

With the European Union taking its first steps of enlargement towards Central and Eastern Europe, Laruelle and Widgrén (1998) asked, to paraphrase the title of their chapter, whether the allocation of voting power among EU states is fair. To discuss this question they make use of the Square Root Rule and the Banzhaf index. The relationship between the two will be further discussed in the next section. What is important here is that applying the results of this approach implies a “re-weighting of votes and voting power in the EU,” to paraphrase the title of Sutter (2000), which was written as a critical response to Laruelle and Widgrén.

The re-shuffling of seats has been widely discussed in the EU context and, as we will see below, quite a few applications of analytic results have been presented (see Johnston (2013); Kirsch (2013); Bertini et al. (2013); Felsenthal and Machover (2013b) in this volume). History shows that such a policy is accompanied by frustration. Moreover, the re-shuffling method does not always allow perfect proportionality of votes and power. Let us assume a vote distribution w° = (40, 30, 30). Given simple majority voting, there is no re-shuffling of seats such that the corresponding power measure π° is identical with w°, irrespective of whether we apply the Shapley-Shubik index, the Banzhaf index or the PGI. In the introduction to Power, Voting and Voting Power, Holler (1982b) gives this example and suggests the randomized decision rule (3/5, 2/5), which prescribes a probability of 3/5 for the simple majority rule and a probability of 2/5 for a qualified majority of 2/3 of the votes. Here, in order to keep the example simple, the PGI is applied, as it is very easy to list the complete set of minimum winning coalitions for this example. As a result we get the power distributions (1/3, 1/3, 1/3) for the simple majority rule and (1/2, 1/4, 1/4) for the 2/3 quota. Taking the randomization (3/5, 2/5) into account, an expected power of π° = (40, 30, 30) follows—in percentages, of course.
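A minimal sketch of this computation (Python; the quota 67 implements the 2/3 majority of the 100 votes):

```python
from itertools import combinations

def pgi(quota, w):
    """Public Goods Index via minimal winning coalitions."""
    n = len(w)
    win = lambda c: sum(w[i] for i in c) >= quota
    mwcs = [c for k in range(1, n + 1) for c in combinations(range(n), k)
            if win(c) and all(not win(tuple(j for j in c if j != i)) for i in c)]
    counts = [sum(i in c for c in mwcs) for i in range(n)]
    return [x / sum(counts) for x in counts]

w = [40, 30, 30]
simple = pgi(51, w)     # simple majority -> [1/3, 1/3, 1/3]
qualified = pgi(67, w)  # 2/3 quota (more than 66.67 of 100 votes) -> [1/2, 1/4, 1/4]
expected = [0.6 * a + 0.4 * b for a, b in zip(simple, qualified)]
print(expected)         # -> [0.4, 0.3, 0.3], i.e. proportional to (40, 30, 30)
```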

The randomized decision rule approach was further elaborated in Berg and Holler (1986) and in Holler (1985, 1987b) and generalized in Turnovec (2013).

6 The Case of the EU

Unsurprisingly, the issue of fair allocation of seats has been much debated in the context of the enlargement of the EU. In fact, the analysis of the EU became the testing ground and a source of inspiration for almost all the questions discussed so far. We therefore think it appropriate to dedicate more than one page of this introduction to this subject.

6.1 The European Parliament

The enlargement of the EU entitles the new member states to voting rights in the European Parliament (EP) and the Council of Ministers (Council). For the EP, the standard procedure takes into account the size of the population and aims at guaranteeing the representation of the major political parties of each country. Bertini et al. (2013) propose to restructure the distribution of the EP seats according to not only population sizes but also economic performance as measured by GDP. They suggest a formula that is based on the Banzhaf index and thus incorporates the potential to form a winning coalition, i.e., a priori voting power. Applying this to the Union of the 27, they show that, with the exception of Italy, all countries reach their maximum power value if they are represented either in accordance with their population or, alternatively, with their GDP. The authors do not give a definitive method for allocating seats. Their intention is to build up scenarios to understand which EU country will benefit if we take into account only GDP, only population, or a linear combination of the two. Taking into account GDP only, the analysis shows that Germany should have 24.35 % of the seats, France 16.38 %, Italy 13.42 %, and so on. This percentage for Italy decreases if a higher weight is given to population; it falls to 12.00 % if only population is taken into consideration. The situation for Poland is quite the opposite: it would receive 1.80 % of the seats if the apportionment were based on GDP, whereas based on population alone its share would be 8.04 %.

However, seat shares are notoriously a poor proxy for a priori voting power. Applying the Banzhaf index, Bertini et al. (2013) show that the maximum power for Italy is 12.09 %. This value is reached not with the maximum number of seats (13.42 %), but through a linear combination S = 0.8P + 0.2G, where P and G represent “population” and “GDP,” respectively. This linear combination should thus be Italy’s preferred method for assigning seats among EU member countries, even though under it Italy would have only 12.28 % of the seats; it is for the corresponding voting game that Italy’s Banzhaf index reaches its maximum. (N.B.: all other EU member states can be expected to prefer a different apportionment rule than Italy.)

Here the nonmonotonicity of power is due to the multi-dimensionality of the reference space for the seat apportionment. Individual voters also face the multi-dimensionality of the EP, but in general they are not informed about individual decisions of the EP and the decisions of their representatives. Moreover, elections to the EP are often used as by-elections sanctioning the performance of the political parties at the national level.

6.2 The Council of Ministers

The recent history of the shaping of the Council is highlighted by the Nice Treaty of 2001 and the Brussels agreement of 2004. The latter was designed as part of the Treaty establishing a Constitution for Europe. The discussion was about the proposed seat distributions, on the one hand, and the decision rules, on the other. In accordance with the Treaty of Nice, each EU member state is assigned a voting weight which to some degree reflects its population. With the sum of the weights of all 27 member states being 345, the Council adopts a piece of legislation if the following three conditions are satisfied: (a) the sum of the weights of the member states voting in favor is at least 255 (which corresponds approximately to a quota of 73.9 %); (b) a simple majority of member states (i.e., at least 14) vote in favor; (c) the member states voting in favor represent at least 62 % of the overall population of the European Union.

The distribution of weights shows, to pick out some prominent features, an equal allocation of 29 votes to each of the four larger EU member states Germany, France, the UK, and Italy, and 4 votes for each of the member states at the opposite end of the scale: Latvia, Slovenia, Estonia, Cyprus and Luxembourg. Malta, with a weight of 3 and a population of about 400,000, is at the bottom of the scale. Note that Germany, with a population of about 82.5 million, and Italy, with a population of 57.7 million, have identical voting weights. The voting weights are monotonic in population size, but obviously this monotonicity is “very” weak.

Condition (c) was meant to correct for imbalances in the ratio of population and seat shares. However, Felsenthal and Machover (2001) demonstrate that the probability of forming a coalition which meets condition (a) but fails to meet one of the other two is extremely low. Therefore, the “triple majority rule” implied by the Nice Treaty boils down to a single rule.

Given the shortcomings of the voting rule of the Treaty of Nice, a revision did not come as a surprise. According to the Brussels agreement of 2004, the Constitutional Treaty, the Council takes its decisions if two criteria are simultaneously satisfied: (a) at least 55 % of the EU member states vote in favor; and (b) these member states comprise at least 65 % of the overall population of the EU.

A major defect of the Nice voting rule seems to be the high probability that no decision will be taken and the status quo will prevail, i.e., the decision-making efficiency is low when measured by the Coleman power of a collectivity to act. This measure, the so-called passage probability, represents the probability that the Council would approve a randomly selected issue, where random means “that no EU member knows its stance in advance and each member is equally likely to vote for or against it” (Baldwin and Widgrén 2004: 45). It is specified by the proportion of winning coalitions, assuming that all coalitions are equally likely. For the Treaty of Nice rule this measure is only 2.1 %, while for the Constitutional Treaty it is 12.9 %. However, Baldwin and Widgrén (2004) demonstrate that, with no substantial change in the voting power of the member states, the Treaty of Nice system can be revised so that its low decision-making efficiency increases significantly. Thus, the difference in effectiveness does not necessarily speak for the Constitutional Treaty rule. But perhaps fairness does.
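A sketch of how such a passage probability can be computed (Python; the six-member “union” and its populations are made up for illustration—for the EU-27 one would have to enumerate 2^27 coalitions or sample them):

```python
from itertools import combinations

def passage_probability(is_winning, n):
    """Coleman's power of a collectivity to act: the share of all 2^n
    vote configurations whose 'yes' camp forms a winning coalition."""
    wins = sum(1 for k in range(n + 1) for c in combinations(range(n), k)
               if is_winning(set(c)))
    return wins / 2 ** n

# A made-up six-member union with illustrative populations, under a
# double-majority rule in the spirit of the Constitutional Treaty:
pop = [82, 60, 58, 40, 16, 10]

def double_majority(coal):
    enough_states = len(coal) >= 0.55 * len(pop)
    enough_people = sum(pop[i] for i in coal) >= 0.65 * sum(pop)
    return enough_states and enough_people

print(passage_probability(double_majority, len(pop)))  # 15/64, roughly 0.23
```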

Condition (b) of the Constitutional Treaty implies that the voting weights applied are directly proportional to the populations of the individual member states. At first glance this looks like an acceptable rule, representing the “one man, one vote” principle. However, it caused an outcry in those countries that stood to suffer from the redistribution of a priori voting power implied by the substitution of the “triple majority rule” of the Treaty of Nice by the “double majority rule” of the Constitutional Treaty—also with reference to a violation of the “one man, one vote” principle. For instance, Słomczyński and Życzkowski (2007a, b; see also Życzkowski and Słomczyński 2013 in this volume) point out that the larger and the smaller countries will gain power should the double majority rule of the Constitutional Treaty prevail, while the medium-sized countries, especially Poland and Spain, will be the losers in comparison to the voting power implications of the Treaty of Nice. (But obviously the Council’s voting system of the Treaty of Nice was considered defective.)

Both the Treaty of Nice and the Constitutional Treaty imply voting rules that are based on a compromise between the two principles of equality of member states and equality of citizens. The double majority rule emphasizes these principles. Large states gain from the direct link to population, while small countries derive disproportionate power from the increase in the number of states needed to support a proposal. The combined effect reduces the a priori voting power of the medium-sized countries. More specifically, Germany would gain by far the most voting power under the Constitutional Treaty rule, giving it 37 % more clout than the UK, while both countries have equal voting power in accordance with rule (a) of the Treaty of Nice. Moreover, the Constitutional Treaty rule would make France the junior partner in the traditional Franco-German alliance, which may lead to severe tensions in this relationship.

Obviously, there are substantial differences between the two schemes discussed, and their application to EU decision-making might have substantial and unwarranted consequences. Moreover, there are conflicts of interest made obvious by the analysis of voting power. In order to lessen these conflicts, Słomczyński and Życzkowski (2007a, b, 2013) propose an allocation of seats and power that they call the “Jagiellonian compromise,” named after their home university in Krakow. The core of this compromise is the square root rule, suggested by Penrose (1946). This rule is meant to guarantee that each citizen of each member state has the same power to influence EU decision-making. Applied to the two-tier voting problem of the Council (i.e., voting in the member states at the lower level and in the Council at the upper level), it implies choosing weights that are proportional to the square root of the population. What remains to be done is to find a quota (i.e., decision rule) such that the voting power of each member state equals its voting weight. But, as already noted, for smaller voting bodies this generally cannot be achieved when applying one quota only. However, the EU has a sufficiently large number of members so that this equality can be duly approximated. Słomczyński and Życzkowski (2007b) give an “optimal quota” of 61.6 % for the EU of 27 member states. Interestingly, the optimal quota decreases with the size of the voting body.

A further expansion of EU membership (e.g., the admission of Turkey) does not constitute a challenge to the square root rule. The adjusted seat distribution will take care of (the square root of) the additional population share, by redistributing seats or by adding additional seats to the Council, and the quota will be revised so that the a priori power is as close as possible to the seat distribution. This is why Słomczyński and Życzkowski (2007a, b) suggest not fixing the quota in a new constitutional contract but only prescribing a procedure which assures that (a) the voting weights attributed to each member state are proportional to the square root of its population; and (b) a decision is taken if the sum of the weights of the members that vote yes exceeds the quota \( q = \frac{1}{2} + \frac{1}{\sqrt{\pi M}} \), where M represents the number of member states.
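A quick check of this asymptotic formula (Python): for M = 27 it gives roughly 61 %, close to the 61.6 % that Słomczyński and Życzkowski compute from the actual population figures:

```python
from math import pi, sqrt

def optimal_quota(M):
    """Asymptotic optimal quota for square-root voting weights."""
    return 0.5 + 1 / sqrt(pi * M)

print(optimal_quota(27))  # about 0.609, i.e. roughly 61 % for a union of 27
```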

The choice of the optimal quota guarantees that the decision-making efficiency of the square root system in the Council is always larger than 15.9 %. This is larger than the value calculated for the Constitutional Treaty, and far more than promised by the Treaty of Nice rule. Słomczyński and Życzkowski (2007b) point out that the efficiency of the square root system does not decrease with an increasing number of member states, whereas the efficiency of the double majority rule does.

6.3 Codecision-Making

There is still a puzzle to solve: Why does the allocation of the budget follow the national voting power distribution in the Council, as demonstrated by Kauppi and Widgrén (2004, 2007), when the annual spending plans are negotiated between the EP and the Council on the basis of a proposal by the Commission? The EP is organized along ideology-based party groups, and members of the EP are said not to follow narrowly defined national interests. Is the Council the stronger institution, although both institutions are meant to have equal influence on the budget?

Napel and Widgrén (2006; see also Napel et al. 2013) analyze the power relations of the Council and the EP in EU legislation under the codecision procedure as a noncooperative game, i.e., both institutions are assumed to act strategically. Their results are that (a) the procedure favors the status quo and (b) the Council has a stronger a priori influence on the outcome than the EP. Both results are due to the qualified majority rule of the Council (whereas the EP applies only simple majority voting). Thus the low decision-making efficiency of the Council, discussed above, carries over to the codecision procedure.

At some stage of the sequential game that the Council and the EP play in the model of Napel and Widgrén, Conciliation Committees enter the arena. Such a committee is composed of representatives of the EU member states—at the time of the study these numbered 25—representing the Council, and a delegation of EP members of the same size. It is interesting to note that here the Union of States principle reflected in rule (b) of the Treaty of Nice determines the representation of the Council. This is generally not taken into consideration when the a priori voting power distribution in the Council is analyzed as a weighted voting game. On the other hand, Napel and Widgrén have, in addition to making use of stylized procedural rules that determine the strategies of the players, made some simplifying assumptions about the preferences of the players, i.e., the Council, the EP and the Conciliation Committees, to obtain a fully specified game model. The individual members of the Council and the EP, also when they are members of a Conciliation Committee, are assumed to have single-peaked preferences. Of course, the latter is a strong assumption, given that many EU policies have a strong distributional character and are thus prone to cyclical majorities and unstable voting outcomes. The fact that we cannot observe a high degree of instability, resulting in frequent revisions of decisions, seems to be the result of extensive logrolling. The Franco-German alliance is a manifestation of such a policy.

7 Social Choice and Paradoxes of Representation

Voting is a mechanism for aggregating preferences. It also forms a link between the two approaches to voting power singled out by Rapoport in his Foreword to PVVP 1982, i.e., game theory and social choice, mentioned above. Voting is often modelled as a game with voters as players and ballots as strategies to choose from. However, voting is a very imperfect way of aggregating preferences if we impose the conditions that Arrow used in his General Possibility Theorem (1963[1951]). Overall, social choice theory is notorious for its many negative results that demonstrate the incompatibility of various choice desiderata. The outcomes of aggregation under given choice rules do not always seem to reflect the individual opinions in a plausible way. Should we then take preferences into account at all when discussing social decision mechanisms? Although it can be debated whether the analysis of power should take preferences into consideration, it seems obvious that decisions reflect preferences (and perhaps power). At least they should, lest the fundamental democratic principle of “going to the people” be undermined. This should also apply to collective decisions, based on social preferences, unless we argue that “policy is merely a random business.” Reasonable choice rules establish a relationship between individual preferences and social preferences, but, as Arrow proved, this relationship is not always straightforward. Social preferences that have the same properties as individual preferences may not exist. In particular, majorities may exhibit properties that would be regarded as irrational when found in individuals. The Condorcet cycle teaches us that the pairwise majority aggregation of individual preference relations, which satisfy transitivity, may lead to intransitivity in the aggregate. While \( (\text{A} \succ \text{B}) \,\&\, (\text{B} \succ \text{C}) \Rightarrow (\text{A} \succ \text{C}) \) is widely accepted as a minimum requirement of rational behaviour, and not only by social choice theorists, it could well be that we get a Condorcet cycle \( (\text{A} \succ \text{B}) \,\&\, (\text{B} \succ \text{C}) \,\&\, (\text{C} \succ \text{A}) \) for the society when aggregating well-ordered individual preferences. We get intransitivity of the social preferences and, as a result, inconsistent decisions. However, to conclude that the society is “irrational” ascribes too much individuality to it. There are different groups behind the social rankings \( (\text{A} \succ \text{B}) \), \( (\text{B} \succ \text{C}) \), and \( (\text{C} \succ \text{A}) \): \( (\text{A} \succ \text{B}) \) might be supported by a majority that consists of x- and y-voters, \( (\text{B} \succ \text{C}) \) by a majority that consists of x- and z-voters, and \( (\text{C} \succ \text{A}) \) by a majority that consists of y- and z-voters, all voters choosing in accordance with their preference orders.
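The cycle is easy to reproduce (Python; the three one-voter groups stand for the x-, y- and z-voters of the text):

```python
# Three voter groups with transitive individual rankings (best first):
profile = {
    ("A", "B", "C"): 1,  # the x-voters
    ("C", "A", "B"): 1,  # the y-voters
    ("B", "C", "A"): 1,  # the z-voters
}

def majority_prefers(a, b):
    """True if more voters rank a above b than b above a."""
    margin = sum(k if order.index(a) < order.index(b) else -k
                 for order, k in profile.items())
    return margin > 0

for a, b in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(a, "beats", b, ":", majority_prefers(a, b))
# All three comparisons print True: the majority relation cycles
# (A over B over C over A) although every individual ranking is transitive.
```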

Saari (2013) gives a general characterization of the preference profiles that result in such a cycle through the concept of the Ranking Wheel Configuration (RWC). Those preference profiles that do not form an RWC are “strongly transitive.” The RWC construction provides a way to understand basically all paradoxical results that are related to pairwise or, more generally, part-wise comparisons of alternatives. Eckert and Klamler (2013) apply Saari’s geometric approach to discuss paradoxes of majority voting. Ahlert and Kliemt (2013) demonstrate that “numbers may count” (e.g., of victims) in the ethical ranking of possible states of affairs. Ono-Yoshida (2013) tests selected solution concepts for multi-choice games and fuzzy games under the assumption that coalitions are binding. This assumption is paired with a bargaining model by Carreras and Owen (2013), who examine the possible proportionality of the Shapley rule, thus matching a model “where only the whole and the individual utilities matter,” assuming transferable utilities, with a concept that assumes coalition formation. The subsequent contribution by Vartiainen (2013) does not consider coalitions. However, the log-rolling equilibrium in Vartiainen can be identified with a grand coalition; the failure to achieve such a result implies that the society (i.e., the set of players) remains in a state of anarchy. In the concluding chapter, Schofield (2013) discusses instability and chaos in social decision-making, resuming the coalition framework, and illustrates the implications of the corresponding solution concepts with reference to climate change. This demonstrates the high analytical potential of the social choice tool kit even in the case of anarchy and chaos.

8 Power, Causality, and Responsibility

The concluding section of our reflections deals with an issue that is only indirectly covered by the contributions to this volume, i.e., the allocation of responsibility in collective decision-making. This is motivated by the expectation that, if the allocation of responsibility works, threats of punishment or promises of appreciation and honors may improve the results of collective decision-making. However, the specification of causality in collective decision-making with respect to the individual agent cannot be derived from the action and the result, as both are determined by the collectivity. It has to be traced back to the decision-making itself. But collective decision-making has a quality that differs substantially from individual decision-making. For instance, an agent may support his favored alternative by voting for another alternative or by not voting at all. The two volumes by Nurmi (1999, 2006) contain a collection of such “paradoxes.”

These paradoxes tell us that we cannot derive the contribution of an individual to a particular collective action from the individual’s voting behavior. Trivially, a vote is not a contribution, but a decision. Resources such as voting power, money, etc. are potential contributions, and causality might be traced back to them if collective action results. As a consequence, causality follows even from those votes that do not support the collective action. This is reflected in everyday language when one simply states that the Parliament has decided, when in fact the decision was made by a majority of less than 100 % of the votes. But how can we allocate causality if it cannot be derived from decisions?

Imagine a five-person committee \( N = \{1, 2, 3, 4, 5\} \) that makes a choice between the two alternatives x and y. The voting rule specifies that x is chosen if either (1) player 1 votes for x, or (2) at least three of the players 2–5 vote for x. Let us assume that all individuals vote for x. What can be said about causality? Clearly this is a case of over-determination inasmuch as there can be two “winning coalitions” at the same time, and the allocation of causation is not straightforward. The action of agent 1 is an element of only one minimally sufficient coalition, i.e., decisive set, while the actions of each of the other four members are elements of three decisive sets each. If we take membership in decisive sets as a proxy for causal efficacy and standardize so that the shares of causation add up to one, then the vector

$$ h^\circ = \left( {\frac{1}{13},\,\frac{3}{13},\,\frac{3}{13},\,\frac{3}{13},\,\frac{3}{13}} \right) $$

represents the degrees of causation. Braham and van Hees (2009: 334), who introduced and discussed the above case, conclude that “this is a questionable allocation of causality.” They add that “by focusing on minimally sufficient coalitions, the measure ignores the fact that anything that players 2–5 can do to achieve x, player 1 can do, and in fact more–he can do it alone.”

Let us review the above example. Imagine that x stands for polluting a lake. Now the lake is polluted, and all five members of N are under suspicion of having polluted it. Then h° implies that the share of causation of player 1 is significantly smaller than the shares of causation of each of the other four members of N. If responsibility, and perhaps sanctions, follow causation, then this allocation seems pathological, at least at first glance. One might, however, argue that one of the smaller members of N could send its garbage to the lake hoping that the lake will not show pollution, while this is not possible in the case of player 1. Given that the costs of cleaning will be assigned to the members of N, player 1’s expected benefits from sending its garbage to the lake might be much smaller than the expected benefits of the smaller members.

Perhaps this argument looks somewhat farfetched, but it parallels the “tragedy of the commons” and related “paradoxes” of social interaction. However, Braham and van Hees (2009) propose to apply the weak NESS concept instead of the strong one, i.e., not to refer to decisive sets, but to consider sufficient sets instead and to count how often an element i of N is a “necessary element of a sufficient set” (i.e., a NESS). Taking care of an adequate standardization so that the shares add up to 1, we get the following allocation of causation:

$$ b^\circ = \left( {\frac{11}{23},\,\frac{3}{23},\,\frac{3}{23},\,\frac{3}{23},\,\frac{3}{23}} \right) $$

The result expressed by b° looks much more convincing than the result proposed by h°, does it not? Note that the b-measure and the h-measure correspond to the Banzhaf index and the PGI, respectively, and can be calculated accordingly.

If our intuition refers to the capacity to influence the outcome, which differentiates the players, then the numerical results seem to support the weak NESS test and thus the application of the Banzhaf index. However, what happened to alternative y? If y represents “no pollution,” then the set of decisive sets consists of all subsets of N that are formed of the actions of agent 1 and the actions of two out of agents 2–5. Thus the actions of agent 1 are members of six decisive sets, while the actions of each of agents 2–5 are members of three decisive sets. The corresponding shares are given by the vector

$$ h^* = \left( {\frac{2}{6},\,\frac{1}{6},\,\frac{1}{6},\,\frac{1}{6},\,\frac{1}{6}} \right) $$

Obviously, h* looks much more convincing than h°, and the critical interpretation of Braham and van Hees (2009) no longer applies: agent 1 cannot bring about y on its own, but it can cooperate with six different pairs of other agents to achieve this goal.

Note that the actions (votes) bringing about x represent an improper game—two “winning” subsets can co-exist—while the determination of y can be described as a proper game. However, if there are only two alternatives x and y, then “not x” necessarily implies y, irrespective of whether the (social) result is determined by voting or by polluting. The h-values indicate that it seems to matter which issue we analyze and which questions we raise, while the Banzhaf index with respect to y is the same as for x: b° = b*.
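The whole example can be verified mechanically. The following sketch (Python; our own illustration) computes both the strong-NESS shares (memberships in decisive sets, the h-vectors) and the weak-NESS shares (swing counts, the b-vectors) for the five-person committee:

```python
from itertools import combinations

N = range(5)  # players 1-5 of the text are indices 0-4 here

def x_wins(coal):
    """x is chosen if player 1 votes for x or at least three of players 2-5 do."""
    return 0 in coal or len(coal - {0}) >= 3

def y_wins(coal):
    # with only two alternatives, y is chosen iff the rest cannot bring about x
    return not x_wins(set(N) - coal)

def shares(wins):
    coalitions = [set(c) for k in range(1, 6) for c in combinations(N, k)]
    # strong NESS: memberships in minimally sufficient ("decisive") sets
    decisive = [c for c in coalitions
                if wins(c) and all(not wins(c - {i}) for i in c)]
    h = [sum(i in d for d in decisive) for i in N]
    # weak NESS: swings, i.e. being necessary in some sufficient set
    b = [sum(1 for c in coalitions if i in c and wins(c) and not wins(c - {i}))
         for i in N]
    normalize = lambda v: [x / sum(v) for x in v]
    return normalize(h), normalize(b)

print(shares(x_wins))  # h° = (1/13, 3/13, ...), b° = (11/23, 3/23, ...)
print(shares(y_wins))  # h* = (2/6, 1/6, ...),  b* = b°
```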

From the above example we can learn that nonmonotonicity might indicate that we have perhaps asked the wrong question: Does the responsibility pertain to keeping the lake clean or to polluting it and then perhaps sharing the costs of cleaning it? To conclude, the PGI, and thus the strong NESS concept, may produce results that are counterintuitive at first glance. However, in some decision situations they seem to tell us more about the power structure and the corresponding causal attribution than the Banzhaf index and the corresponding weak NESS concept do.

In the Republic of San Marino, every six months the proportionally elected multi-party Council selects two Captains to be the heads of state. These Capitani Reggenti are chosen from opposing parties so that there is a balance of power. They serve a six-month term, and a subsequent re-election is not possible. Once their six-month term is over, citizens have three days to file complaints about the Captains’ activities. If the complaints are warranted, judicial proceedings against the ex-head(s) of state can be initiated. Should the European Court of Justice evaluate the policies of the Council and the EP? Perhaps impartial commenting could help to make voters more aware of EU decision-making and thus increase political responsibility. However, if responsibility is to work, there have to be more effective ways for voters to hold their representatives accountable than voting every four years.