1 Introduction

We begin by reviewing the theory of two-person bargaining and three-person bargaining over a unit of wealth in the presence of outside options. We then explore the implications and developments in the literature for bargaining games with more than three players, and in particular, for legislative bargaining games in a democratic assembly with many members who decide by majority rule how to divide a unit of wealth, or an infinite sequence of units of wealth.

The Nash bargaining solution applies to bargaining problems between two rational players who have no secrets from each other (Nash 1950). Such solution concepts—characterized by a set of axioms—are classified as being part of cooperative game theory. Cooperative game theory is often mistakenly regarded as a rival to noncooperative theory but Nash saw cooperative and noncooperative game theory as complementary approaches in which the strengths of one approach can buttress the weaknesses of the other.

Cooperative game theory presupposes a preplay negotiation period during which the players come to a binding agreement on how a game is to be played. However, all this preplay activity is packed away in a black box during a cooperative analysis. The strength of the approach is that it is often possible (as in the case of the Nash bargaining solution) to obtain a simple characterization of what deal rational players will reach. Its weakness is that it fails to explain why rational players will honor the axioms that support one solution concept rather than the axioms that support an alternative solution concept (like the Kalai-Smorodinsky (1975) bargaining solution).

1.1 The Nash Program

The Nash program invites us to open the black boxes of cooperative game theory to see whether the mechanism inside really does work in the way the axioms characterizing a cooperative solution concept assume. Nash observed that the details of the negotiation process we will find inside such a black box determine a noncooperative game, in which the moves are everything the players may say or do while bargaining. If we model any bargaining that precedes a game \(\mathcal{G}\) in this way, the result is an enlarged game \(\mathcal{N}\). A strategy for this negotiation game first tells a player how to conduct the preplay negotiations, and then how to play \(\mathcal{G}\) depending on the outcome of the negotiations. Negotiation games must be studied without presupposing preplay bargaining, all preplay activity having already been built into their rules. Analyzing them is therefore a task for noncooperative game theory. This means looking for their Nash equilibria, in the hope that the equilibrium selection problem won’t prove too difficult when there is more than one equilibrium.

When a negotiation game \(\mathcal{N}\) can be solved successfully, we have a way of checking up on what a cooperative solution concept tells us about the rational outcome of \(\mathcal{G}\). If a cooperative solution concept says that the result of a rational agreement on how to play \(\mathcal{G}\) will be \(s\), then \(s\) should also result from solving \(\mathcal{N}\) as a noncooperative game. If it does not, then we have a mismatch between our cooperative axioms and the economic environment in which they are being applied.

Two mistakes are commonplace. The first is to conclude that we need not bother with cooperative theory at all, and should instead always undertake the impossible task of modeling all bargaining noncooperatively. This is impossible because a negotiation game sufficiently general to capture each twist and turn that a real-life negotiation might conceivably take would be complicated beyond all imagining. One can only analyze simplified negotiation games—as in Nash (1951) or Rubinstein (1982)—and hope that one is right in thinking that the simplified game captures all the strategic factors that really matter. In practice, Rubinstein’s model is usually substituted for the actual negotiation game, but little is then gained over applying the Nash bargaining solution directly, since the Rubinstein game is already known to implement it (Binmore 2007). This observation leads to the second mistake, which is to proceed as though we need not bother with noncooperative theory at all. The next section explains why this attitude—which is orthodox in the matching-and-bargaining literature—sometimes leads to the parameters that characterize a bargaining problem being written wrongly into the Nash bargaining solution used to predict its outcome.

2 Outside Options

Nash (1950) models a two-person bargaining problem as a feasible set of payoff pairs on which the players might agree, and a status quo payoff pair that represents what each player will get in the event of a disagreement. Sometimes this model is adequate and sometimes not. It is not adequate, for example, when information is incomplete. Nor is it adequate when time matters and the players discount time at different rates. Nor when disagreement can arise in more than one way. We focus here on a simple version of the last case.
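For reference, with a feasible set \(X\) of payoff pairs and a status quo \(d = (d_1, d_2)\), the (symmetric) Nash bargaining solution selects the agreement that maximizes the product of the players' gains over the status quo:

\[
\max_{(u_1, u_2) \in X,\; u_1 \ge d_1,\; u_2 \ge d_2} \;(u_1 - d_1)(u_2 - d_2).
\]

The modeling question taken up below is which payoff pair should play the role of \(d\) when disagreement can arise in more than one way.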

Suppose that two disagreement points can be distinguished in a bargaining problem: a deadlock point and a breakdown point. The deadlock point is the payoff pair that would result if the players bargained forever without breaking off the negotiations or reaching an agreement. The breakdown point is the payoff pair that would result if one player were to abandon the negotiations irrevocably with the result that both players take up their best outside option.

The breakdown point and the deadlock point may be the same, but when they are not it is commonplace in studying wage bargaining to identify the status quo in the Nash bargaining solution with the breakdown point. When is this sound modeling practice? Two noncooperative models can be used to examine this question.

Nash’s Demand Game In this game, two players make simultaneous take-it-or-leave-it demands (Nash 1951). If the demands are jointly infeasible, all bargaining is over and cannot be resumed. The players will then take up their best outside options (that may include continuing to cooperate with their bargaining partner on the same basis as before they failed to agree on a new contract). Identifying Nash’s status quo with the breakdown point then makes sense.

Rubinstein’s Alternating-Offers Game But who believes people who say they are making their last and final offer? What prevents your bargaining partner from refusing your offer, and then making a counter-offer before you have the chance to commit to an outside option? This natural feature can be built into Rubinstein’s (1982) alternating-offers bargaining model without difficulty (Binmore et al. 1982). If the players discount time at equal rates and the interval between successive offers is sufficiently small, the subgame-perfect equilibrium outcome approximates the Nash bargaining solution for the case when all deals that pay players less than their best outside option have been removed from the set of feasible agreements. The status quo in this use of the Nash bargaining solution is placed at the deadlock point, which corresponds to the payoffs the players would receive if all offers were always refused and no outside option were ever taken up.
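The following is a minimal numerical sketch (our own illustration, with hypothetical numbers) of the difference between the two placements of the status quo. A unit surplus is divided, the deadlock point is the origin, and player 1's outside option is 0.4; a crude grid search over the Pareto frontier stands in for the exact maximization.

```python
# Minimal numerical sketch (hypothetical numbers, not from the text) contrasting
# the two placements of the status quo in the Nash bargaining solution.
# Surplus to divide: 1.  Deadlock point: (0, 0).  Outside options: b = (0.4, 0.0).

def nash_solution(feasible, d):
    """Point of `feasible` maximizing the Nash product relative to status quo d."""
    return max(feasible, key=lambda u: (u[0] - d[0]) * (u[1] - d[1]))

# Discretized Pareto frontier u1 + u2 = 1.
frontier = [(i / 1000, 1 - i / 1000) for i in range(1001)]
b = (0.4, 0.0)

# (i) Status quo at the deadlock point; outside options only truncate the
#     feasible set (the alternating-offers prediction).  Result: (0.5, 0.5),
#     so the non-binding outside option is irrelevant.
truncated = [u for u in frontier if u[0] >= b[0] and u[1] >= b[1]]
print(tuple(round(x, 3) for x in nash_solution(truncated, d=(0.0, 0.0))))

# (ii) Status quo moved to the breakdown point b (the practice criticized in
#      the text).  Result: (0.7, 0.3), so player 1's outside option is "split"
#      with his partner, overstating its effect on the bargain.
print(tuple(round(x, 3) for x in nash_solution(frontier, d=b)))
```

Only when an outside option exceeds the unconstrained Nash share does it matter, and then it does so as a constraint on the feasible set rather than as a shift of the status quo.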

The alternating-offers game so obviously fits wage bargaining better than the demand game that it remains a wonder that the matching-and-bargaining literature should hold so firmly to the practice of locating the status quo in the Nash bargaining solution at the breakdown point rather than the deadlock point.

3 Three-Player Bargaining

What happens when the conclusion of the preceding section on two-person bargaining with outside options is applied to bargaining among more than two players?

A new problem then arises because deals may now be reached that do not include all the bargainers. We therefore have to determine not only how much each signatory of an agreed contract receives, but also who is excluded from the winning coalition (or coalitions). The latter problem is endemic in political science. Many coalitions might form, but which coalition will actually form as the result of rational bargaining? Unfortunately, noncooperative bargaining models adapted to this problem commonly have multiple equilibria, and so a determinate answer is available only in special cases.

The three-player/three-cake problem studied in Binmore (1986) serves to illustrate the latter issue. Only one of the three feasible cakes is available for division at the end of the bargaining session. Each cake is controlled by a different pair of players, as illustrated in Fig. 1 (where the cake \(Y_{ij}\) is controlled by players \(i\) and \(j\)). The aim of player \(i\) in the bargaining game is to reach an agreement with another player \(j\) that determines that the cake \(Y_{ij}\) is to be divided and how much each player will then receive. The outvoted third player \(k\) then gets nothing. We simplify by placing the deadlock point at the origin. Even though we assume that the players have no external outside options, the breakdown situation is more complicated because the outside option for \(i\) when bargaining with \(j\) is the payoff that \(i\) would receive by abandoning \(j\) and making a deal with \(k\).

3.1 Cooperative Approach

Two distinct cases are identified in Fig. 1.

Fig. 1

Three-player/three-cake problem. The deadlock point is placed at the origin, and individual players have no external outside options. However, outside options arise endogenously because when two players make a deal, they always have the alternative of proposing a deal to the third player, who will otherwise be left with nothing. The feasible set \(Y_{ij}\) is what is available to the coalition \(\{i,j\}\). The left diagram shows a case in which a Von Neumann and Morgenstern triple exists. When such a triple \(\{v_{12}, v_{23}, v_{31}\}\) exists, the final outcome is one of its three elements. The right diagram shows a case in which no Von Neumann and Morgenstern triple exists. Only one coalition can then form. In the figure, this coalition is \(\{3,1\}\), and the final outcome is the Nash bargaining solution \(n\) when the feasible set is \(Z_{31}\) and the status quo is 0.

Von Neumann and Morgenstern Triple The left diagram of Fig. 1 illustrates the case in which a Von Neumann and Morgenstern triple \((v_{12}, v_{23}, v_{31})\) exists. Suppose that 1 and 2 are bargaining in this case with a view to excluding 3. With the second bargaining model of Sect. 2, they will then agree on the Nash bargaining solution with status quo at the deadlock point 0 for the set \(Y_{12}\) with all points that pay 1 or 2 less than their outside options removed. When these outside options are what they would get from \(v_{23}\) and \(v_{31}\), this set \(Z_{12}\) consists only of the single point \(v_{12}\), which is therefore the deal on which 1 and 2 will agree should they get together to bargain.

An identical argument shows that 2 and 3 will agree on \(v_{23}\) if they get together to bargain. Similarly, 3 and 1 will agree on \(v_{31}\). But which of the three two-player coalitions will actually form is left unspecified. In practice, this question is determined by historical or accidental factors that may easily vary over time and are not included in the model.

Two Buyers and One Seller The right diagram of Fig. 1 illustrates a case with no Von Neumann and Morgenstern triple. The theory then predicts that the coalition \(\{3,1\}\) will form. When 3 and 1 bargain, their outside options will be the most that 2 can offer each of them. They will therefore agree on the Nash bargaining solution with status quo at the deadlock point 0 for the set \(Z_{31}\) (which is \(Y_{31}\) with points that pay less than a player's best outside option removed). In Fig. 1, the outcome is not what would be obtained by placing the status quo at the point at which both players receive their best outside options. It is exactly the same as it would be if the players had no outside options at all.

A particularly interesting case arises when 1 and 2 are rival buyers, and 3 is the only available seller of a good. The set \(Y_{12}\) then contains only the origin. This will make no difference to the preceding result unless \(Y_{23}\) in Fig. 1 is enlarged so that 2 can offer 3 more than 3 gets at \(n\). Applying the Nash bargaining solution with status quo at 0 to \(Z_{31}\) then results in 3 getting no more than his outside option and 1 getting the rest of what is available. That is to say, 3 sells to 1 at the highest price that 2 is willing to pay, which is the perfectly competitive outcome. The argument that leads to this conclusion is a retelling of the story usually offered when studying Bertrand competition.
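As a stylized numerical illustration (the numbers are ours, not taken from the figure): suppose the cake \(Y_{31}\) lets 3 and 1 split a surplus of 10, the enlarged cake \(Y_{23}\) lets 2 and 3 split 8, and \(Y_{12}\) contains only the origin. Seller 3's outside option while bargaining with 1 is then 8, so the Nash bargaining solution with status quo at 0 applied to \(Z_{31}\) solves

\[
\max_{x_1 + x_3 \le 10,\; x_3 \ge 8} \; x_1 x_3 \quad\Longrightarrow\quad (x_1, x_3) = (2, 8),
\]

so that 3 gets exactly what the losing buyer 2 could offer, and 1 keeps the remaining surplus.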

3.2 Noncooperative Approach

The results of the previous section are defended in Binmore (1986) using a noncooperative model. Section 2 draws attention to the need to ask whether Nash's Demand Game or Rubinstein's Alternating-Offers Game (or some other game) fits a particular application best. With more than two players, it is necessary to make a further distinction between telephone bargaining models and market bargaining models. The literature commonly appeals to the Telephone Game [as in Chatterjee et al. (1992)], but the Market Game fits many applications better.

Telephone Bargaining Game This game is a direct extension of Rubinstein's Alternating-Offers Game to the three-player/three-cake problem. A player phones one of the other players and they exchange offers until agreement is reached, or else one of the players hangs up (exercises his outside option) and phones the third player, whereupon the situation repeats itself. This model fits applications like the wage-bargaining problem studied by Shaked and Sutton (1984), in which the employer has to bargain with employees one by one. He cannot then bring his full bargaining power to bear, because an employee can respond to the threat of being replaced by observing that the employer would then find himself in exactly the same bargaining situation as at present, but one period later.

But what compels the employer to honor the rules of a Telephone Game? Why does he not find some way of getting his offers to both prospective employees simultaneously, so that they find themselves competing against each other in what effectively becomes an auction? The next model allows players this freedom.

Market Bargaining Game It is easiest to have the players rotate in having the initiative. If a player with the initiative refuses the most recent demands made by both of his predecessors, he then gets to make a demand of his own. Binmore (1986) shows that this Market Game implements all the natural results described in Sect. 3.1.

4 Many Players: Legislative Bargaining

With more than three players, the \(n\)-player version of the Nash bargaining solution can be used, although it is no longer true that it is necessarily implemented by subgame-perfect equilibria of the corresponding Alternating-Offers Game. It is necessary to restrict attention to stationary (Markov) subgame-perfect equilibria for this purpose. More importantly, there is the additional problem that the approach neglects the possibility that coalitions of fewer than \(n\) players might form. Even if such coalitions do not form, the fact that they might form will often influence the deal finally reached by the grand coalition of all \(n\) players.

Consider, for example, the three-player/four-cake problem, which adds to the three-player/three-cake problem a large fourth cake that can be divided among all three players. A greedy player can then be held in check by the prospect of the other two players agreeing on a coalition from which he is excluded. This problem can be solved by applying the three-player Nash bargaining solution to the fourth cake from which all outcomes that give players less than they could get by abandoning the grand coalition for a smaller coalition have been removed.
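A minimal computational sketch of this solution, with hypothetical numbers: the grand-coalition cake has size 1, and the reservation values that the smaller coalitions generate for each player are simply taken as given.

```python
# Minimal sketch (hypothetical numbers) of the three-player/four-cake solution
# described above: the symmetric three-player Nash bargaining solution, status
# quo at the origin, applied to the grand-coalition cake after removing every
# allocation that gives some player less than his smaller-coalition prospects.
import itertools

CAKE = 1.0                     # size of the fourth (grand-coalition) cake
o = (0.20, 0.50, 0.10)         # assumed reservation values from two-player coalitions
step = 0.01
grid = [round(k * step, 2) for k in range(int(CAKE / step) + 1)]

best, best_value = None, -1.0
for x1, x2 in itertools.product(grid, grid):
    x3 = round(CAKE - x1 - x2, 2)
    if x3 < 0:
        continue
    x = (x1, x2, x3)
    if any(xi < oi for xi, oi in zip(x, o)):   # blocked by a smaller coalition
        continue
    value = x[0] * x[1] * x[2]                 # Nash product with status quo at 0
    if value > best_value:
        best, best_value = x, value

print(best)   # (0.25, 0.5, 0.25): player 2's coalition prospects hold the others in check
```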

Unfortunately, combinatorial problems arise when one goes beyond three players. The multiplicity of possible bargaining outcomes already noted in discussing Von Neumann triples then extends not only to determining who will be excluded from a coalition, but also to the overall coalitional structure.

In applications to collective decision-making in legislative assemblies, we need a bargaining theory that can (a) accommodate more than three players, and (b) recognize that not all players must be party to the agreement, since to pass a policy it suffices to obtain the approval of a majority (or of a supermajority, as determined by the assembly's rules) of legislators. The \(n\)-player version of the Nash bargaining solution is then less appealing in these applications.

Consider a legislative assembly (a Parliament; a Congress; a National, State, Regional or Municipal Assembly) that must make a collective decision of a distributive nature, such as how to allocate an existing budget among various alternative projects. In the simplest case, which most closely resembles two-agent bargaining, each agent is interested in one and only one project and wishes to maximize the funds devoted to it, so that agents have purely antagonistic preferences over the division of the budget, without cross-externalities. The standard approach to studying this strategic environment is to model it as an infinitely repeated sequential-offers game: in each round a player is randomly selected to propose an allocation of the budget, players vote simultaneously whether to approve or reject this proposal, and the first proposal to gain a majority of approving votes is implemented. Baron and Ferejohn (1989) showed that in a stationary solution to this game, a proposal passes in the first round and allocates the whole budget to a minimal winning coalition, with a disproportionately large share for the project favored by the first proposer.
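For concreteness, here is a minimal sketch of the familiar symmetric stationary equilibrium shares in the closed-rule version of this game (our illustration; the closed rule, an odd number of legislators, equal recognition probabilities and a common discount factor \(\delta\) are simplifying assumptions made here, not features spelled out in the text): by symmetry each legislator's ex-ante continuation value is \(1/n\), so the proposer buys \((n-1)/2\) votes at a price of \(\delta/n\) each and keeps the rest.

```python
# Minimal sketch (not a general implementation of the cited papers) of the
# symmetric stationary equilibrium shares in the closed-rule Baron-Ferejohn
# game: n identical legislators (n odd), equal recognition probabilities,
# simple majority rule, common discount factor delta.

def baron_ferejohn_shares(n: int, delta: float):
    assert n % 2 == 1 and 0.0 <= delta <= 1.0
    partners = (n - 1) // 2              # votes the proposer must buy
    partner_share = delta / n            # just enough to make a partner accept
    proposer_share = 1.0 - partners * partner_share
    return proposer_share, partner_share

# With n = 5 and delta = 0.9 the proposer keeps 0.64 and pays 0.18 to each of
# two coalition partners; the two excluded legislators get nothing.
print(tuple(round(s, 2) for s in baron_ferejohn_shares(n=5, delta=0.9)))
```

The proposer's advantage grows as \(\delta\) falls, since impatient partners accept smaller shares.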

While there are other stationary (and many non-stationary) equilibria besides the one described by Baron and Ferejohn (1989), Eraslan (2002) strengthened the result by showing that the payoff to each agent is unique across all stationary equilibria. This payoff uniqueness holds for a more general class of games allowing for unequal discount factors, unequal recognition probabilities and supermajority voting rules [or in fact for any arbitrary definition of the collection of winning coalitions, as in Eraslan and McLennan (2013)]. Merlo and Wilson (1995) introduce a budget of stochastic size, which together with a unanimity rule can lead to delays, as all players sometimes prefer to wait in the hope that the budget will grow; with a stochastic budget and majority rule, there are multiple equilibria (Eraslan and Merlo 2002). Eraslan and McLennan (2013) provide an extensive list of references.

Consider the class of coalition formation games in characteristic form (each coalition or subset of agents has an aggregate worth) with transferable utility, in which in each round a proposer is randomly selected to propose a coalition and an allocation of the coalition's aggregate worth among its members, and if all members agree the coalition forms and exits the game, which continues among the remaining players (Okada 1996, 2011). Notice that the legislative bargaining game we have discussed in the previous two paragraphs is a special case in which any majority coalition has aggregate worth of one, and any minority coalition has aggregate worth of zero, and thus we can interpret legislative bargaining games as coalition formation games.
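In characteristic-function form, with \(n\) legislators and a quota of \(q\) votes needed to pass a proposal, the legislative divide-the-dollar game just described assigns to each coalition \(S\) the worth

\[
v(S) = \begin{cases} 1 & \text{if } |S| \ge q, \\ 0 & \text{otherwise,} \end{cases}
\]

where \(q = \lfloor n/2 \rfloor + 1\) under simple majority rule.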

A common feature of the results of all these theories with randomly selected proposers, both the voting-based ones and the coalition-formation ones, is that whoever gets to be the first proposer (or the first proposer in a period with a large budget, if the budget is stochastic) is disproportionately advantaged. A question arises: since being the proposer is so important, why would the proposer be randomly selected, rather than endogenously chosen by the assembly itself? Note that we can ask the same question about the Telephone game (whoever turns down an offer gets to propose next) or the Market game (players make offers in turns): why would an assembly choose any of these particular protocols, and not others more to its own liking? Inspired by the US Congress, Breitmoser (2011) assumes that a subset of players (committee members) are sure to be recognized first to make a proposal, in order of seniority; Ali et al. (2014) consider a more flexible setup in which the next proposers are somewhat predictable, rather than certain. McKelvey and Riezman (1992) (and later McKelvey and Riezman 1993; Muthoo and Shepsle 2014; Eguia and Shepsle 2015) endogenize proposal rules to explain why senior legislators are more likely to emerge as early-round proposers: incumbent legislators choose pro-seniority rules in order to gain an electoral advantage in the eyes of constituents. Because a potential challenger—if elected—would have no seniority while the incumbent has seniority, rules that give seniors more power to obtain resources for their districts make incumbents more appealing to their constituents.

A different view is that incumbents can’t just vote themselves into a favored status; perhaps being a proposer is a privilege that has to be earned, and each legislator’s probability of being recognized as a proposer is proportional to the effort that the legislator invests in gaining agenda power (Yildirim 2007, 2010). The bargaining protocol could also dispense with proposers altogether: in demand bargaining, agents sequentially place budgetary demands, and a receiver (called “formateur” in applications to government formation) decides whether it is worth putting together a majority by satisfying enough of these demands (Morelli 1999). In such demand bargaining models, the payoffs are usually more proportional to the voting weights of the agents, and the receiver is not able to extract as large a fraction of the budget as the proposer in alternating-proposal protocols.

4.1 Bringing in the Outside Option

In many applications to private bargaining between two or perhaps three agents, as discussed in the previous sections, it makes sense for the deadlock and breakdown outcomes to be exogenously given: if buyer and seller (or the buyer, seller, and bank lender) do not agree on how to divide the surplus of a transaction such as a house sale, no transaction occurs and they each get zero surplus. A payoff of zero is then the outside option for each player, and both deadlock (indefinite negotiations without resolution) and breakdown (end of negotiations without agreement) lead to this outside option.

In the standard legislative bargaining game with sequential offers (Baron and Ferejohn 1989), legislators have no option to walk out, so the game has no breakdown outcome. With time discounting, the deadlock outcome is zero for every player: time discounting, like inflation, makes the value of the budget shrink a little in each period, so that all value ultimately vanishes. But this is not a realistic description of how public finance works: it is more often the case that if legislators do not agree on an allocation of the budget, the previous period's budget is implemented.

Epple and Riordan (1987) pioneered a dynamic model in which players bargain over an infinite sequence of periods, with a new budget to be allocated in each period. In each period, if an offer is turned down, the previous period's allocation is implemented. The only exogenous bargaining-failure outcome is the one for the first period; in all subsequent periods, the bargaining-failure outcome is endogenous. In this manner, the outside option has been brought inside the model. We may call the endogenous outside option the “reversion” point, “status quo” outcome, or “default policy.” It is in any case the outside option—which now varies by period—exercised in case of bargaining failure within a period.

Epple and Riordan’s (1987) approach is now mainstream, but it might have been ahead of its time: for nearly two decades it was largely ignored, until Kalandrakis (2004) sparked a flurry of work on dynamic bargaining with an endogenous outside option. The main difference between the two is that Epple and Riordan (1987) use the Market protocol (players take turns to make proposals), whereas Kalandrakis uses the random-proposer protocol. Kalandrakis (2004, 2010) constructs a stationary equilibrium in which, in each period, the period's proposer gets the whole budget for the period; on the other hand, in a model in which the proposer is always the same, the proposer does not get the whole budget in any period (Diermeier and Fong 2011). Penn (2009) finds that in a model with (exogenous) random proposals, the most frequent outcomes approximately split the budget evenly between a minimal winning majority of agents. Returning to the model with strategic proposals, Bowen and Zahran (2012) construct equilibria that distribute the period budget evenly among a supermajority of the players; allowing for waste, Richter (2014) constructs an equilibrium with a fully egalitarian division of the budget in every period. In laboratory experiments, subjects prefer to allocate the budget evenly within a minimal winning majority rather than evenly across all players (Battaglini and Palfrey 2012).
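To fix ideas, here is a minimal sketch of the random-proposer protocol with an endogenous status quo (our own skeleton, not the equilibrium analysis of the cited papers); the proposal and voting rules `propose` and `accepts` are deliberately naive placeholders.

```python
# Minimal protocol sketch of divide-the-dollar dynamic bargaining with an
# endogenous status quo, random-proposer variant: each period a proposer is
# drawn at random, a majority vote decides between the proposal and the
# standing allocation, and whatever is implemented becomes next period's
# status quo.  The strategies below are naive placeholders, not the
# equilibrium strategies constructed in the cited papers.
import random

def propose(proposer, status_quo, n):
    # Placeholder: the proposer claims the entire budget.
    return [1.0 if i == proposer else 0.0 for i in range(n)]

def accepts(i, proposal, status_quo):
    # Placeholder myopic rule: accept if not made worse off than the status quo.
    return proposal[i] >= status_quo[i]

def simulate(n=5, periods=10, seed=0):
    rng = random.Random(seed)
    status_quo = [0.0] * n            # assumed initial default: nothing is allocated
    history = []
    for _ in range(periods):
        proposer = rng.randrange(n)
        proposal = propose(proposer, status_quo, n)
        yes_votes = sum(accepts(i, proposal, status_quo) for i in range(n))
        implemented = proposal if yes_votes > n // 2 else status_quo
        status_quo = implemented      # the endogenous status quo for next period
        history.append(implemented)
    return history

print(simulate())
```

With these myopic placeholders, every period's proposer ends up with the whole budget, which loosely echoes the flavor of Kalandrakis's construction; the actual equilibrium analyses instead require voters and proposers to anticipate how today's outcome shapes tomorrow's status quo, which is precisely what generates the multiplicity of equilibria discussed below.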

In more recent variations of the legislative divide-the-dollar dynamic bargaining game with an endogenous status quo, Nunnari (2015) introduces a veto player, who appropriates the full budget; Bowen et al. (2014) consider policy bundles that include both discretionary spending (whose status quo is exogenously fixed at zero) and mandatory spending with an endogenous status quo; and Jeon (2015) endogenizes the status quo policy and the probabilities of being recognized as the proposer jointly, by letting both be equal to the last period's implemented policy.

If we view multi-player bargaining as a game of dynamic coalition formation, in which the question is to identify the winning coalition that will seize control of the budget, then an endogenous status quo corresponds to a game in which players receive flow payoffs in each period while bargaining takes place, as in Konishi and Ray (2013), Gomes and Jehiel (2005) or Hyndman and Ray (2007). Building on these precedents, Ray and Vohra (2015) seek to unify the cooperative and non-cooperative approaches to coalition formation under their flexible framework of an “Equilibrium Process of Coalition Formation,” and they describe (in their Sect. 5.4) how to use this framework specifically to study bargaining.

Recall that if the outside option is exogenous, as in Baron and Ferejohn's (1989) model of legislative bargaining over a single budget, we obtain a sharp prediction: while there are multiple equilibria, equilibrium payoffs are the same across all equilibria (Eraslan 2002; Eraslan and McLennan 2013). In sharp contrast, if players bargain over an infinite sequence of budgets with an endogenous status quo, there are multiple stationary equilibria. By 2014, a decade after Kalandrakis (2004), his existence result based on an equilibrium in which the random proposer acts as an ephemeral dictator who obtains all of the period's resources had been supplemented by the construction of other stationary equilibria in which a supermajority (Bowen and Zahran 2012) or all players (Richter 2014) share resources equitably in each period. Anesi and Seidmann (2015) brought the literature full circle—and showed the depth of the equilibrium multiplicity problem—by demonstrating that almost any allocation can be supported in a Markov stationary equilibrium. In particular, any allocation that gives a positive amount to at least a majority of agents can be sustained.

Anesi and Seidmann’s (2015) powerful result leaves us with shattered predictive power. In stark contrast to Eraslan and McLennan's (2013) result that in games of bargaining over a single budget the payoff prediction of Baron and Ferejohn (1989) is unique and robust, Anesi and Seidmann establish the very opposite for bargaining over an infinite sequence of budgets with an endogenous status quo: the result that (almost) anything can happen is robust in a general framework. This “anything goes” prediction, also discussed in Baron and Bowen (2015), closes the research agenda of finding specific equilibria of the now-standard dynamic model, but it opens up another: in real-world applications it is not the case that anything happens in totally unpredictable fashion (budgets could be burned but are not, and in most assemblies some subsets of members are known to cooperate and coalesce with each other more frequently than others), and we still wish to explain and predict these coalitional patterns and bargaining outcomes.

One might conjecture that we need to include other elements, such as public good provision or an ideological dimension, in the choice set in order to obtain more realistic predictions. Inspiration could be drawn, for instance, from Baron et al.'s (2012) model with two-dimensional policies and side payments, or from Battaglini and Coate's (2007, 2008) models with public good provision, taxation and debt accumulation.

Dynamic bargaining theories with more general models that allow for preferences over multiple dimensions of ideology, public goods and distributive components have obtained existence and upper hemi-continuity of the equilibrium correspondence. Unfortunately, they have not delivered uniqueness or tight characterization results, either in models with a one-time policy choice (Banks and Duggan 2000, 2006) or in models with an infinitely repeated choice and an endogenous status quo, as in Duggan and Kalandrakis (2012). The title of Anesi and Duggan's (2015) “Existence and Indeterminacy of Markovian Equilibria in Dynamic Bargaining Games” is self-explanatory and summarizes the state of the art in dynamic legislative bargaining games with an endogenous status quo.

5 Conclusion

We have reviewed theoretical solutions to bargaining problems with two, three and many players. We have highlighted how the cooperative and non-cooperative approaches relate to each other, the importance of outside options, and how in dynamic models the outside option can become endogenous.

After four decades of research, the basic incentives in static bargaining problems are well understood. Nevertheless, in dynamic applications with an endogenous default outcome, many state-of-the-art theories deliver limited predictive power, as their set of equilibrium solutions is large. The quest for sharper predictions remains an open research agenda.