Introduction

The term ‘cooperative equilibria’ has been imported into economics from game theory. It refers to the equilibria of economic situations modelled by means of cooperative games and solved by appealing to an appropriate cooperative solution concept. The influence is not entirely one way, however. Many game theoretic notions (e.g. Cournot–Nash equilibrium, the Core) are formalizations of pre-existing ideas in economics.

The distinguishing feature of the cooperative approach in game theory and economics is that it does not attempt to model how a group of economic agents (say a buyer and a seller) may communicate among themselves. The typical starting point is the hypothesis that, in principle, any subgroup of economic agents (or perhaps some distinguished subgroups) has a clear picture of the possibilities of joint action and that its members can communicate freely before the formal play starts. Obviously, what is left out of cooperative theory is very substantial. The justification, or so one hopes, is that the drastic simplification brings to centre stage the implications of actual or potential coalition formation. In their classic book, von Neumann and Morgenstern (1944) already emphasized that the possibility of strategic coalition formation was the key aspect setting games with two players apart from those with three or more.

The previous remarks emphasize free preplay communication as the essential distinguishing characteristic of cooperative theory. There is a second feature, common to most of the literature, which may nonetheless not be intrinsic to the theory (this the future will determine). We refer to the assumed extensive ability of the players in a coalition to commit to a course of action once an agreement has been reached.

The remaining exposition is divided into three sections. Sections “The Dominance Approach” and “The Valuation Approach” discuss the two main approaches to cooperative theory (domination and valuation, respectively). Section “Consistency Qualifications” contains qualifications to the domination approach.

An excellent reference for the topic of this entry is Shubik (1983).

The Dominance Approach

Suppose we have N economic agents. Every agent i has a strategy set Si. Denote S = S1 × … × SN, with generic element s = (s1, …, sN). Given s and a coalition C ⊂ N, the expression sC denotes the strategies corresponding to the members of C. Letting C′ be the complement of C, the expression (sC, sC′) defines s in the obvious way. For every i there is a utility function ui(s). If u = (u1, …, uN) is an N-list of utilities, expressions such as uC or (uC, uC′) have the obvious meaning.

Example 1 (Exchange economies): There are N consumers and l desirable goods. Each consumer has a utility function ui(xi) and initial endowments ωi. A strategy of consumer i is a non-negative vector si = (si1, …, siN), each sij being a bundle of the l goods, such that \( {\sum}_{j=1}^N{s}_{ij}\leqslant {\omega}_i \), i.e. si is an allocation of the initial endowments of i among the N consumers. Of course, \( {u}_j(s)={u}_j\left({\sum}_i{s}_{ij}\right) \).
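
As a concrete illustration of this notation, the following sketch (with purely hypothetical data: two consumers, two goods and log utilities) represents a strategy as an allocation of one's own endowment among the consumers and computes the induced payoffs uj(s).

```python
import math

# Hypothetical data: 2 consumers, 2 goods.
endowments = {1: (2.0, 1.0), 2: (1.0, 2.0)}

def utility(bundle):
    # hypothetical log utility u_i(x_i); same form for both consumers
    return sum(math.log(x + 1e-9) for x in bundle)

# A strategy s[i][j] is the bundle consumer i hands over to consumer j.
# Feasibility: sum_j s[i][j] <= endowments[i], componentwise.
s = {
    1: {1: (1.5, 0.5), 2: (0.5, 0.5)},
    2: {1: (0.0, 1.0), 2: (1.0, 1.0)},
}
for i in s:
    assert all(sum(s[i][j][k] for j in s[i]) <= endowments[i][k] + 1e-12
               for k in range(2))

def payoff(j, s):
    # u_j(s) = u_j(sum_i s_ij): consumer j consumes whatever is allocated to j
    received = tuple(sum(s[i][j][k] for i in s) for k in range(2))
    return utility(received)

print(payoff(1, s), payoff(2, s))
```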

Example 2 (Public goods): Suppose that to the model of Example 1 we add a public good y with production function y = F(v). Utility functions have the form uj(xj, y). A strategy for i is now an (N + 1)l non-negative vector si = (si1, …, si,N+1), where si,N+1 is allocated as input to production. We have \( {u}_j(s)={u}_j\left({\sum}_i{s}_{ij},F\left({\sum}_i{s}_{i,N+1}\right)\right) \).
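
A matching sketch for the public-good case, again with hypothetical ingredients (one private good, a square-root technology standing in for F, and quasi-linear utilities): each consumer's strategy now has an extra component set aside as input to production.

```python
import math

omega = {1: 4.0, 2: 4.0}              # hypothetical endowments (l = 1)

# s[i] = (s_i1, s_i2, s_i3): transfers to consumers 1 and 2, plus the input
# contribution s_{i,N+1}; feasibility requires each row to sum to at most
# omega[i] (here it holds with equality).
s = {1: (2.0, 0.0, 2.0),
     2: (0.0, 3.0, 1.0)}
assert all(sum(s[i]) <= omega[i] + 1e-12 for i in s)

def F(v):                             # hypothetical production function
    return math.sqrt(v)

y = F(sum(s[i][2] for i in s))        # public good produced from the total input
for j in (1, 2):
    x_j = sum(s[i][j - 1] for i in s) # private consumption of consumer j
    u_j = x_j + 2.0 * math.sqrt(y)    # hypothetical quasi-linear u_j(x_j, y)
    print(j, u_j)
```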

Example 3 (Exchange with private bads): This is as the first example, except that there is no free disposal, i.e. \( {\sum}_{j=1}^N{s}_{ij}={\omega}_i \) for every i. Some of the goods may actually be bads. To be concrete, suppose that l = 2, one of the goods is a desirable numéraire and the other is garbage. There are three identical consumers, each owning one unit of numéraire and one of garbage (see Shapley and Shubik 1969).

For a strategy profile s to be called a cooperative equilibrium we require that there is no coalition C that dominates the utility vector u(s) = (u1(s), …, uN(s)), i.e. no coalition that can ‘make effective’ for its members utility levels u′i, i ∈ C, such that u′i > ui(s) for all i ∈ C. Denote by V(C) the utility levels that C can ‘make effective’ for its members. The precise content of the equilibrium concept depends, of course, on the definition of V(C). I proceed to discuss several possibilities (Aumann 1959 is a key reference for all this).
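
Stated as a test, the definition is straightforward. The sketch below (with purely hypothetical numbers, and with each V(C) replaced by a finite list of utility vectors) checks whether some coalition dominates a candidate utility vector u(s).

```python
def dominating_coalition(u, effective):
    """u: dict player -> utility at the candidate profile.
    effective: dict mapping a coalition (frozenset of players) to a finite list
    of dicts, each giving utility levels the coalition can 'make effective'."""
    for C, vectors in effective.items():
        for v in vectors:
            if all(v[i] > u[i] for i in C):
                return C            # C dominates u(s)
    return None                     # no coalition dominates: a cooperative equilibrium

# Hypothetical two-player illustration.
u = {1: 3.0, 2: 2.0}
effective = {
    frozenset({1}): [{1: 2.5}],
    frozenset({2}): [{2: 1.0}],
    frozenset({1, 2}): [{1: 3.5, 2: 2.5}, {1: 1.0, 2: 4.0}],
}
print(dominating_coalition(u, effective))   # -> frozenset({1, 2})
```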

  1. (A)

    In line with the idea of Cournot–Nash equilibrium, we could define \( {V}_s(C)=\left\{{u}_C:{u}_C\leqslant {u}_C\left({s}_C^{\prime },{s}_{C^{\prime }}\right)\ \mathrm{for}\ \mathrm{some}\ {s}_C^{\prime}\in {S}_C\right\} \), that is, the agents in C take the strategies of C′ as fixed. They do not anticipate, so to speak, any retaliatory move. The cooperative solution concept that uses Vs(C) is called strong Cournot–Nash equilibrium. It is very strong indeed; so strong that it rarely exists. Obviously, this limits the usefulness of the concept. It is immediately obvious that it does not exist for any of the three examples above.

Note that Vs(C) depends on the reference point s. We now go to the other extreme and consider definitions where, when a coalition contemplates deviating, it readies itself for retaliatory behaviour on the part of the complementary coalition; that is, the deviation erases the initial position and is carried out if and only if better levels of utility can be reached, no matter what the agents outside the coalition do. In defining V(C), however, there is an important subtlety. The set V(C) can be defined either as what the members of C cannot be prevented from getting (i.e. the members of C move second) or, more strictly, as what the members of C can guarantee themselves (i.e. they move first). More precisely:

  2. (B)

    For every C, define:

    $$ {V}_{\beta }(C)=\left\{{u}_C:\mathrm{for}\ \mathrm{any}\ {s}_{C^{\prime }}\ \mathrm{there}\ \mathrm{is}\ \mathrm{an}\ {s}_C\ \mathrm{such}\ \mathrm{that}\ {u}_C\leqslant {u}_C\left({s}_C,{s}_{C^{\prime }}\right)\right\}. $$

This is what C cannot be prevented from getting. The set of corresponding cooperative equilibria is called the β-core of the game or economy. For any s we have Vβ(C) ⊂ Vs(C), and so there is more of a chance for a β-core equilibrium to exist than for a strong Cournot–Nash equilibrium. But there is no general existence theorem. As we shall see, the β-core is non-empty in examples 1 and 2. It is instructive to verify that it is empty in example 3. By symmetry, it is enough to check that the strategy profile at which each agent consumes its own endowment is not an equilibrium. Take the coalition formed by two of the three (identical) agents. As a retaliatory move, the third agent would, at worst, be dumping its unit of garbage on one of the members of the coalition (or perhaps splitting it among them), but the coalition can still be better off than at the initial endowment point by dumping its two units on the third member and transferring some money from the non-receptor to the receptor of the outside garbage.
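
To make the improvement explicit, suppose (purely for illustration) quasilinear utilities ui = mi − gi, money held minus garbage received, so that every agent gets 0 at the endowment point. If the outside agent dumps its unit on member 1, the coalition can dump both of its units on the outsider and transfer t ∈ (0, 1) of money from member 2 to member 1, obtaining

$$ {u}_1=\left(1+t\right)-1=t>0,\qquad {u}_2=\left(1-t\right)-0=1-t>0, $$

and symmetrically if the garbage goes to member 2 (a split is handled by adjusting t). Since the coalition, moving second, improves upon the endowment point against every retaliation, the endowment profile is indeed not a β-core equilibrium.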

  3. (C)

    For every C define:

    $$ {V}_{\alpha }(C)=\left\{{u}_C:\mathrm{there}\ \mathrm{is}\ \mathrm{an}\ {s}_C\ \mathrm{such}\ \mathrm{that}\ {u}_C\leqslant {u}_C\left({s}_C,{s}_{C^{\prime }}\right)\ \mathrm{for}\ \mathrm{any}\ {s}_{C^{\prime }}\right\}. $$

This is what C can guarantee itself. It represents the most pessimistic appraisal of the possibilities of C. The set of corresponding equilibria is called the α-core of the game or economy. We have Vα(C) ⊂ Vβ(C), and so there is more of a chance for an α-core equilibrium to exist than for a β-core equilibrium. For the α-core there is a general existence theorem:

Theorem

(Scarf 1971): If S is convex, compact and every ui(s) is continuous and quasiconcave, then the α-core is non-empty.

The conditions of the above theorem are restrictive. Note that the quasiconcavity of ui is required with respect to the entire s and not only (as for Cournot–Nash equilibrium) with respect to the vector si of own strategies. Nonetheless, it is a useful result. It tells us, for instance, that under the standard quasiconcavity hypothesis on utility functions, the α-core is non-empty in each of the three examples above. It will be instructive to verify why the initial endowment allocation is an equilibrium in example 3. In contrast to the β-core situation, a coalition of two members cannot now improve over the initial endowments because they have to move first and therefore cannot know which of the two will receive the outside member’s garbage and will need, as a consequence, some extra amount of money.
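
A small numerical sketch of this contrast, under the same hypothetical quasilinear utilities (money minus garbage) used above: whatever money transfer the two-member coalition commits to before moving, the outsider can direct its unit of garbage so as to push one member back down to the endowment utility of zero.

```python
# Hypothetical quasilinear utilities u_i = money_i - garbage_i; the endowment
# point gives everyone 1 - 1 = 0.  The coalition {1, 2} dumps its two units of
# garbage on agent 3 and commits (alpha scenario: it moves first) to a money
# transfer t from member 2 to member 1 before agent 3 reacts.
def coalition_utilities(t, g_to_1):
    # g_to_1: share of agent 3's unit of garbage dumped on member 1
    u1 = (1.0 + t) - g_to_1
    u2 = (1.0 - t) - (1.0 - g_to_1)
    return u1, u2

for t in (i / 100.0 for i in range(-100, 101)):     # candidate transfers
    # agent 3 picks the retaliation most damaging to the coalition
    worst_off = min(min(coalition_utilities(t, g)) for g in (0.0, 1.0))
    assert worst_off <= 1e-12    # some member is held down to 0 or less

print("Moving first, the coalition cannot guarantee both members a gain,")
print("so the endowment profile survives as an alpha-core equilibrium.")
```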

If, as in examples 1 and 2, there are no bads, the distinction between Vα and Vβ disappears. There is a unique way for the members of C′ to hurt C, namely withholding their own resources. So in both the α and the β senses the set V(C) represents the utility combinations that can be attained by the members of C using only their own resources. This, incidentally, shows that the β-core is non-empty in examples 1 and 2 (since it is equal to the α-core!). There is another approach to existence in the no-bads case. Indeed, a Walrasian equilibrium (in the case of example 2 this takes the guise of a Lindahl equilibrium) is always in this core, with no need of α or β qualification. In the context of example 1, the Core was first defined and exploited by Edgeworth (1881) (see “Cores”).

Underlying both the α- and the β-core there is a quite pessimistic appraisal of what C′ may do if C deviates. The next two remarks discuss, very informally, other, less extreme, possibilities.

  4. (D)

    In the context of exchange economies (such as example 1) it seems sensible to suppose that a coalition of buyers and sellers in one market may neglect retaliation possibilities in unrelated markets. As it stands in (B) and (C), it is very difficult for a group of traders to improve, since, so to speak, they have to set up a separate economy covering all markets. See Mas-Colell (1982) for further discussion of this point.

  5. (E)

    For transferable utility situations (and for purposes more related to the valuation theory to be discussed in section “The Valuation Approach”), Harsanyi (1959), taking inspiration from Nash (1953), proposed that the total utility of the coalition C be defined as \( {\sum}_{i\in C}{u}_i\left({\overline{s}}_C,{\overline{s}}_{C^{\prime }}\right) \), where \( \left({\overline{s}}_C,{\overline{s}}_{C^{\prime }}\right) \) are the minimax strategies of the zero-sum game between C and C′ obtained by letting the payoff of C be \( {\sum}_{i\in C}{u}_i\left({s}_C,{s}_{C^{\prime }}\right)-{\sum}_{i\in {C}^{\prime }}{u}_i\left({s}_C,{s}_{C^{\prime }}\right) \). Note: if the minimax strategies are not unique, a further qualification is required.
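
A rough sketch of this construction for finite games, restricted to pure strategies (in general the zero-sum game between C and C′ must be solved in mixed strategies and, as the note says, non-unique minimax strategies require a tie-breaking rule); the two-player game at the end is hypothetical.

```python
from itertools import product

def harsanyi_worth(strategy_sets, payoff, coalition):
    """Pure-strategy maxmin sketch: C maximizes, C' minimizes, the 'difference'
    payoff sum_C u_i - sum_C' u_i; return sum_C u_i at the resulting profile."""
    n = len(strategy_sets)
    C = sorted(coalition)
    Cp = [i for i in range(n) if i not in coalition]

    def assemble(sC, sCp):
        prof = [None] * n
        for i, x in zip(C, sC):
            prof[i] = x
        for i, x in zip(Cp, sCp):
            prof[i] = x
        return tuple(prof)

    def diff(prof):
        u = payoff(prof)
        return sum(u[i] for i in C) - sum(u[i] for i in Cp)

    best_sum, best_diff = None, None
    for sC in product(*(strategy_sets[i] for i in C)):
        # C' retaliates so as to minimise the difference payoff
        sCp = min(product(*(strategy_sets[i] for i in Cp)),
                  key=lambda x: diff(assemble(sC, x)))
        prof = assemble(sC, sCp)
        if best_diff is None or diff(prof) > best_diff:
            best_diff = diff(prof)
            best_sum = sum(payoff(prof)[i] for i in C)
    return best_sum

# Hypothetical 2x2 game: payoff(profile) -> (u_0, u_1).
payoffs = {("T", "L"): (3, 1), ("T", "R"): (0, 2),
           ("B", "L"): (1, 0), ("B", "R"): (2, 3)}
print(harsanyi_worth([["T", "B"], ["L", "R"]], lambda p: payoffs[p], {0}))  # -> 2
```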

Consistency Qualifications

In this section, several solution concepts are reviewed. Loosely, their common theme is that coalitions look beyond the one-step deviation possibilities.

The Von Neumann–Morgenstern Stable Set Solutions

Suppose that the game is described to us by the sets V(C) of utility vectors that the members of each coalition C can make effective for themselves. These sets do not depend on any reference combination of strategies. They are constructed from the underlying situation in some of the ways described in section “The Dominance Approach”. One says that the N-tuple of utilities u ∈ V(N) dominates the N-tuple v ∈ V(N) via coalition C, denoted u ≻C v, if uC ∈ V(C) and ui > vi for every i ∈ C. We write u ≻ v if u dominates v via some coalition. A core utility imputation is then any maximal element of ≻, i.e. any u ∈ V(N) which is not dominated by any other imputation.

The following paradoxical situation may easily arise. An imputation u is not in the core. Nonetheless, all the members of any coalition that dominates u are treated, at any core imputation, worse than at u (consider, for example, the predicament of a Bertrand duopolist at the joint monopoly outcome). If ≻ were transitive, then this could not happen, since (continuity complications aside) for any u there would be a core imputation directly dominating u. But ≻ is very far from transitive. The approach of von Neumann and Morgenstern consists in focusing on sets of imputations K, called stable sets, having the properties: (i) if u ∈ K then there is no v ∈ K that dominates u (internal stability), and (ii) if u ∉ K then v ≻ u for some v ∈ K (external stability). Note that these are the properties that the set of maximal elements of ≻ would have if ≻ were transitive. The interpretation of K is as a standard of behaviour. If for any reason the imputations of K are regarded as acceptable, then there is an inner consistency to this: drop all the imputations dominated by an acceptable imputation and what you have left is precisely the set of acceptable imputations.
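
As a concrete illustration (a textbook standard, not taken from the discussion above): in the three-person simple majority game, where any two players can divide one unit between them, the three symmetric imputations (1/2, 1/2, 0), (1/2, 0, 1/2) and (0, 1/2, 1/2) are known to form a stable set. The sketch below verifies internal and external stability on a grid of imputations, working in integer tenths to avoid rounding problems.

```python
from itertools import combinations, product

def v(C):
    # three-person majority game, in integer tenths of the unit to be divided
    return 10 if len(C) >= 2 else 0

def dominates(u, w):
    """u dominates w via some coalition C: strictly better for every member of C
    and feasible for C, i.e. the sum of u over C does not exceed v(C)."""
    for r in (1, 2, 3):
        for C in combinations(range(3), r):
            if all(u[i] > w[i] for i in C) and sum(u[i] for i in C) <= v(C):
                return True
    return False

imputations = [u for u in product(range(11), repeat=3) if sum(u) == 10]
K = [(5, 5, 0), (5, 0, 5), (0, 5, 5)]       # candidate stable set

internal = not any(dominates(u, w) for u in K for w in K)
external = all(any(dominates(u, w) for u in K) for w in imputations if w not in K)
print(internal, external)                    # both True on this grid
```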

Important as the von Neumann–Morgenstern solution is, its impact in economics has been limited. There is an existence problem, but the main difficulty is that the sets are very hard to analyse.

The Bargaining Set

This solution was proposed by Aumann and Maschler (1964) and is available in several versions. Describing one of them will give the flavour of what is involved. For an imputation u to be disqualified, it will be necessary, but not sufficient, that it be dominated (in the terminology of bargaining set theory: objected to) via some coalition C. The objection will not ‘stick’, i.e. throw u out of the negotiation table as a tentative equilibrium, unless it is found justified. The justifiability criterion is the following: there is no other coalition C* having a \( {v}^{\ast}\in V\left({C}^{\ast}\right) \) with the property that \( {v}_i^{\ast}\geqslant {u}_i \) for every i ∈ C* and which gives to every common member of C and C* at least as much as they get at the objection. In other words, an objection can be countered if the members left out of the objecting coalition can protect themselves in a credible manner (credible in the sense that they can give to any members of C they need as much as C gives them).

The bargaining set contains the core and, while it is conceptually quite different from a von Neumann–Morgenstern stable set solution, it still does avoid the most myopic features of the core. It is also much easier to analyse than the stable sets, although it is by no means a straightforward tool. But, again, its impact in economics has so far been limited.

A common aspect of stable set and bargaining set theory is that, implicitly or explicitly, a deviating coalition takes into consideration a subsequent, induced move by other coalitions. This is still true for the next two concepts, with one crucial qualification: a deviating coalition only takes into account subsequent moves of its own subcoalitions.

Coalition-Proof Cournot–Nash Equilibrium

This solution concept was recently proposed by Bernheim et al. (1987). It can be viewed as a self-consistent enlargement of the set of strong Cournot–Nash equilibria. Consider the simplest case, a three-player game. Given a strategy profile \( \overline{s} \), which deviations are possible for two-player coalitions? If any deviation is allowed, then we are led to strong Cournot–Nash equilibria. But there is something inconsistent about this. If the strategy profile \( \overline{s} \) is not immune to deviations (i.e. there is no commitment at \( \overline{s} \)), why should the deviation be so? That is, why should it be possible to commit to a deviation? This suggests that deviations should be required to be immune to further deviations, that is, they should be Cournot–Nash equilibria of the induced two-person game (the third player stays put at \( \overline{s} \)). For three-person games, this is precisely the Coalition-Proof Cournot–Nash equilibrium; by recursion, one obtains a definition for any number of players. Obviously, deviating becomes more difficult and the equilibrium set has more of a chance of being non-empty. Unfortunately, there is no general existence theorem.
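
The three-player test just described can be written out directly for finite games. The sketch below uses a hypothetical three-player dilemma in which defection is dominant: the unique Cournot–Nash equilibrium is not a strong equilibrium (all three jointly switching to cooperation helps everyone), yet it passes the simplified test because the profitable two-player deviations are not themselves equilibria of the induced two-person games. This follows the informal description above, not the full recursive definition of Bernheim et al. (1987).

```python
from itertools import product

STRATS = [["C", "D"]] * 3      # hypothetical three-player game, two strategies each

def payoff(profile):
    # u_i = 2 * (number of OTHER players choosing C) + 1 if i plays D;
    # D is dominant, so (D, D, D) is the unique Cournot-Nash equilibrium.
    coop = [p == "C" for p in profile]
    return tuple(2 * (sum(coop) - coop[i]) + (1 if profile[i] == "D" else 0)
                 for i in range(3))

def is_nash_for(profile, players):
    """No player in `players` gains by a unilateral deviation (others held fixed)."""
    for i in players:
        for si in STRATS[i]:
            dev = list(profile); dev[i] = si
            if payoff(tuple(dev))[i] > payoff(profile)[i]:
                return False
    return True

def simplified_coalition_proof():
    survivors = []
    for profile in product(*STRATS):
        if not is_nash_for(profile, range(3)):         # unilateral deviations
            continue
        blocked = False
        for i, j in ((0, 1), (0, 2), (1, 2)):           # two-player coalitions
            for si, sj in product(STRATS[i], STRATS[j]):
                dev = list(profile); dev[i], dev[j] = si, sj
                dev = tuple(dev)
                improves = (payoff(dev)[i] > payoff(profile)[i]
                            and payoff(dev)[j] > payoff(profile)[j])
                # the deviation must itself be a Cournot-Nash equilibrium of the
                # induced two-person game (the third player stays put)
                if improves and is_nash_for(dev, (i, j)):
                    blocked = True
        if not blocked:
            survivors.append(profile)
    return survivors

print(simplified_coalition_proof())   # -> [('D', 'D', 'D')]
```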

The Core

It may be surprising to list the core in a section on concepts that attempt to be less myopic than the core. But, in fact, the core as a set can be made consistent against further deviations by subcoalitions of the deviating coalition: simply make sure always to deviate via coalitions of smallest possible cardinality.

The Valuation Approach

The aim of the valuation approach to games and conflict situations (of which the Shapley value is the central concept) is to associate to every game a reasonable outcome, taking into account and compromising among all the conflicting claims. In games, these are expressed by the sets V(C) of utility vectors for which C is effective. The criteria of reasonableness are expressed axiomatically. Thus the valuation approach has to be thought of more as input for an arbitrator than as a descriptive theory of equilibrium. Except perhaps for the bargaining set, this point of view is strikingly different from anything discussed so far.

Sometimes the term ‘fair’ is used in connection with the valuation approach. There are at least two reasons to avoid this usage. The first is that the initial position [embodied in the sets V(C)] is taken as given. The second is that the fairness of a solution to a game can hardly be judged in isolation, i.e. independently of the position of the players in the overall socioeconomic game.

The valuation of a game will depend on the claims, i.e. on how the sets V(C) are constructed. We saw in section “The Dominance Approach” that there was nothing straightforward about this. We will not repeat it here. It may be worthwhile to observe informally, however, that the valuation approach is altogether less strategic than the dominance one and that a useful way to think of V(C) is as the utility levels the members of C could get if the members of C′ did not exist, rather than as what the members of C could get if they go it alone [in defining V(C) this point of view can make a difference].

Consider first games with transferable utility (N, υ), where N is a set of players and υ : 2N → R is a real-valued function satisfying υ(∅) = 0. The restriction of υ to a coalition C ⊂ N is denoted (C, υ). The Shapley value is a certain rule that associates to every game (N, υ) an imputation Sh(N, υ), i.e. \( {\sum}_{i\in N}S{h}^i\left(N,\upsilon \right)=\upsilon (N) \).

The Shapley value was characterized by Shapley (1953) by four axioms that can be informally described as: (i) efficiency, i.e. Sh(N,v) is an imputation, (ii) symmetry, i.e. the particular names of the players do not matter, (iii) linearity over games and (iv) dummy, i.e. a player that contributes nothing to any coalition receives nothing.

There is a simple way to compute the Shapley value. Put P(∅, υ) = 0 and, recursively, associate to every game (N, υ) a number P(N, υ) such that

$$ \sum_{i\in N}\left[P\left(N,\upsilon \right)-P\left(N\setminus \left\{i\right\},\upsilon \right)\right]=\upsilon (N) $$
(1)

That is, the sum of marginal increments of P equals υ(N). This function is called the potential, and it turns out that the marginal increments of P constitute precisely the Shapley valuations, i.e. \( S{h}^i\left(N,\upsilon \right)=P\left(N,\upsilon \right)-P\left(N\setminus \left\{i\right\},\upsilon \right) \) for all (N, υ) and i ∈ N. This is discussed in Hart and Mas-Colell (1985).
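
A minimal sketch of this recursion for a hypothetical three-player game: rewriting formula (1) as P(N, υ) = [υ(N) + Σi∈N P(N∖{i}, υ)]/|N| gives the potential directly, and its marginal increments are cross-checked against the familiar average-of-marginal-contributions formula for the Shapley value.

```python
from itertools import permutations
from math import isclose

# Hypothetical three-player transferable-utility game, given by its worth function.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 6}

def potential(S, v):
    """P(empty) = 0 and P(S) = (v(S) + sum over i of P(S minus {i})) / |S|."""
    S = frozenset(S)
    if not S:
        return 0.0
    return (v[S] + sum(potential(S - {i}, v) for i in S)) / len(S)

N = frozenset({1, 2, 3})
shapley_via_potential = {i: potential(N, v) - potential(N - {i}, v) for i in N}

def shapley_by_permutations(N, v):
    # average marginal contribution over all orders of arrival
    vals = {i: 0.0 for i in N}
    orders = list(permutations(sorted(N)))
    for order in orders:
        coalition = frozenset()
        for i in order:
            vals[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: x / len(orders) for i, x in vals.items()}

reference = shapley_by_permutations(N, v)
assert all(isclose(shapley_via_potential[i], reference[i]) for i in N)
print(shapley_via_potential)   # player 1 gets 8/3, players 2 and 3 get 5/3 each
```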

The Shapley value for transferable utility games admits several generalizations to the nontransferable utility case [with convex sets V(C)]. See Harsanyi (1959), Shapley (1969), and Aumann (1985). Perhaps the most natural, although not necessarily the simplest to work with, was proposed by Harsanyi (1959) and has recently been axiomatized by Hart (1985). For a given game, a Harsanyi value imputation is obtained by rescaling individual utilities so as to guarantee the existence of an N-tuple u ∈ V(N) satisfying, simultaneously: (i) the convex set V(N) is supported at u by a hyperplane with normal q = (1, …, 1); (ii) if a potential P on the set of all games is defined by formula (1) (but replacing ‘= υ(N)’ by ‘∈ Bdry. V(N)’), then, as before, ui = P(N, V) − P(N∖{i}, V) for all i ∈ N.

One of the most striking features of the applications of Shapley value theory to economics is that, in economies with many traders, the value has turned out to be intimately related to the notion of Walrasian equilibrium. Interestingly, this is a feature shared with the dominance approach. Aumann (1975) is a representative paper of the very extensive literature on the topic.

See Also