Abstract
In many online systems, individuals provide services for each other; the recipient of the service obtains a benefit but the provider of the service incurs a cost. If benefit exceeds cost, provision of the service increases social welfare and should therefore be encouraged—but the providers gain no (immediate) benefit from providing the service and hence have an incentive to withhold it. There is thus scope for designing a protocol that improves welfare by encouraging exchange. To operate successfully within the confines of the online environment, such a protocol should be distributed, robust, and consistent with individual incentives. This paper proposes and analyzes protocols that rely solely on the exchange of fiat money or tokens. The analysis has much in common with work on search models of money, but the requirements of the environment also lead to many differences from previous analyses—and some surprises; in particular, existence of equilibrium becomes a thorny problem and the optimal quantity of money is different.
1 Introduction
This paper is motivated by the problem of online exchange of files (or data or services). In typical systems that serve this purpose—Napster, now defunct, is the most familiar example but there are many in current operation, including Gnutella and Kazaa (file sharing), Seti@home (computational assistance), Slashdot and Yahoo Answers (answers to queries)—a single interaction involves an agent who wants a file (or data or service) and an agent who can provide it. The former benefits from obtaining the file but the latter bears the (often non-trivial) cost of providing it and so has an incentive to free-ride.Footnote 1 Assuming that benefit exceeds the cost, provision of the service increases social welfare and should therefore be encouraged—but how?
This problem is a particular instance of trade in the absence of a double coincidence of wants, which has motivated a large literature on search models of money. Indeed, we shall formalize our problem in the same terms, and the “solution” we develop is for a (benevolent) designer to institute a system that relies on fiat money or tokens (we use the terms interchangeably), to introduce a quantity of tokens into the system, and to recommend strategies to the participants for requesting and providing service. Because our agents are self-interested, the designer must recommend strategies that constitute an equilibrium—but our environment also imposes other constraints on the designer: the system must be anonymous and distributed, must take account of the fact that agents meet only electronically (and not face-to-face), that files and tokens are indivisible, that the designer cannot know the precise parameters of the population and, perhaps most importantly, that the designer cannot constrain the number of tokens that agents hold.Footnote 2, Footnote 3
This paper asks how much a designer can accomplish, given these constraints, by judicious choice of the protocol—the quantity of tokens and the recommended strategies. To answer this question, we characterize equilibrium protocols and among these, the ones that are robust to small perturbations of the population parameters (the designer’s slight misperceptions of these parameters); we prove that robust equilibrium protocols exist; we provide bounds for the efficiency of robust equilibrium protocols; we show that the “optimal quantity of money” in our setting is different than in other settings considered in the literature; we provide an effective procedure for choosing a robust equilibrium protocol whose efficiency is at least good (if not optimal); and we provide numerical simulations to illustrate some of the theorems and also to demonstrate that design matters: a great deal of efficiency may be lost if the designer chooses the “wrong” protocol.
As in the familiar search models of money, our environment is populated by a continuum of agents each of whom is initially endowed with a unique file that can be duplicated and provided to others.Footnote 4 In each period, a fraction of the population is matched; one member of each match—the client—must decide whether to request service (provision of a file or forwarding of a packet) and the other—the server—must decide whether to provide the service (if requested). The client who receives the service derives a benefit, the server who provides the service incurs a cost. To simplify the analysis, we assume here that, except for the uniqueness of the files they possess, all agents are identical and that all files are equally valuable to receive and equally costly to provide. (We discuss extensions in the Conclusion.) We assume benefit exceeds cost, so that social welfare is increased when the service is provided, but that cost is strictly positive, so that the server has a disincentive to provide it. The designer provides a supply of tokens and recommends strategies (circumstances under which service should be requested or provided); together these constitute a protocol. We assume that the price of service is fixed at one token; this restriction seems natural in our environment and is made in much of the search literature; see also below and the Conclusion. We differ from much of the literature in three ways suggested by the motivation discussed. First, we do not impose an exogenous upper bound on money holdings: agents can store as much money as they wish. Second, we require that the protocol should induce an equilibrium (i.e., that the recommended strategies are best replies in the (unique) steady-state distribution) that is robust to small perturbations of the population parameters. Third, we allow the designer to control both the money supply and the price. As we shall show, each of these has significant implications.
Leaving aside degenerate protocols in which there is no trade, all robust equilibrium protocols are Markov (not history dependent), symmetric (the population plays a single pure strategy) and have a particularly simple form: clients request service whenever their token holding is above zero; servers provide service when their token holding is below a threshold \(K\) and do not provide service when the token holding is at or above \(K\). We prove that robust equilibria exist but the absence of an exogenous upper bound on money holdings makes the proof surprisingly hard. (See Sect. 4.) Having shown that robust equilibria exist we turn to our original question: which equilibrium protocols are the most efficient? We have shown that we can restrict attention to threshold strategies; among protocols that employ the threshold \(K\) the one that would be most efficient if agents were compliant has token supply \(K/2\). However, these protocols need not be equilibria and the most efficient protocols may have token supplies different from \(K/2\); \(K/2\) need not be the optimal quantity of money. We go on to provide estimates for efficiency of various protocols and an effective procedure for the designer to choose a “good”—if not optimal—protocol. Simulations illustrate these results and some related points.
Following a discussion of the literature below, the remainder of the paper is organized in the following way. Section 3 introduces the model and defines strategies, the steady-state value function, best responses, equilibrium, and robust equilibrium. Section 4 discusses the nature of equilibrium and robust equilibrium and establishes existence of robust equilibrium. Simulations illustrate the nature of equilibrium. Section 5 discusses efficiency of protocols, shows that almost full efficiency is obtainable by an equilibrium protocol if agents are sufficiently patient or the benefit/cost ratio is sufficiently large, establishes lower bounds for efficiency of equilibrium protocols and uses these bounds to provide an effective procedure for constructing efficient equilibrium protocols, and shows that the optimal quantity of money in our setting, with no exogenous bound on money holdings, is different from the optimal quantity of money in settings with an exogenous bound on money holdings. Numerical simulations give some idea of the efficiency loss resulting from choosing the “wrong” protocol. Section 6 concludes and offers some directions for further research. Proofs are collected in the “Appendix”.
2 Literature
Following the seminal work of Kiyotaki and Wright (1989), there is a large literature on search models of money which has contributed enormously to our understanding of money in various environments. A portion of this literature—e.g. Camera and Corbae (1999), Berentsen (2002)—allows agents to accumulate more than one unit of money, while maintaining the assumption of Kiyotaki and Wright (1989) that there is an exogenously given upper bound on money holdings; this is precluded in our environment. A different portion of this literature—e.g. Cavalcanti and Wallace (1999a, b), Berentsen et al. (2007) and Hu et al. (2002)—assumes that agents have and can condition on (complete or partial) knowledge of the money holdings of (some of) their counter-parties in each match, which is again precluded in our environment. A particularly striking paper in this literature is Kocherlakota (2002), which shows that any individually rational outcome can be supported in equilibrium provided money is infinitely divisible and the common discount factor is above some minimum threshold. However, Kocherlakota (2002) also assumes that agents have a good deal of information about the money holdings of their counterparties. To quote the abstract: “The one-money theorem says that the allocation is achievable using only one money if that money is divisible and money holdings are observable. The two-money theorem says that the allocation is achievable using two divisible monies, even if money holdings are concealable.” To elaborate: the one-money theorem assumes that agents must display their true money holdings; the two-money theorem assumes that agents can display less money than they actually have but cannot display more money than they actually have. In both cases, agents have (complete or partial) knowledge about the money holdings of their counter-parties and can condition on it. 
In our work, agents have no knowledge about the money holdings of their counter-parties and so cannot condition on it.
Our work is closest to Zhou (1999) and Berentsen (2000). Zhou (1999) assumes money is divisible and the supply of money is given exogenously but the price is determined endogenously.Footnote 5 Berentsen (2000) assumes money is indivisible and the price is given exogenously but the money supply is determined endogenously. In our work, both the money supply and the price are chosen exogenously by the designer. Of course, from an economic point of view, all that really matters is the ratio \(M/p\) of the money supply \(M\) to the price \(p\); fixing either the money supply or the price amounts simply to choosing a normalization. So it would be more accurate to say that in both Zhou (1999) and Berentsen (2000), the ratio \(M/p\) is determined endogenously: fixing \(M\), as Zhou (1999) does, is just choosing a normalization; fixing \(p\), as Berentsen (2000) does, is just choosing a different normalization. In our work, we have chosen a particular price normalization \(p=1\) but the money supply, and hence the ratio \(M/p\), is determined exogenously by the designer. Our designer has more control, and that control is important: if the designer did not control \(M/p\), the designer could not be sure of designing an optimal equilibrium protocol. Indeed, if the designer did not control \(M/p\), it would seem to make no sense even to talk about designing protocols, much less optimal equilibrium protocols. We also note that neither Zhou (1999) nor Berentsen (2000) proves that equilibria exist; both provide sufficient conditions, but those conditions are stringent and endogenous—they are not conditions on the primitives of the model (benefit/cost ratio and discount factor).
This work also connects to an Electrical Engineering and Computer Science literature that discusses token exchanges in online communities. Some of that literature assumes that agents are compliant, rather than self-interested, and does not treat incentives and equilibrium (Chandrakumar et al. 2003; Buttyan and Hubaux 2003); some of that literature makes use of very different models than the one offered here (Jarvis and Tan 2006; Figueiredo et al. 2004); and some of the literature is not formal and rigorous, offering simulations rather than theorems (Mohr and Pai 2006). The papers closest to ours are probably Friedman et al. (2006, 2007), which treat somewhat different models. However, these papers seem puzzling in many dimensions and many of the proofs seem mysterious (at least to us).
Another literature to which this work connects is the game-theoretic literature on anonymous interactions. In a context in which interactions were publicly observable, full cooperation (i.e., provision of service) could be achieved at equilibrium by the use of trigger strategies, which deny service in the future to any agent who refuses service in the present. As Kandori (1992) and Ellison (2000) have pointed out, in some contexts, cooperation can be supported even without public observability if agents deny service in the future to all agents whenever they have observed an agent who refuses service in the present; in this equilibrium, any failure to provide service results in a contagion, producing wider and wider ripples of defection, until no agent provides service. However, contagion is not likely to sustain cooperation in the systems of interest to us, because the population is so large (typically comprising tens of thousands or even hundreds of thousands of agents) that an agent is unlikely, in a reasonable time frame, to meet any other agent whose network of past associations overlaps with his. (When the population is literally a continuum, no agent ever meets any other agent whose network of past associations overlaps with his.) A more relevant literature, of which Kandori (1992) is again the seminal work, uses reputation and social norms as a means of incentivizing cooperation. The work that is closest to ours is Park et al. (2010), which asks which reputation-based systems can be supported in equilibrium and which of these achieve the greatest social efficiency. Because provision of service in their model depends on the reputations of both client and server, some central authority must keep track of and verify reputations; hence, these systems are not distributed in the sense we use here.
3 Model
The population consists of a continuum (mass 1) of infinitely lived agents. Each agent can provide a resource (e.g., a data file, audio file, video file, service) that is of benefit to others but is costly to produce (uploading a file uses bandwidth and time). The benefit of receiving this resource is \(b\) and the cost of producing it is \(c\); we assume \(b > c > 0\).Footnote 6 Agents care about current and future benefits/costs and discount future benefits/costs at the constant rate \(\beta \in (0,1)\). Agents are risk neutral so seek to maximize the discounted present value of a stream of benefits and costs.
Time is discrete. In each time period, a fraction \(\rho \le 1/2\) of the population is randomly chosen to be a client and matched with a randomly chosen server; the fraction \(1-2\rho \) are unmatched.Footnote 7 (No agent is both a client and a server in the same period.) When a client and server are matched, the client chooses whether or not to request service, the server chooses whether or not to provide service (e.g., transfer the file) if requested.
The parameters \(b, c, \beta , \rho \) completely describe the environment. Because the units of benefit \(b\) and cost \(c\) are arbitrary (and tokens have no intrinsic value), only the benefit/cost ratio \(r = b/c\) is actually relevant. We consider variations in the benefit/cost ratio \(r\) and the discount factor \(\beta \), but view the matching rate \(\rho \) as immutable.
3.1 Tokens and strategies
In a single interaction between a server and a client, the server has no incentive to provide services to the client. The mechanism we study for creating incentives to provide service involves the exchange of tokens. Tokens are indivisible, have no intrinsic value, cannot be counterfeited, and can be stored and transferred without loss. Each agent can hold an arbitrary non-negative finite number of tokens, but cannot hold a negative number of tokens and cannot borrow. We emphasize that our tokens are purely electronic objects and are transferred electronically.
The designer creates incentives for the agents to provide or share resources by providing a supply of tokens and recommending strategies (behavior) for agents when they are clients and servers. For the moment, we allow for strategies that depend on histories, but we will show that optimal strategies (best responses) depend only on current token holdings.
An event describes the particulars of a match at a particular time: whether the agent was chosen to be a client or a server or neither, whether the agent was matched with someone who was willing to serve or to buy, whether the agent received a benefit and surrendered a token or provided service and acquired a token or neither, and the change in the token holding. Write \(\epsilon _t\) for an event at time \(t\). A history of length \(T\) specifies an initial token holding \(m\) and a finite sequence of events \(h = (m; \epsilon _0, \epsilon _1, \ldots , \epsilon _{T-1})\). Write \(H_T\) for the set of histories of length \(T\), \(H = \bigcup _T H_T\) for the set of finite histories. An infinite history specifies an initial token holding \(m\) and an infinite sequence of events \(h = (m; \epsilon _0, \epsilon _1, \ldots )\). We insist that finite/infinite histories be feasible in the sense that net token holdings are never negative (i.e., a request for service by an agent holding 0 tokens will not be honored). Given a finite or infinite history \(h\), write \(d(h,t)\) for the change in token holding at time \(t\) and \(d^+(h,t), d^-(h,t)\) for the positive and negative parts of \(d(h,t)\). Note that \(d(h,t) = +1\) if the agent serves, \(d(h,t)= -1\) if the agent buys, \(d(h,t) = 0\) otherwise. Note also that the token holding at the end of the finite history \(h\) is
\(m + \sum _{t=0}^{T-1} d(h,t)\)
A strategy is a pair \((\tau , \sigma )\) of functions \(H \rightarrow \{0,1\}\); \(\tau \) is the client strategy and \(\sigma \) is the server strategy. Following the history \(h\), \(\tau (h) = 1\) means the client requests service and \(\tau (h) = 0\) means the client does not request service; \(\sigma (h) = 1\) means the server provides service, \(\sigma (h) = 0\) means the server does not provide service. (Note that we require individual agents to follow pure strategies, but we will eventually allow for the possibility that different agents follow different pure strategies, so the population strategy might be mixed). If service is requested and provided, a single token is transferred from client to server, so the client’s holding of tokens decreases by 1 and the server’s holding of tokens increases by 1. Tacitly, we assume that a token is transferred if and only if service is provided; like the transfer of tokens itself, this can be accomplished electronically in a completely distributed way.
3.2 Steady-state payoffs, values and optimal strategies
Because we consider a continuum population in which agents are matched randomly and can observe only their own histories, the relevant state of the system from the point of view of a single agent can be completely summarized by the fraction \(\mu \) of agents who do not request service when they are clients and the fraction \(\nu \) of agents who do not provide service when they are servers. If the population is in a steady state then \(\mu , \nu \) do not change over time. Given \(\mu , \nu \), a strategy \((\tau , \sigma )\) determines in the obvious way a probability distribution \(P(\tau , \sigma | \mu , \nu )\) over infinite histories \(H\). We define the discounted expected utility to an agent whose initial token holding is \(m\) and who follows the strategy \((\tau , \sigma )\) to be
\(U_m(\tau , \sigma \mid \mu , \nu ) = \mathrm{E} \left[ \sum _{t=0}^{\infty } \beta ^t \big ( b\, d^-(h,t) - c\, d^+(h,t) \big ) \right]\)
where the expectation is taken with respect to \(P(\tau , \sigma \mid \mu , \nu )\).
(Here and below, when some of the variables \(\beta , b, c, \mu , \nu , \tau , \sigma \) are clearly understood we frequently omit all or some of them; this should not cause confusion.)
Given \( \mu , \nu \) and an initial token holding \(m\), we define the value to be
\(V_m(\mu , \nu ) = \sup _{(\tau , \sigma )} U_m(\tau , \sigma \mid \mu , \nu )\)
Discounting implies that the supremum—which is taken over all strategy profiles—exists and is at most \(b/(1-\beta )\).
Given \(\mu , \nu \) the strategy \((\tau , \sigma )\) is optimal or a best response for an initial token holding of \(m\) if
\(U_m(\tau , \sigma \mid \mu , \nu ) \ge U_m(\tau ^{\prime }, \sigma ^{\prime } \mid \mu , \nu )\)
for all alternative strategies \( \tau ^{\prime }, \sigma ^{\prime } \). Because agents discount the future at the constant rate \(\beta \), the strategy \((\tau , \sigma )\) is optimal if and only if it has the one-shot deviation property; that is, there does not exist a finite history \(h\) and a profitable deviation \((\tau ^{\prime }, \sigma ^{\prime })\) that differs from \((\tau , \sigma )\) following the history \(h\) and nowhere else. A familiar and straightforward diagonalization argument establishes that optimal strategies exist and achieve the value; we record this fact below, omitting the proof.
Proposition 1
For each \( \mu , \nu \) and each initial token holding \(m\) there is an optimal strategy \((\tau , \sigma )\) and
\(U_m(\tau , \sigma \mid \mu , \nu ) = V_m(\mu , \nu ).\)
3.3 Optimal strategies
We want to characterize optimal strategies, but before we do, there is a degeneracy that must be addressed. If \(\mu = 1\), then no one ever requests service so the choice of whether to provide service is irrelevant; if \(\nu = 1\), then no one ever provides service so the choice of whether to request service is irrelevant. In what follows, we sometimes ignore or avoid these degenerate cases, but this should not lead to any confusion.
Fix \(\beta , b, c, \mu , \nu \); let \((\tau , \sigma )\) be optimal for the initial token holding \(m\). Note that the continuation of \((\tau , \sigma )\) must also be optimal following every history that begins with \(m\). If \(h\) is such a history and the token holding at \(h\) is \(n\), then \((\tau , \sigma )\) induces a strategy \((\tau ^h, \sigma ^h)\) from an initial token holding \(n\) that simply transposes what follows \(h\) back to time 0, and this strategy must be optimal for the initial token holding of \(n\). Conversely, any strategy that is optimal for the initial token holding of \(n\) must also be optimal following \(h\). It follows that optimal strategies \((\tau , \sigma )\) (whose existence is guaranteed by Proposition 1) depend only on the current token holding but are otherwise independent of history; we frequently say such strategies are Markov—but note that they are Markov in individual token holdings. Write \(\Sigma ( \mu , \nu , \beta )\) for the set of optimal strategies.
Theorem 1
For all \(b, c, \beta , \mu , \nu \) with \(\nu < 1\), every optimal strategy \((\tau , \sigma )\) has the property that \(\tau (n) = 1\) for every \(n \ge 1\); i.e. “always request service when possible”.Footnote 8
In view of Theorem 1, we suppress client strategies \(\tau \) entirely, assuming that clients always request service whenever possible. We abuse notation and continue to write \(\Sigma ( \mu , \nu , \beta )\) for the set of optimal strategies.
We now show that optimal (server) strategies also have a simple form. Say that the (server) strategy \(\sigma \) is a threshold strategy (with threshold \(K\)) if
\(\sigma (n) = 1\) if \(n < K\) and \(\sigma (n) = 0\) if \(n \ge K\).
We write \(\sigma _K\) for the threshold strategy with threshold \(K\) and
\(\Sigma = \{ \sigma _K : K \in \mathbb N \}\)
for the set of threshold strategies.
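In code, a threshold strategy is a one-line predicate on the current token holding. The snippet below is our own minimal illustration; the function name `sigma` and the convention that the server serves exactly when \(n < K\) are ours.

```python
# A threshold strategy with threshold K, as a function of the current token
# holding n: serve (return 1) when n < K, refuse (return 0) otherwise.
# The convention "serve exactly when the holding is below K" is our reading.
def sigma(K):
    return lambda n: 1 if n < K else 0
```

For example, `sigma(3)` serves at holdings 0, 1, and 2, and refuses at 3 and above.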
Theorem 2
For each \(\mu , \nu , b,c, \beta \) with \(\mu < 1\) the set of optimal (server) strategies consists of either a single threshold strategy or two threshold strategies with adjacent thresholds.
(The assumptions in Theorems 1 and 2 that \(\nu < 1\) and \(\mu < 1\) avoid the degeneracies previously noted.)
3.4 Protocols
The designer chooses a per capita supply of tokens \(\alpha \in (0, \infty )\) and recommends a strategy to each agent; we allow for the possibility that the designer recommends different strategies to different agents. Because self-interested agents will always play a best response, the designer will recommend only strategies in \(\Sigma \); in view of anonymity, it does not matter which agents are recommended to play each strategy, but rather only the fraction of agents recommended to play each strategy. Hence, we can identify a recommendation with a mixed threshold strategy, which is a probability distribution on \(\Sigma \); with the obvious abuse of notation, we view \(\gamma \) as a function \(\gamma : \mathbb N _+ \rightarrow [0,1]\) such that
\(\sum _K \gamma (K) = 1.\)
Write \(\Delta (\Sigma )\) for the set of mixed threshold strategies. As usual, we identify the threshold strategy \(\sigma _K\) with the mixed strategy that puts mass 1 on \(\sigma _K\). Assuming that the designer only recommends best responses (because other recommendations would not be followed), we interpret an element \(\gamma \in \Delta (\Sigma )\) as a recommendation that the fraction \(\gamma (K)\) plays the threshold strategy \(\sigma _K\).
A protocol is a pair \(\Pi = (\alpha , \gamma )\) consisting of a per capita supply of tokens \(\alpha \in (0, \infty )\) and a mixed strategy recommendation \(\gamma \in \Delta (\Sigma )\).
3.5 Invariant distributions
If the designer chooses the protocol \(\Pi = (\alpha , \gamma )\) and agents follow the recommendation \(\gamma \), we can easily describe the evolution of the token distribution (the distribution of token holdings). The token distribution \(\eta \) (where \(\eta (k)\) is the fraction of agents holding \(k\) tokens) must satisfy the two feasibility conditions:
\(\sum _k \eta (k) = 1 \qquad (2)\)
\(\sum _k k\, \eta (k) = \alpha \qquad (3)\)
Write
\(\mu = \eta (0), \qquad \nu = \sum _k \Big ( \sum _{K \le k} \gamma (K) \Big )\, \eta (k).\)
Evidently, with respect to this token distribution, \(\mu \) is the fraction of agents who have no tokens, hence cannot pay for service, and \(\nu \) is the fraction of agents who do not serve (assuming they follow the protocol).
To determine the token distribution next period, it is convenient to think backward and ask how an agent could come to have \(k\) tokens in the next period. There are three possibilities; the agent could have

- \(k-1\) tokens in the current period, be chosen as a server, meet a client who can pay for service, and provide service (hence acquire a token);
- \(k+1\) tokens in the current period, be chosen as a client, meet a server who provides service, and buy service (hence expend a token);
- \(k\) tokens in the current period but neither provide service nor buy service (hence neither acquire nor expend a token).
Given a recommendation \(\gamma \) it is convenient to define \(\sigma ^\gamma : \mathbb N _+ \rightarrow [0,1]\) by
\(\sigma ^\gamma (n) = \sum _{K > n} \gamma (K)\)
Assuming that the Law of Large Numbers holds exactly in our continuum framework and that all agents follow the recommendation \(\gamma \), \(\sigma ^\gamma (n)\) is the fraction of agents in the population who serve when they have \(n\) tokens, so \(\sigma ^\gamma \) is the population strategy. Keeping in mind that token holdings cannot be negative, it is easy to see that the token distribution next period will be
\(\eta _+(k) = \rho (1-\mu )\, \sigma ^\gamma (k-1)\, \eta (k-1) + \rho (1-\nu )\, \eta (k+1) + \big [ 1 - \rho (1-\mu )\, \sigma ^\gamma (k) - \rho (1-\nu )\, \mathbf 1 _{\{k \ge 1\}} \big ]\, \eta (k)\)
where we use the convention \(\eta (-1) = 0\).
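The period-to-period dynamics just described are easy to simulate. The sketch below is ours, not the paper's: it implements one step of the update for a population playing a single pure threshold strategy \(\sigma _K\) (serve exactly when holdings are below \(K\), which is our reading); `step`, `eta`, `K`, and `rho` are our names.

```python
def step(eta, K, rho):
    """One period of the token-distribution dynamics when everyone plays the
    pure threshold strategy sigma_K (serve iff holdings < K).

    eta -- list with eta[k] = fraction of agents holding k tokens; len(eta) > K.
    rho -- matching rate: the fraction of the population chosen as clients.
    """
    mu = eta[0]              # clients who cannot pay
    nu = sum(eta[K:])        # servers who refuse (holdings at or above K)
    eta_next = [0.0] * len(eta)
    for k, mass in enumerate(eta):
        # probability of acquiring a token: chosen as server, client can pay, serve
        gain = rho * (1 - mu) if k < K else 0.0
        # probability of expending a token: chosen as client, server serves
        spend = rho * (1 - nu) if k >= 1 else 0.0
        eta_next[k] += (1 - gain - spend) * mass
        if gain:
            eta_next[k + 1] += gain * mass
        if spend:
            eta_next[k - 1] += spend * mass
    return eta_next
```

Iterating `step` from any feasible starting distribution preserves both total mass and the per capita token supply, since every completed trade simply moves one token from client to server.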
Given the protocol \(\Pi = (\alpha , \gamma )\), the (feasible) token distribution \(\eta \) is invariant if \(\eta _+ = \eta \); that is, \(\eta \) is stationary when agents comply with the recommendation \(\gamma \). Invariant distributions always exist and are unique.
Theorem 3
For each protocol \(\Pi = (\alpha , \gamma )\) there is a unique invariant distribution \(\eta ^\Pi \), which is completely determined by the feasibility conditions (2) and (3) and the recursion relationship
\(\eta (k+1) = \frac{(1-\mu )\, \sigma ^\gamma (k)}{1-\nu }\, \eta (k) \qquad \text{for all } k \ge 0.\)
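For a pure threshold strategy \(\sigma _K\) the recursion collapses to a one-parameter family: on our reading, the invariant distribution is geometric on \(\{0, \ldots , K\}\) with ratio \(x = (1-\mu )/(1-\nu )\), and the per capita supply \(\alpha \) pins down \(x\) because the mean of the truncated geometric distribution is strictly increasing in \(x\). The following numerical sketch is ours, not the paper's; the function name, tolerance, and bisection scheme are all our choices.

```python
def invariant_distribution(K, alpha, tol=1e-12):
    """Invariant token distribution for the protocol (alpha, sigma_K),
    assuming 0 < alpha < K: eta(k) proportional to x**k on {0, ..., K},
    with x chosen so that the mean holding equals alpha."""
    def mean_given(x):
        weights = [x ** k for k in range(K + 1)]
        return sum(k * w for k, w in enumerate(weights)) / sum(weights)

    lo, hi = 0.0, 1.0
    while mean_given(hi) < alpha:   # expand the bracket until it straddles alpha
        hi *= 2
    while hi - lo > tol:            # bisect on x: the mean is increasing in x
        mid = (lo + hi) / 2
        if mean_given(mid) < alpha:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    weights = [x ** k for k in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

For \(\alpha = K/2\) the solver returns \(x = 1\), i.e., the uniform distribution on \(\{0, \ldots , K\}\).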
3.6 Definition of equilibrium and robust equilibrium
Assuming agents are rational and self-interested, they will comply with a given protocol if and only if compliance is individually optimal, that is, no agent can benefit by deviating from the protocol. To formalize this, fix a protocol \(\Pi = (\alpha , \gamma )\), and let \(\eta ^\Pi \) be the unique invariant distribution. Write
\(\mu ^\Pi = \eta ^\Pi (0), \qquad \nu ^\Pi = \sum _k \big ( 1 - \sigma ^\gamma (k) \big )\, \eta ^\Pi (k)\)
for the fraction of agents who have no tokens and the fraction of agents who do not serve (in the invariant distribution induced by \(\Pi \)), respectively. We say \(\Pi = (\alpha , \gamma )\) is an equilibrium protocol if \(\sigma _K\) is an optimal strategy (given \(\mu ^\Pi , \nu ^\Pi \)) whenever \(\gamma (K) > 0\). That is, \(\gamma \) puts positive weight only on threshold strategies that are optimal, given the invariant distribution that \(\Pi \) itself induces.
Using the one-shot deviation principle, we can provide a useful alternative description of equilibrium in terms of the value function \(V\). As noted before, because optimal strategies exist and are Markov, we may unambiguously write \(V_k\) for the value following any history at which the agent has \(k\) tokens. (The value function depends on the population data \(\mu , \nu \) and on the environmental parameters \(b, c, \beta \); but there should be no confusion in suppressing those here.)
Fix any Markov strategy \(\sigma \). In order for \(\sigma \) to be optimal, it is necessary and sufficient that it achieves the value \(V_\ell \) following every token holding \(\ell \). Expressed in terms of current token holdings and future values, and taking into account how behavior in a given period affects the token holding in the next period, this means that \(\sigma \) is optimal if and only if it satisfies the following system of equations: for every token holding \(k\),
\(V_k = \rho \, C_k + \rho \, S_k + (1 - 2\rho )\, \beta V_k\)
where \(C_k = (1-\nu )(b + \beta V_{k-1}) + \nu \, \beta V_k\) for \(k \ge 1\) and \(C_0 = \beta V_0\), \(S_k = (1-\mu ) \max \{ -c + \beta V_{k+1},\ \beta V_k \} + \mu \, \beta V_k\), and \(\sigma (k)\) attains the maximum in \(S_k\).
Applying this observation to the threshold strategy \(\sigma _K\) and carrying out the requisite algebra, we conclude that \(\sigma _K\) is optimal if and only if
\(\beta (V_{k+1} - V_k) \ge c \ \text{ for all } k < K \qquad \text{and} \qquad \beta (V_{k+1} - V_k) \le c \ \text{ for all } k \ge K.\)
(If it seems strange that \(\alpha , \gamma \) do not appear in these inequalities, remember that the value depends on the invariant distribution \(\eta ^\Pi \), which in turn depends on \(\alpha \) and on \(\gamma \).)
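Whether a given threshold is optimal can be checked numerically by standard value iteration. The sketch below is our own illustration under our reading of the model (clients always request; a server faces a paying client with probability \(1-\mu \); a requesting client is served with probability \(1-\nu \); serving at holding \(k\) is worthwhile exactly when \(\beta (V_{k+1} - V_k) \ge c\)). The truncation level `N` and all names are ours; truncating holdings at `N` is harmless as long as `N` is well above the optimal threshold.

```python
def value_function(mu, nu, b, c, beta, rho, N=60, tol=1e-12):
    """Optimal values V[0..N] by value iteration, holdings truncated at N."""
    V = [0.0] * (N + 1)
    while True:
        W = []
        for k in range(N + 1):
            # As a client: request whenever possible; served with prob. 1 - nu.
            if k >= 1:
                client = (1 - nu) * (b + beta * V[k - 1]) + nu * beta * V[k]
            else:
                client = beta * V[k]
            # As a server: a paying client arrives with prob. 1 - mu;
            # serve only if the continuation gain covers the cost.
            serve = -c + beta * V[k + 1] if k < N else beta * V[k]
            server = (1 - mu) * max(serve, beta * V[k]) + mu * beta * V[k]
            W.append(rho * client + rho * server + (1 - 2 * rho) * beta * V[k])
        if max(abs(w - v) for w, v in zip(W, V)) < tol:
            return W
        V = W

def best_threshold(V, beta, c):
    """Largest K such that serving pays at every holding below K, i.e.
    beta * (V[k+1] - V[k]) >= c for all k < K."""
    K = 0
    while K + 1 < len(V) and beta * (V[K + 1] - V[K]) >= c:
        K += 1
    return K
```

The computed values are nondecreasing in the token holding and respect the bound \(V_k \le b/(1-\beta )\) noted earlier.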
Given a benefit/cost ratio \(r > 1\) and a discount factor \(\beta < 1\), write \(EQ(r,\beta )\) for the set of protocols \(\Pi \) that constitute an equilibrium when the benefit/cost ratio is \(r\) and the discount factor is \(\beta \). Conversely, given a protocol \(\Pi \) write \(E(\Pi )\) for the set \(\{(r,\beta )\}\) of pairs of benefit/cost ratios \(r\) and discount factors \(\beta \) such that \(\Pi \) is an equilibrium protocol when the benefit/cost ratio is \(r\) and discount factor is \(\beta \). Note that \(EQ, E\) are correspondences (which might have empty values) and are inverse to each other.
Given \(r, \beta \) we say that \(\Pi \) is a robust equilibrium if \((r,\beta )\) belongs to the interior of \(E(\Pi )\); i.e., there is some \(\varepsilon > 0\) such that \(\Pi \in EQ(r^{\prime } ,\beta ^{\prime })\) whenever \(|r^{\prime } - r| < \varepsilon \) and \(|\beta ^{\prime } - \beta | < \varepsilon \). Write \(EQR(r,\beta )\) for the set of robust equilibrium protocols for the benefit/cost ratio \(r\) and discount factor \(\beta \) and \(ER(\Pi )\) for the set \(\{(r,\beta )\}\) of pairs of benefit/cost ratios for which \(\Pi \) is a robust equilibrium. Note that \(EQR, ER\) are correspondences (which might have empty values) and are inverse to each other.
4 Equilibrium and robust equilibrium
We first describe the nature of equilibrium and robust equilibrium and then use that description to show that robust equilibria exist. The crucial fact about equilibrium is that the strategy part of an equilibrium protocol can involve mixing over at most two thresholds and that these thresholds must be adjacent; the crucial fact about robust equilibrium is that the strategy cannot involve strict mixing at all but must rather be a pure strategy.
Theorem 4
For each benefit/cost ratio \(r > 1\) and discount factor \(\beta < 1\) the set \(EQ(r,\beta )\) is either empty or consists of protocols that involve only (possibly degenerate) mixtures of two threshold strategies with adjacent thresholds.
Theorem 5
If \(\Pi = (\alpha , \sigma )\) is a robust equilibrium then \(\sigma \) is a pure threshold strategy.
The existence of equilibrium or robust equilibrium does not seem at all obvious (and our proof is not simple). For both intuition and technical reasons, it is convenient to work “backward”: rather than beginning with population parameters \(r, \beta \) and looking for protocols \(\Pi \) that constitute an equilibrium for those parameters, we begin with a protocol \(\Pi \) and look for population parameters \(r, \beta \) for which \(\Pi \) constitutes an equilibrium. That is, we do not study the correspondences \(EQ(r,\beta )\) and \(EQR(r,\beta )\) directly, but rather the inverse correspondences \(E(\Pi )\) and \(ER(\Pi )\). This is easier for several reasons, one of which is that the latter correspondences always have non-empty values.
To give an intuitive understanding of the difficulty and how we overcome it, fix a protocol \(\Pi = (\alpha , \sigma )\) and let \(\eta ^\Pi \) be the invariant distribution. Because we will eventually want to find a robust equilibrium, we assume \(\sigma \) is a threshold strategy: \(\sigma = \sigma _K\). To look for population parameters \(r, \beta \) for which \(\Pi \) is an equilibrium, let us fix \(r\) and let \(\beta \) vary. (We could fix \(\beta \) and let \(r\) vary, or vary both \(\beta , r\) simultaneously, but the intuition is most easily conveyed by fixing \(r\) and letting \(\beta \) vary.) As we have already noted, the invariant distribution \(\eta ^\Pi \), and hence \(\mu ^\Pi , \nu ^\Pi \), depends only on \(\Pi \) and so does not change as \(\beta \) varies. Given the invariant distribution, if \(\beta \) is close to \(0\), an agent has little incentive to acquire tokens; however, the incentive to acquire tokens increases as \(\beta \rightarrow 1\). It can be shown that there is a smallest discount factor \(\beta _L(\Pi )\) with the property that an agent whose discount factor is at least \(\beta _L(\Pi )\) will be willing to continue providing service until he has acquired \(K\) tokens. This is not enough, because \(\sigma _K\) will only be incentive compatible if the agent is also willing to stop providing service after he has acquired \(K\) tokens. However, it can also be shown that there is a largest discount factor \(\beta _H(\Pi )\) for which the agent is willing to stop providing service after he has acquired \(K\) tokens, and that \(\beta _L(\Pi ) < \beta _H(\Pi )\). (Recall that \(r, \Pi \) are fixed.) For every discount factor \(\beta \) in the closed interval \([\beta _L(\Pi ), \beta _H(\Pi )]\), the protocol \(\Pi \) is an equilibrium when the population parameters are \(r, \beta \); that is, \((r,\beta ) \in E(\Pi )\). 
From this, it can be shown that for every discount factor \(\beta \) in the interval \((\beta _L(\Pi ), \beta _H(\Pi ))\), the protocol \(\Pi \) is a robust equilibrium when the population parameters are \(r, \beta \), that is, \((r,\beta ) \in ER(\Pi )\). Similarly, we can hold \(\beta \) fixed and let \(r\) vary from 1 to \(\infty \), construct the corresponding intervals \([r_L(\Pi ), r_H(\Pi )]\) with \(r_L(\Pi ) < r_H(\Pi )\) and then show that for every benefit/cost ratio \(r\) in the open interval \((r_L(\Pi ), r_H(\Pi ))\) the protocol \(\Pi \) is a robust equilibrium when the population parameters are \(r, \beta \); that is, \((r,\beta ) \in ER(\Pi )\). This is the content of Theorem 6 below.
Applying this procedure for every protocol yields a family \(\{ER(\Pi )\}\) of non-empty open sets of parameters \(r,\beta \) for which robust equilibria exist. However, our work is not done because we do not know whether a robust equilibrium exists for given population parameters \(r, \beta \). To see that it does, we show that \(\{ER(\Pi )\}\) covers a big enough set of population parameters. In particular, for each \(r > 1\), there is a \(\beta ^* < 1\) such that \(\{ER(\Pi )\}\) covers the set \(\{(r, \beta ) : \beta > \beta ^* \}\); this means that for each \(r > 1\) and \(\beta > \beta ^*\) there is a protocol \(\Pi \) that constitutes a robust equilibrium for the population parameters \(r, \beta \). Similarly, for each discount factor \(\beta \in (0,1)\) there is an \(r^* > 1\) such that if \(r > r^*\) there is a protocol that constitutes a robust equilibrium for the population parameters \(r, \beta \). The proof is not easy: we first establish (Theorem 6) the interval structure of \(E(\Pi )\) for threshold protocols and then apply it (Theorem 7) to the special protocols \(\Pi _K = (K/2, \sigma _K)\) to obtain the desired result.
It is natural to ask why our proof seems (and is) so much more complicated than existence proofs in the literature, such as in Berentsen (2002). The answer is that the literature establishes the existence of equilibrium only under the assumption that there is an exogenous upper bound \(K^*\) on the number of tokens any agent can hold. As discussed above, this assumption makes it relatively easy to show that equilibrium exists: Fix the benefit/cost ratio \(r > 1\) and an arbitrary \(\alpha > 0\) and consider the protocol \((\alpha , \sigma _{K^*})\). As above, an agent whose discount factor \(\beta \) is at least \(\beta _L(\alpha , \sigma _{K^*})\) will provide service until he has acquired \(K^*\) tokens; under the assumption that \(K^*\) is an upper bound on the number of tokens any agent can hold, the agent will stop providing service after he has acquired \(K^*\) tokens because, by assumption, he cannot hold more than \(K^*\) tokens, so providing service incurs a present cost with no future benefit. Hence, \((\alpha , \sigma _{K^*})\) is an equilibrium protocol for every \(\beta \ge \beta _L(\alpha , \sigma _{K^*})\) and is a robust equilibrium protocol for every \(\beta > \beta _L(\alpha , \sigma _{K^*})\). Thus, any protocol can be supported in equilibrium so long as agents are sufficiently patient. As we have noted in the Introduction, assuming an exogenous upper bound on token holdings does not seem realistic in the environments we consider.
Theorem 6
Fix a protocol \(\Pi = (\alpha , \sigma _K)\).
-
(i)
For each benefit/cost ratio \(r > 1\), the set \(\{\beta : \Pi \in EQ(r,\beta )\}\) is a non-degenerate closed interval \([\beta _L(\Pi ), \beta _H(\Pi )]\) whose endpoints are continuous functions of \(r\).
-
(ii)
For each discount factor \(\beta < 1\), the set \(\{r : \Pi \in EQ(r,\beta )\}\) is a non-degenerate closed interval \([r_L(\Pi ), r_H(\Pi )]\) whose endpoints are continuous functions of \(\beta \).
These results are illustrated for \(\alpha = 1/4\) in Figs. 1 and 2. (Fig. 1 may give the impression that the intervals for successive values of \(K\) do not overlap, but as Fig. 2 illustrates, they actually do; the overlap is masked by the granularity of the figure. However, as we have already said, we do not assert that overlap of the intervals for successive values of \(K\) is a general property.)
For the special protocols \(\Pi _K = (K/2, \sigma _K)\), in which the supply of tokens is exactly half the selling threshold, we prove in Theorem 7 below that the intervals corresponding to successive values of the threshold overlap but are not nested. This is exactly what we need to guarantee that (non-degenerate) equilibria always exist provided that \(\beta , r\) are sufficiently large; Theorem 10 provides estimates on how big \(\beta , r\) must be.
Theorem 7
Robust equilibria exist whenever \(\beta , r\) are sufficiently large. More precisely:
-
(i)
For each fixed threshold \(K\) and benefit/cost ratio \(r > 1\), successive \(\beta \)-intervals overlap but are not nested:
$$\begin{aligned} \beta _L(\Pi _{K-1}) < \beta _L(\Pi _{K}) < \beta _H(\Pi _{K-1}) < \beta _H(\Pi _{K}) \end{aligned}$$
Moreover,
$$\begin{aligned} \lim _{K \rightarrow \infty } \beta _L(\Pi _{K}) = 1 \end{aligned}$$
In particular, there is some \(\beta ^* < 1\) such that \(EQR(r, \beta ) \not = \emptyset \) for all \(\beta > \beta ^*\).
-
(ii)
For each fixed threshold \(K\) and discount factor \(\beta < 1\), successive \(r\)-intervals overlap but are not nested:
$$\begin{aligned} r_L(\Pi _{K-1}) < r_L(\Pi _{K}) < r_H(\Pi _{K-1}) < r_H(\Pi _{K}) \end{aligned}$$
Moreover,
$$\begin{aligned} \lim _{K \rightarrow \infty } r_L(\Pi _{K}) = \infty \end{aligned}$$
In particular, there is some \(r^* > 1\) such that \(EQR(r, \beta ) \not = \emptyset \) for all \(r > r^*\).
It follows from Theorem 7 that, as \(K \rightarrow \infty \), the left-hand endpoints \(\beta _L(\Pi _K) \rightarrow 1\), so a fortiori the lengths of the \(\beta \)-intervals shrink to \(0\). It is natural to guess that the lengths of these intervals shrink monotonically to \(0\), and simulations suggest that this guess is correct, but we have neither a proof nor a good intuition that it is actually true. We guess that the lengths of the \(r\)-intervals likewise shrink monotonically, but again we have neither proof nor good intuition.
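The “In particular” clauses rest on a covering argument: successive intervals overlap and their left endpoints tend to 1, so their union contains an entire right neighborhood of \(\beta ^*\). The sketch below illustrates only that logic; the endpoint functions `beta_L` and `beta_H` are hypothetical stand-ins chosen to satisfy the chain in part (i), not quantities derived from the model.

```python
# Illustrative only: hypothetical endpoint functions satisfying the
# overlap chain of Theorem 7(i),
#   beta_L(K-1) < beta_L(K) < beta_H(K-1) < beta_H(K),
# together with beta_L(K) -> 1 as K -> infinity.
def beta_L(K):
    return 1.0 - 1.0 / K

def beta_H(K):
    return 1.0 - 1.0 / (K + 2)

def covering_threshold(beta, K_max=10**6):
    """Return some K with beta_L(K) <= beta <= beta_H(K), scanning upward.

    Because successive intervals overlap and their left endpoints tend
    to 1, the union of the intervals contains [beta_L(1), 1), so the
    scan succeeds for every beta in that range.
    """
    K = 1
    while K <= K_max:
        if beta_L(K) <= beta <= beta_H(K):
            return K
        K += 1
    return None

# Every discount factor in [beta_L(1), 1) lies in some interval.
for beta in (0.5, 0.9, 0.99, 0.999):
    K = covering_threshold(beta)
    assert K is not None and beta_L(K) <= beta <= beta_H(K)
```

The closer \(\beta \) is to 1, the larger the threshold needed to cover it, matching the limit \(\beta _L(\Pi _K) \rightarrow 1\).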
5 Efficiency
If agents were compliant (rather than self-interested), the designer could simply instruct them to provide service at every meeting and they would comply, so the per capita social gain in each period would be \(\rho (b-c)\). If agents follow the protocol \(\Pi = (\alpha , \sigma _K)\), then service will be provided only in those meetings where the client can buy service and the server is willing to provide service, so the per capita social gain in each period will be \(\rho (b-c)(1-\mu ^\Pi )(1-\nu ^\Pi )\). Hence, we define the efficiency of the protocol \(\Pi \) to be the ratio of these gains:
$$\begin{aligned} \text{ Eff}(\Pi ) = (1-\mu ^\Pi )(1-\nu ^\Pi ) \end{aligned}$$
In general, it seems hard to determine the efficiency of a given protocol or to compare the efficiency of different protocols. However, we can provide efficiency bounds for protocols that utilize a given threshold strategy \(\sigma _K\) and compute the precise efficiency of the protocols \(\Pi _K\).Footnote 9
Theorem 8
For each \(\alpha \in (0,\infty )\), each threshold \(K\) and all values of the population parameters we have:
-
(i)
\(\text{ Eff}(\alpha , \sigma _K) \le 1 - \frac{1}{2\lceil \alpha \rceil + 1}\)
-
(ii)
\(\text{ Eff}(\alpha , \sigma _K) \le \text{ Eff}(\Pi _K)\)
-
(iii)
\(\text{ Eff}(\Pi _K) = \left(1 - \frac{1}{K+1}\right)^2 = \left(\frac{K}{K+1} \right)^2\)
Two implications of Theorem 8 are immediate. The first is that, in order that a (threshold) protocol achieve efficiency near 1, it is necessary that it provide a large number of tokens and also that it prescribe a high selling threshold. Put differently: to yield full efficiency in the limit it is not enough to increase the number of tokens without bound or to increase the threshold without bound—both must be increased without bound. The second is that the protocols \(\Pi _K\) that provide \(K/2\) tokens per capita are the most efficient protocols that utilize a given threshold strategy \(\sigma _K\).
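Both implications can be checked numerically from the closed forms stated in Theorem 8; the sketch below uses only those formulas (the helper names are ours):

```python
import math

def eff_cap(alpha):
    # Theorem 8(i): bound on Eff(alpha, sigma_K), valid for every threshold K
    return 1.0 - 1.0 / (2 * math.ceil(alpha) + 1)

def eff_pi(K):
    # Theorem 8(iii): exact efficiency of Pi_K = (K/2, sigma_K)
    return (K / (K + 1)) ** 2

# The cap depends only on the token supply: raising the threshold alone
# cannot push efficiency past it, so both alpha and K must grow without
# bound for efficiency to approach 1.
assert all(eff_pi(K) <= eff_cap(K / 2) for K in range(1, 200))
print([round(eff_pi(K), 3) for K in (1, 3, 10, 100)])
```

For instance, \(\text{ Eff}(\Pi _3) = (3/4)^2 = 0.5625\), while the cap with \(\alpha = 3/2\) is \(1 - 1/5 = 0.8\), consistent with Theorem 8(i).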
We caution the reader, however, that the protocols \(\Pi _K\) need not be equilibrium protocols, and it is (robust) equilibrium protocols that we seek. However, it follows immediately from Theorem 7 that if agents are sufficiently patient or the benefit/cost ratio is sufficiently large (or both), some protocol \(\Pi _K\) with large \(K\) is an equilibrium, and hence nearly efficient equilibrium protocols always exist.
Theorem 9
-
(i)
for each fixed discount factor \(\beta < 1\)
$$\begin{aligned} \liminf _{r \rightarrow \infty } \sup \{ \text{ Eff}(\Pi _K) : \Pi _K \in EQR(r,\beta ) \} = 1 \end{aligned}$$ -
(ii)
for each fixed benefit/cost ratio \(r > 1\)
$$\begin{aligned} \liminf _{\beta \rightarrow 1} \sup \{ \text{ Eff}(\Pi _K) : \Pi _K \in EQR(r,\beta ) \} = 1 \end{aligned}$$
In words, as agents become arbitrarily patient or the benefit/cost ratio becomes arbitrarily large, it is possible to choose robust equilibrium protocols that achieve efficiency arbitrarily close to first best. Some intuition might be useful. Consider the protocols \(\Pi _K\) and the corresponding invariant distributions. As \(K\) increases, the fraction of agents who cannot purchase service and the fraction of agents who will not provide service both decrease, so efficiency increases. However, if \(r, \beta \) are fixed and \(K\) increases, then the protocols \(\Pi _K\) will eventually cease to be equilibrium protocols, so equilibrium efficiency is bounded. On the other hand, if we fix \(r\) and let \(\beta \rightarrow 1\), or fix \(\beta \) and let \(r \rightarrow \infty \), then the thresholds \(K\) for which the protocols \(\Pi _K\) are equilibrium protocols blow up, and hence efficiency tends to 1. Put differently: high discount factors or high benefit/cost ratios make the use of high thresholds consistent with equilibrium.
Theorem 9 provides asymptotic efficiency results; the following result presents an explicit lower bound (in terms of the population parameters \(r, \beta \)) for the efficiency obtainable by a robust equilibrium protocol.
Theorem 10
Given the benefit/cost ratio \(r > 1\) and the discount factor \(\beta < 1\), define:Footnote 10
Then:
-
(i)
all the thresholds \(K\) for which \(\Pi _K\) is a robust equilibrium protocol lie in the interval \([ K^L , K^H]\);
-
(ii)
the efficiency of the optimal robust equilibrium protocol is at least \(\left( 1 - \frac{1}{K^L+1}\right)^2 = \left(\frac{K^L}{K^L+1}\right)^2\).
Theorem 10 yields a lower bound on efficiency because the optimal robust equilibrium protocol is at least as efficient as any protocol \(\Pi _K\) that is a robust equilibrium, but does not yield an upper bound on efficiency because the optimal robust equilibrium protocol might be more efficient than any protocol \(\Pi _K\) that is a robust equilibrium.
Theorem 10 also yields the designer an effective procedure for finding a robust equilibrium whose efficiency is good, if not optimal, since all that is necessary is to check protocols \(\Pi _K\) with thresholds \(K\) in the (finite) interval \([ K^L , K^H]\). Moreover, it is not necessary to conduct an exhaustive search. Rather, the designer can begin by checking the protocol \(\Pi _{K}\), where \(K\) is the midpoint of the interval \([ K^L , K^H]\). If \(M_{\sigma _{K}}(K-1) \ge c/\beta \) and \(M_{\sigma _K}(K) \le c/\beta \), then \(\Pi _{K}\) is an equilibrium protocol and the search can stop. If \(M_{\sigma _K}(K-1) < c/\beta \), then for all \(K^{\prime } > K\), \(M_{\sigma _{K^{\prime }}}(K^{\prime }-1) < c/\beta \) (because \(\beta _L(\Pi _{K^{\prime }}) > \beta _L(\Pi _{K})\)). Therefore, threshold protocols for which \(K^{\prime } > K\) cannot be equilibria and the designer can restrict the search to the left half interval \([ K^L,K]\). If \(M_{\sigma _{K}}(K) > c/\beta \), then for all \(K^{\prime } < K\), \(M_{\sigma _{K^{\prime }}}(K^{\prime }) > c/\beta \) (because \(\beta _H(\Pi _{K^{\prime }}) < \beta _H(\Pi _K)\)). Therefore, threshold protocols for which \(K^{\prime } < K\) cannot be equilibria and the designer can restrict the search to the right half interval \([ K,K^H]\). Continuing to bisect in this way, the designer can find an equilibrium threshold protocol in at most \(\log _2 (K^H - K^L)\) iterations.
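The bisection just described can be sketched as follows. This is a sketch under an assumption: rather than computing the marginal values \(M_{\sigma _K}\), we treat the two tests as equivalent to the endpoint comparisons \(\beta \ge \beta _L(\Pi _K)\) and \(\beta \le \beta _H(\Pi _K)\) (in the spirit of Sect. 4) and pass \(\beta _L, \beta _H\) in as callables; the concrete endpoint functions used below are hypothetical stand-ins, increasing in \(K\) as Theorem 7 requires.

```python
def find_equilibrium_threshold(beta, K_lo, K_hi, beta_L, beta_H):
    """Bisect over thresholds K in [K_lo, K_hi] for one with
    beta_L(K) <= beta <= beta_H(K); beta_L and beta_H must be
    increasing in K (the overlap structure of Theorem 7)."""
    while K_lo <= K_hi:
        K = (K_lo + K_hi) // 2
        if beta < beta_L(K):
            # stands in for M_{sigma_K}(K-1) < c/beta: no equilibrium
            # threshold at or above K, so search the left half
            K_hi = K - 1
        elif beta > beta_H(K):
            # stands in for M_{sigma_K}(K) > c/beta: no equilibrium
            # threshold at or below K, so search the right half
            K_lo = K + 1
        else:
            return K  # both marginal-value tests pass
    return None  # no threshold protocol in the interval is an equilibrium

# Hypothetical monotone endpoints (stand-ins, not derived from the model):
beta_L = lambda K: K / (K + 1)
beta_H = lambda K: (K + 0.5) / (K + 1)
K = find_equilibrium_threshold(0.9, 2, 50, beta_L, beta_H)
assert K is not None and beta_L(K) <= 0.9 <= beta_H(K)
```

Each iteration halves the candidate interval, giving the \(\log _2 (K^H - K^L)\) bound stated above.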
5.1 The optimal quantity of money
The question naturally arises: “Which equilibrium protocols are most efficient?” Because all robust equilibrium protocols are threshold protocols, this amounts to asking for which values of \(\alpha ,K\) is \((\alpha ,\sigma _K)\) the most efficient equilibrium protocol. If we focus on \(\alpha \) we are asking a familiar question: “What is the optimal quantity of money?” Kiyotaki and Wright (1989) constrain agents to hold no more than \(1\) unit of money and show that the optimal quantity of money is \(1/2\). Berentsen (2002) relaxes the constraint on money holdings to \(K\) and shows that (with certain assumptions) the optimal quantity of money is \(K/2\). However, this conclusion is an artifact of the constraint that agents can hold no more than \(K\) units of money. In our framework, which does not place an exogenous constraint on money holdings, \(K/2\) may not be—and often will not be—the optimal quantity of money. Figure 3 illustrates this point in a simulation, but it is in fact quite a robust phenomenon.
To see why this is so, fix \(r > 1\) and \(K \ge 1\). Theorem 7 guarantees that there is an open interval of discount factors for which \(\Pi _K\) is an equilibrium and an open interval of discount factors for which \(\Pi _{K+1}\) is an equilibrium, and these intervals overlap:
$$\begin{aligned} \beta _L(\Pi _{K}) < \beta _L(\Pi _{K+1}) < \beta _H(\Pi _{K}) < \beta _H(\Pi _{K+1}) \end{aligned}$$
Consider a discount factor \(\beta \) with \(\beta _L(\Pi _{K}) < \beta < \beta _L(\Pi _{K+1})\). By construction, \(\Pi _{K} = (K/2, \sigma _{K})\) is an equilibrium and \(\Pi _{K+1} = ((K+1)/2, \sigma _{K+1})\) is not, so \(\Pi _{K}\) is the most efficient equilibrium protocol among the protocols \(\Pi _{K^{\prime }}\). However, these are not the only protocols: if we seek the most efficient among all equilibrium protocols, we must also consider protocols \((\alpha , \sigma _{K^{\prime }})\) for values of \(\alpha \) other than \(\alpha = K^{\prime }/2\). For discount factors \(\beta < \beta _L(\Pi _{K+1})\) for which \(\vert \beta _L(\Pi _{K+1}) - \beta \vert \) is sufficiently small, there will be token supplies \(\alpha < (K+1)/2\) for which \(\vert (K+1)/2 - \alpha \vert \) is as small as we like and for which \((\alpha , \sigma _{K+1})\) is an equilibrium protocol. If \(\vert (K+1)/2 - \alpha \vert \) is small, then the invariant distributions for \((\alpha , \sigma _{K+1})\) and for \(((K+1)/2, \sigma _{K+1})\) will be close, and hence the efficiency of \((\alpha , \sigma _{K+1})\) will be almost equal to the efficiency of \(((K+1)/2,\sigma _{K+1}) = \Pi _{K+1}\). Since the efficiency of \(\Pi _{K+1}\) is strictly greater than the efficiency of \(\Pi _{K}\), this means that \((\alpha , \sigma _{K+1})\) is an equilibrium protocol that is more efficient than \(\Pi _{K}\). In other words, for discount factors less than but very close to \(\beta _L(\Pi _{K+1})\), \(K/2\) is not the optimal quantity of money.
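The continuity argument above can be checked numerically under one illustrative assumption of ours, not a result proved in the text: that the invariant distribution of token holdings for a protocol \((\alpha , \sigma _K)\) is a truncated geometric distribution \(\eta (k) \propto \theta ^k\) on \(\{0,\ldots ,K\}\), with \(\theta \) chosen so that the mean holding equals \(\alpha \). (This form is at least consistent with Theorem 8(iii): at \(\alpha = K/2\) it forces \(\theta = 1\), the uniform distribution, and \(\text{ Eff}(\Pi _K) = (K/(K+1))^2\).) Under it, a token supply slightly below \((K+1)/2\) with threshold \(K+1\) already beats \(\Pi _K\):

```python
def invariant_dist(K, alpha, tol=1e-12):
    """Truncated-geometric distribution eta(k) ∝ theta^k on {0,...,K},
    with theta solved by bisection so that the mean holding is alpha.
    Illustrative assumption only; see the surrounding text."""
    def mean(theta):
        w = [theta ** k for k in range(K + 1)]
        return sum(k * wk for k, wk in enumerate(w)) / sum(w)
    lo, hi = 1e-9, 1e9            # mean(theta) is increasing in theta
    while hi - lo > tol * max(1.0, lo):
        theta = (lo + hi) / 2
        if mean(theta) < alpha:
            lo = theta
        else:
            hi = theta
    w = [((lo + hi) / 2) ** k for k in range(K + 1)]
    s = sum(w)
    return [wk / s for wk in w]

def eff(K, alpha):
    eta = invariant_dist(K, alpha)
    mu, nu = eta[0], eta[K]       # agents who cannot buy / will not sell
    return (1 - mu) * (1 - nu)

# Pi_3 gives (3/4)^2 = 0.5625; threshold 4 with the slightly smaller
# token supply alpha = 1.9 < 2 already does better, so K/2 need not be
# the optimal quantity of money.
assert abs(eff(3, 1.5) - (3 / 4) ** 2) < 1e-6
assert eff(4, 1.9) > eff(3, 1.5)
```

The same computation shows \(\text{ Eff}(4, \sigma _4)\) rising toward \((4/5)^2 = 0.64\) as \(\alpha \rightarrow 2\), mirroring the continuity step in the argument.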
As this discussion illustrates, it is crucial to the design problem that the designer be able to choose the quantity of money \(\alpha \), since it is through \(\alpha \) that the designer controls efficiency (social welfare).
5.2 Choosing the right protocol
The reader may wonder why we have put so much emphasis on choosing the right protocol. As Fig. 3 already shows, the reason is simple: choosing the wrong protocol can result in an enormous efficiency loss. Figure 4, which compares efficiency of the most efficient protocol with efficiency of a protocol for which the strategic threshold is constrained to be \(K = 3\), makes this point in an even starker way: as the reader will see, except for a small range of discount factors, the efficiency loss is enormous.
6 Conclusion
In this paper, we have analyzed in some detail a simple, practicable, and distributed method to incentivize trade in online environments through the use of (electronic) tokens. We have shown that when agents are patient, the method we offer can achieve outcomes that are nearly efficient, provided the right protocol (supply of tokens and recommended threshold) is chosen, but that equilibrium and efficiency are both sensitive to the precise choice of protocol. Surprisingly, the “optimal” supply of tokens need not be half the recommended threshold; this conclusion, and others, as well as much of the difficulty of our arguments, are consequences of allowing agents to accumulate as many tokens as they wish, rather than imposing an exogenous bound on token holdings (as is common in the literature).
Our analysis is silent about convergence to the steady state. In particular, we do not know whether the recommended strategies would lead to convergence to the invariant distribution for all initial token distributions or only for some particular token distributions. Berentsen (2002) proves convergence under some conditions, but in a continuous time model in which token holdings are subject to an exogenous bound; we have already noted that the latter is a strong (and, in our view, unrealistic) assumption. Another point is worth making as well. By definition, the recommended strategy is a best reply when the system is in the steady state, but it need not be (and very likely is not) a best reply when the system is out of the steady state; why, then, should agents follow it?
We have assumed that service and tokens are both indivisible. This seems a natural assumption given the environment in which we are interested because a partial file is usually worthless by itself and because there is no (extant) technology for online exchange of fractional tokens. The assumption that service and tokens are exchanged one-for-one is a genuine restriction. It is conceivable that there would exist an equilibrium in which different quantities of tokens sometimes change hands, and such equilibria (if they do exist) might be more efficient than the ones we consider here. Determining whether such equilibria exist and characterizing them (if they do exist) seems a daunting task that none of the literature seems to have addressed.Footnote 11
We have considered the simplest setting, in which agents are identical, all files are equally valuable, and no errors occur. In a more realistic setting, we would need to take account of heterogeneous agents and files and allow for the possibilities of errors (in transmission of files or exchange of tokens or both). We have followed here the well-known adage “one has to start somewhere”—but we are keenly aware that there is much more work to be done.
Notes
Empirical studies show that this free-riding problem can be quite severe: in the Gnutella system, for instance, almost 70% of users share no files at all (Adar and Huberman 2000).
It might be useful to note that none of the systems mentioned above involve a central authority or central monitoring agency. Napster, for instance, merely maintained many partial lists (distributed across many servers) of music files available and contact information for subscribers who had these files; users seeking files could simply search these lists and then contact the file-holder directly. In practice, the absence of a central agency is crucial, since it could not handle the volume of traffic that would be generated and would be exceedingly vulnerable to attack. Hence, a distributed system is a sine qua non.
The reader might wonder how agents who do not meet in person can exchange tokens at all, since they can only exchange electronic files, and electronic files would seem to be easily duplicated. In fact, however, there are practicable, secure, and private procedures for online token exchange, utilizing hardware or software or both; see Buttyan and Hubaux (2001), Chandrakumar et al. (2003) and Ciuffoletti (2010) for instance. Similar procedures can also serve as escrow accounts to assure that service that is promised is actually provided and that payment that is promised is actually made.
In the real systems we have in mind, the population is in the tens of thousands or hundreds of thousands so a continuum model seems a reasonable approximation.
We say “the” price because Zhou (1999) considers only single-price equilibria.
If \(b \le c\) there is no social value to providing service; if \(c \le 0\) agents will always be willing to provide service.
Because a request for service will not be honored when an agent holds 0 tokens, it is irrelevant whether \(\tau (0) = 0\) or \(\tau (0) = 1\).
Berentsen (2002) derives similar results in a different model, with Poisson arrival rates.
Note that both the base of the logarithms and the arguments are less than 1.
Zhou (1999) considers equilibria for various prices, but all the equilibria she studies are assumed to have a single price, and agent holdings in these equilibria are integral multiples of that single price.
References
Adar, E., Huberman, B.: Free riding on Gnutella. First Monday 5(10), 2 (2000)
Alós-Ferrer, C.: Dynamical systems with a continuum of randomly matched agents. J. Econ. Theory 86(2), 245–267 (1999)
Berentsen, A.: Money inventories in search equilibrium. J. Money Credit Bank. 32(2), 168–178 (2000)
Berentsen, A.: On the distribution of money holdings in a random matching model. Int. Econ. Rev. 43(3), 945–954 (2002)
Berentsen, A., Camera, G., Waller, C.: Money, credit and banking. J. Econ. Theory 135(1), 171–195 (2007)
Buttyan, L., Hubaux, J.-P.: Nuglets: a virtual currency to stimulate cooperation in self-organized mobile ad hoc networks. EPFL Technical Report (2001)
Buttyan, L., Hubaux, J.-P.: Stimulating cooperation in self-organizing mobile ad hoc networks. Mobile Netw. Appl. 8(5), 579–592 (2003)
Camera, G., Corbae, D.: Money and price dispersion. Int. Econ. Rev. 40(4), 985–1008 (1999)
Cavalcanti, R.O., Wallace, N.: Inside and outside money as alternative media of exchange. J. Money Credit Bank. 31(3), 443–457 (1999)
Cavalcanti, R.O., Wallace, N.: A model of private bank-note issue. Rev. Econ. Dyn. 2(1), 104–136 (1999)
Chandrakumar, S., Sirer, E.G., Vishnumurthy, V.: KARMA: a secure economic framework for peer-to-peer resource sharing. Workshop on the Economics of Peer-to-Peer Systems (2003)
Ciuffoletti, A.: Secure token passing at application level. Futur. Gener. Comput. Syst. 26(7), 1026–1031 (2010)
Duffie, D., Sun, Y.: Existence of independent random matching. Ann. Appl. Probab. 17(1), 386–419 (2007)
Ellison, G.: Basins of attraction, long-run stochastic stability, and the speed of step-by-step evolution. Rev. Econ. Stud. 67(1), 17–45 (2000)
Figueiredo, D.R., Shapiro, J.K., Towsley, D.: Payment-based incentives for anonymous peer-to-peer systems. UMass CMPSCI Technical Report 04–62, July (2004)
Friedman, E., Halpern, J., Kash, I.: Efficiency and Nash equilibria in a scrip system for P2P networks. In: Proceedings of the Seventh ACM Conference on Electronic Commerce, pp. 140–149 (2006)
Friedman, E., Halpern, J., Kash, I.: Optimizing scrip systems: efficiency, crashes, hoarders, and altruists. In: Proceedings of the Eighth ACM Conference on Electronic Commerce, pp. 305–315 (2007)
Hu, T., Kennan, J., Wallace, N.: Coalition-proof trade and the Friedman rule in the Lagos-Wright model. J. Polit. Econ. 117(1), 116–137 (2009)
Jarvis, S.A., Tan, G.: A payment-based incentive and service differentiation mechanism for peer-to-peer streaming broadcast. In: 14th IEEE International Workshop on Quality of Service (2006)
Kandori, M.: Social norms and community enforcement. Rev. Econ. Stud. 59(1), 63–80 (1992)
Kiyotaki, N., Wright, R.: On money as a medium of exchange. J. Polit. Econ. 97(4), 927–954 (1989)
Kocherlakota, N.R.: The two money theorem. Int. Econ. Rev. 43(2), 333–346 (2002)
Mohr, A.E., Pai, V.: Improving robustness of peer-to-peer streaming with incentives. Workshop on the Economics of Networks, Systems and Computation (2006)
Park, J., van der Schaar, M., Zhang, Y.: Rating protocols for online communities. ACM Transactions on Economics and Computation (to appear) (2010). http://arxiv.org/abs/1101.0272
Podczeck, K.: On existence of rich Fubini extensions. Econ. Theory 45(1–2), 1–22 (2010)
Podczeck, K., Puzzello, D.: Independent random matching. Econ. Theory 50(1), 1–29 (2012)
Zhou, R.: Individual and aggregate real balances in a random-matching model. Int. Econ. Rev. 40(4), 1009–1038 (1999)
Appendix: Proofs
Proof of Theorem 1
We first estimate \(V(n+1) - V(n)\) (for \(n \ge 0\)), which is the loss from having one less token. To this end, fix an optimal Markov strategy \((\tau , \sigma )\). We define a history-dependent strategy \((\tau ^{\prime }, \sigma ^{\prime })\) and estimate the expected utility to an agent who begins with \(n\) tokens and follows \((\tau ^{\prime },\sigma ^{\prime })\); this is a lower bound on \(V(n)\). The strategy \((\tau ^{\prime }, \sigma ^{\prime })\) is most easily described in the following way: Begin by following the behavior prescribed by the strategy \((\tau , \sigma )\) but for an agent who holds one more token than is actually held, i.e., \((\tau ^{\prime },\sigma ^{\prime })(h) = (\tau , \sigma )(N(h)+1)\). If it never happens that the agent holds \(0\) tokens, requests service, and is matched with an agent who is willing to provide service, then continue in this way forever. If it does happen that the agent holds \(0\) tokens, requests service, and is matched with an agent who is willing to provide service, then service is not provided in that period (because the agent cannot pay) and after that period \((\tau ^{\prime },\sigma ^{\prime }) = (\tau , \sigma )\). In other words, the agent behaves “as if” he held one more token than actually held until the first time such behavior results in requesting service, being offered service, and being unable to pay for service; after that point, he reverts to \((\tau , \sigma )\). The point to keep in mind is that if such a moment of deviation occurs, then an agent with one more token would hold exactly \(1\) token, would request and receive service, and in the next period would have \(0\) tokens, so reverting to \((\tau , \sigma )\) is possible.
Beginning with \(n\) tokens and following the strategy \((\tau ^{\prime }, \sigma ^{\prime })\) yields the same string of payoffs as beginning with \(n+1\) tokens and following the strategy \((\tau , \sigma )\) except in the single period in which deviation occurs; in that period, the expected loss of utility is at most \(b\rho \). Hence, beginning with \(n\) tokens and following \((\tau ^{\prime }, \sigma ^{\prime })\) yields expected utility at least \(V(n+1) - b\rho \), and since this is a lower bound on \(V(n)\), we obtain \(V(n+1) - V(n) \le b\rho < b < b/\beta \). But this is exactly the incentive compatibility condition that guarantees that an agent strictly prefers to request service when holding \(n+1\) tokens, so the proof is complete. \(\square \)
At this point, it is convenient to collect some notation and isolate two technical results. Fix \(\rho , b, c, \mu ,\nu \) and consider a Markov strategy \(\sigma \). For each \(k\), let \(V_\sigma (k, \beta )\) be the value of following \(\sigma \) when the initial token holding is \(k\) and the discount factor is \(\beta \). As with the optimal value function \(V\) defined in the text, the value function \(V_\sigma \) can be defined by a recursive system of equations:
From the value function, we define the marginal utilities
$$\begin{aligned} M_\sigma (k, \beta ) = V_\sigma (k+1, \beta ) - V_\sigma (k, \beta ) \end{aligned}$$
If \(\beta \) is fixed/understood, we simplify notation by writing \(V_\sigma (k) = V_\sigma (k, \beta )\) and \(M_\sigma (k) = M_\sigma (k, \beta )\).
It is also convenient to introduce some auxiliary parameters:
We note the signs of these parameters and various combinations:
Using these auxiliary parameters and the recursion relations for \(V_\sigma \) and performing some simple algebraic manipulations yields a useful matrix representation involving marginals that we will use frequently:
In short form, we write this matrix representation as
Lemma 1
Fix \(\rho , b, c, \mu ,\nu \) and \( \beta \). Let \(\sigma \) be a Markov strategy with the property that
Then,
-
(i)
if \(0 \le k < K_2\) then \(M_\sigma (k) > 0\)
-
(ii)
in the range \(0\le k <K_1\), \(M_\sigma \) is either increasing, decreasing, or first decreasing and then increasing
-
(iii)
if \(M_\sigma (K_1-1)\ge c/\beta \) then
$$\begin{aligned}M_\sigma (0)>M_\sigma (1)> \cdots > M_\sigma (K_1-2)\ge M_\sigma (K_1-1)\ge c/\beta \end{aligned}$$
Proof
We first consider the token holding levels \(0 \le k < K_1\). We make use of the matrix representation (13).
To prove (i), we first show that \(M_\sigma (k) > 0 \) for \(0 \le k < K_1\). If \(K_1 < 3\), this follows by simply solving the matrix representation, so we henceforth assume \(K_1 \ge 3\). If there exists a token holding level \(k^*\) with \( 0 \le k^* < K_1\) such that \(M_\sigma (k^*) \le 0 \), then one of the following must hold: either (a) there are two consecutive such token holding levels, or (b) the marginal payoffs of the neighboring token holding levels are both positive. We consider these cases separately.
(a) In this case, there exists \(k^*\) such that \(M_\sigma (k^*), M_\sigma (k^*+1)\) are both non-positive. Of these, one is at least as big; say \(M_\sigma (k^*) \ge M_\sigma (k^*+1)\). From the identities above we see that
$$\begin{aligned} M_\sigma (k^*+2)&= \frac{\phi _lM_\sigma (k^*) + \phi _cM_\sigma (k^*+1)}{-\phi _r} \\&\le \frac{(\phi _l + \phi _c)M_\sigma (k^*+1)}{-\phi _r} \\&\le M_\sigma (k^*+1) \end{aligned}$$Proceeding inductively, it follows that
$$\begin{aligned} 0 \ge M_\sigma (k^*) \ge M_\sigma (k^*+1) \ge \cdots \ge M_\sigma (K_1-1) \end{aligned}$$Moreover,
$$\begin{aligned} \phi _c M_\sigma (K_1-1)&= (1-\mu )\rho c - \phi _l M_\sigma (K_1-2)\\&> - \phi _l M_\sigma (K_1-2) \\&\ge -\phi _l M_\sigma (K_1-1) \end{aligned}$$This requires \(\phi _c < -\phi _l\), which contradicts the sign relations (12). The argument when \(M_\sigma (k^*+1) \ge M_\sigma (k^*)\) is similar and is left to the reader.
(b) In this case, there exists \(k^*\) such that \(M_\sigma (k^*-1)>0\), \(M_\sigma (k^*)\le 0\), \(M_\sigma (k^*+1) >0\). This entails
$$\begin{aligned} \phi _lM_\sigma (k^*-1) + \phi _c M_\sigma (k^*) + \phi _r M_\sigma (k^*+1) < 0 \end{aligned}$$

(15)

which again contradicts the sign relations (12).
From the above, we conclude \(M_\sigma (k) > 0\) for \(0 < k < K_1\). To see that \(M_{\sigma }(0) > 0\) note that
Therefore, \(M_\sigma (1) < M_\sigma (0)\), so \(M_{\sigma }(0) > 0\), as desired.
Finally, to see that \(M_\sigma (k) > 0\) for \(K_1 \le k < K_2\), apply the recursion equations (9) to obtain
We know that \(M_\sigma (K_1-1) > 0\) so the sign relations (12) imply that \(M_\sigma (K_1) > 0\) as well. Now it follows inductively that \(M_\sigma (k) > 0\) for \(K_1\le k < K_2\). This completes the proof of (i).
To prove (ii), it is enough to show that \(M_\sigma \) has no local maximum for \(0 < k < K_1\). If \(M_\sigma \) had a local maximum \(k^*\) in this range, we would have \(M_\sigma (k^*) \ge M_\sigma (k^*-1)\) and \(M_\sigma (k^*) \ge M_\sigma (k^*+1)\). However, algebraic manipulation yields the inequalities
which is a contradiction. This establishes (ii).
To prove (iii), first manipulate the matrix identity (13) to obtain:
In view of (ii), the marginal payoffs are decreasing, so this establishes (iii).
Lemma 2
Fix \(\rho , b, c\) and a threshold protocol \(\Pi = (\alpha , \sigma _K)\) with corresponding \(\mu ^\Pi , \nu ^\Pi \). The marginal utility \(M_{\sigma _K}(k, \beta )\) is strictly increasing in the discount factor \(\beta \), i.e., if \(0 \le \beta _1 < \beta _2 < 1\), then,
Proof
To economize slightly on notation, we write \(\sigma = {\sigma _K}\). We present the proof in three steps.
In Step 1, we prove that if there exist \(0<K_1\le K_2 < K-1\) such that \(M_\sigma (k,\beta _1) \ge M_\sigma (k, \beta _2)\) for all \(k \in [K_1, K_2]\), then at least one of the following is true: \(M_\sigma (K_1 - 1,\beta _1) \ge M_\sigma (K_1-1, \beta _2)\) or \(M_\sigma (K_2 + 1,\beta _1) \ge M_\sigma (K_2 + 1, \beta _2)\).
In Step 2, we prove that if there exists a \(k^* \in [0,K-1]\) such that \(M_{\sigma }(k^*, \beta _1) \ge M_{\sigma }(k^*, \beta _2)\), then \(M_{\sigma }(k, \beta _1) \ge M_{\sigma }(k, \beta _2)\) for all \(k\in [0,K-1]\). Step 2 uses the result of Step 1.
In Step 3, we rule out the possibility that \(M_{\sigma }(k, \beta _1) \ge M_{\sigma }(k, \beta _2)\) for all \(k\in [0,K-1]\).
Steps 2 and 3 together yield a contradiction; therefore, \(M_{\sigma }(k, \beta _1) < M_{\sigma }(k, \beta _2)\) for all \(k\in [0,K-1]\).
Step 1 We assert that if there are indices \(0<K_1\le K_2 < K-1\) such that \(M_\sigma (k,\beta _1) \ge M_\sigma (k, \beta _2)\) for all \(K_1 \le k \le K_2\) then at least one of the following must hold:
(A) \(M_\sigma (K_1 - 1,\beta _1) \ge M_\sigma (K_1-1, \beta _2)\), or
(B) \(M_\sigma (K_2 + 1,\beta _1) \ge M_\sigma (K_2 + 1, \beta _2)\).
To see this, note that simple manipulations of the matrix representation (13) yield
If \(K_2 = K_1\) then
$$\begin{aligned}&(1-\nu )\rho M_\sigma (K_1 - 1, \beta ) + (1 - \mu )\rho M_\sigma (K_2 + 1, \beta ) \\&\quad = (1/\beta - 1 + ((1-\nu ) + (1-\mu ))\rho ) M_\sigma (K_1, \beta ) \end{aligned}$$
If \(K_2 > K_1\) then
$$\begin{aligned}&(1-\nu )\rho M_\sigma (K_1 - 1, \beta ) + (1 - \mu )\rho M_\sigma (K_2 + 1, \beta )\\&\quad = (1/\beta -1 + (1-\mu )\rho ) M_\sigma (K_1, \beta ) \\&\qquad + (1/\beta - 1) [M_\sigma (K_1+1, \beta ) + \cdots + M_\sigma (K_2-1, \beta )]\\&\qquad + (1/\beta -1 + (1-\nu )\rho ) M_\sigma (K_2, \beta ) \end{aligned}$$
Since \(\beta _1 < \beta _2\) and we have assumed \(M_\sigma (k,\beta _1) \ge M_\sigma (k, \beta _2)\) for all \(K_1 \le k \le K_2\), in each of the cases above the right-hand side is larger when \(\beta = \beta _1\) than when \(\beta = \beta _2\). Because the terms on the left-hand side are positive, it follows that at least one of (A), (B) must hold, as asserted.
Step 2 We assert first that if there is a \(k^*\), \(0 \le k^* \le K-1\), such that \(M_{\sigma }(k^*, \beta _1) \ge M_{\sigma }(k^*, \beta _2)\), then at least one of the following must hold:
(C) there exists some \( K_3\), \(0 \le K_3 \le K-1\), such that \(M_{\sigma }(k, \beta _1) \ge M_{\sigma }(k, \beta _2)\) for all \(k\), \(0 \le k \le K_3\);
(D) there exists some \( K_4\), \(0 \le K_4 \le K-1\), such that \(M_{\sigma }(k, \beta _1) \ge M_{\sigma }(k, \beta _2)\) for all \(k\), \(K_4 \le k \le K-1\).
To see this, note first that if \(k^*=0\) satisfies the hypothesis, then (C) holds with \(K_3 = 0\), and that if \(k^*= K-1\) satisfies the hypothesis, then (D) holds with \(K_4 = K-1\). Hence, it suffices to consider a \(k^*\), \(0 < k^* < K-1\), that satisfies the hypothesis. We now make use of Step 1. Set \(K_1 = K_2 = k^*\). Applying Step 1 once increases the token holding interval where \(M_\sigma (k,\beta _1) \ge M_\sigma (k, \beta _2)\) by 1. Let \(K_1\) and \(K_2\) be the new end points of the interval and apply Step 1 again. Continuing in this way, we come eventually to a point where either \(K_1 = 0\) or \(K_2 = K-1\). If \(K_1 = 0\), set \(K_3 = K_2\) and note that (C) holds. If \(K_2 = K-1\), set \(K_4 = K_1\) and note that (D) holds.
We now show that either (C) or (D) leads to the desired conclusion. Consider (C) first. Using the matrix representation (13), we obtain
The right-hand side is bigger when \(\beta = \beta _1\) than when \(\beta = \beta _2\). Therefore, \(M_\sigma (K_1+1,\beta _1) \ge M_\sigma (K_1+1,\beta _2)\). By induction, \(M_\sigma (k, \beta _1) \ge M_\sigma (k, \beta _2)\) for all \(k\), \( 0 \le k \le K-1\).
Now consider (D). Using the matrix representation (13), we obtain
The right-hand side is bigger when \(\beta = \beta _1\) than when \(\beta = \beta _2\). Therefore, \(M_\sigma (K_2-1,\beta _1) \ge M_\sigma (K_2-1,\beta _2)\). By induction, \(M_\sigma (k, \beta _1) \ge M_\sigma (k, \beta _2)\) for all \(k\), \( 0 \le k \le K-1\).
Taking (C) and (D) together completes Step 2.
Step 3 Using the matrix representation (13), we obtain
In view of Step 2, the left-hand side is bigger when \(\beta = \beta _1\) than when \(\beta = \beta _2\). However, the right-hand side is independent of \(\beta \), so this is a contradiction. We conclude that \(M_\sigma (k, \beta _1) < M_\sigma (k, \beta _2)\) for every \(k\), \(0 \le k \le K-1\). \(\square \)
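The monotonicity in Lemma 2 can be checked numerically. The following sketch is illustrative only: it assumes a concrete parameterization of the matching process (an agent with \(k>0\) tokens obtains service with probability \(\rho (1-\nu )\), gaining \(b\) and spending a token; an agent with \(k<K\) is asked to serve with probability \(\rho (1-\mu )\), paying \(c\) and gaining a token), computes \(V_{\sigma _K}\) by value iteration, and forms the marginals \(M_{\sigma _K}(k,\beta ) = V_{\sigma _K}(k+1,\beta ) - V_{\sigma _K}(k,\beta )\); all numerical parameter values are made up for the illustration.

```python
# Numerical check of Lemma 2: marginal utilities increase in the discount factor.
# Assumed one-period dynamics for the threshold strategy sigma_K (illustrative):
#   with prob q = rho*(1-nu)  an agent with k > 0 buys service: reward b, k -> k-1
#   with prob p = rho*(1-mu)  an agent with k < K provides service: reward -c, k -> k+1
#   otherwise nothing happens; continuation values are discounted by beta.

def marginals(K, beta, b=3.0, c=1.0, rho=0.3, mu=0.2, nu=0.2, iters=2000):
    """Return [M(0), ..., M(K-1)] for the threshold strategy sigma_K."""
    q, p = rho * (1 - nu), rho * (1 - mu)
    V = [0.0] * (K + 1)
    for _ in range(iters):  # value iteration (a beta-contraction)
        V = [q * ((b + beta * V[k - 1]) if k > 0 else beta * V[k])
             + p * ((-c + beta * V[k + 1]) if k < K else beta * V[k])
             + (1 - p - q) * beta * V[k]
             for k in range(K + 1)]
    return [V[k + 1] - V[k] for k in range(K)]

M_low = marginals(K=4, beta=0.6)
M_high = marginals(K=4, beta=0.9)
assert all(m > 0 for m in M_low)                  # Lemma 1(i): positive marginals
assert all(l < h for l, h in zip(M_low, M_high))  # Lemma 2: increasing in beta
```

Under these assumptions the computed marginals are strictly positive and componentwise larger at the higher discount factor, as Lemmas 1(i) and 2 predict.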
Proof of Theorem 2
Fix \(\beta \). The Markov strategy \(\sigma \) is optimal if and only if it satisfies the Bellman optimality conditions:
If \(\sigma \) is not a threshold strategy, there must exist integers \(K_1 < K_2\) such that
We will show that the Bellman optimality conditions are violated at \(K_2\) and \(K_2-1\). To this end, let \(K_3\) be the smallest integer greater than \(K_2\) for which \(\sigma (K_3) = 0\). (Such an integer exists because it cannot be optimal to serve when the token holding is sufficiently high.) Thus, \(\sigma (k) = 1\) for \(K_2 \le k < K_3\), and \(M_\sigma (K_3-1) \ge c/\beta \). Following \(\sigma \),
An inductive argument shows that \(M_\sigma (K_2) > M_\sigma (K_2+1) \ge c/\beta \). According to the recursion equations (9), we have
which is a contradiction. We conclude that a non-threshold strategy cannot be optimal; equivalently, only threshold strategies can be optimal strategies.
It remains to show that the only possible optimal threshold strategies have adjacent thresholds. Consider first two threshold strategies with consecutive thresholds \(K\) and \(K+1\). We assert that
We prove direction “\(\Rightarrow \)”; the “\(\Leftarrow \)” direction is similar and left to the reader. Suppose instead that \(M_{\sigma _{K+1}}(K) \ge c/\beta \). It follows that \(-\phi _r M_{\sigma _{K+1}}(K) \ge (1-\mu )\rho c\). If we delete the last line in the matrix equation (13) for \(\sigma _{K+1}\) and move \(M_{\sigma _{K+1}}(K)\) to the right-hand side, we get another matrix equation
where \(\tilde{\mathbf{u}} = ((1-\nu )\rho b, 0,\ldots ,0,-\phi _r M_{\sigma _{K+1}}(K))^\mathrm{T}\). For the threshold \(K\), \(\Phi _{K\times K} \mathbf{M}_{\sigma _{K}} = \mathbf{u}\). Therefore,
Lemma 1 guarantees that \(\tilde{\mathbf{u}} - \mathbf{u} \ge 0\), so \(\mathbf{M}_{\sigma _{K+1}} \ge \mathbf{M}_{\sigma _{K}}\). That is, \(M_{\sigma _{K+1}}(k) \ge M_{\sigma _{K}}(k)\) for \(0\le k \le K-1\). Because \(M_{\sigma _{K+1}}(K) \ge c/\beta > M_{\sigma _{K}}(K)\), it follows that \(M_{\sigma _{K+1}}(k) \ge M_{\sigma _{K}}(k)\) for \( 0\le k \le K\). According to the matrix equation, the following identity holds for both \(\sigma = \sigma _K\) and \(\sigma = \sigma _{K+1}\):
This is a contradiction so we have established the direction \(\Rightarrow \), as desired.
It follows directly from the matrix identity that
Hence
We now assert that if \(\tilde{K} > K\) then
We have already shown that this is true when \(\tilde{K} = K+1\); i.e., \(M_{\sigma _{K+1}}(K) < c/\beta \). Consider \(\tilde{K} = K+2\). If \(M_{\sigma _{K+2}}(K+1) \ge c/\beta \), then (27) implies that \(M_{\sigma _{K+1}}(K+1) \ge c/\beta \). Therefore, \(M_{\sigma _{K+1}}(K+1) > M_{\sigma _{K+1}}(K)\), which contradicts \(M_{\sigma _{K+1}}(K+1) < M_{\sigma _{K+1}}(K)\). Proceeding inductively, we obtain the assertion (28).
A similar argument (which we omit) shows that:
Finally, suppose \(\sigma _K\) is an optimal threshold strategy. Then, \(M_{\sigma _K}(K-1) \ge c/\beta \) and \(M_{\sigma _K}(K) \le c/\beta \). If both inequalities are strict, (28) and (29) guarantee that \(\sigma _K\) is the only optimal threshold strategy. If \(M_{\sigma _K}(K-1) = c/\beta \) (and hence, \(M_{\sigma _K}(K) < c/\beta \)), only \(\sigma _K\) and \(\sigma _{K-1}\) are optimal threshold strategies. If \(M_{\sigma _K}(K) = c/\beta \) (and hence, \(M_{\sigma _K}(K-1) > c/\beta \)), only \(\sigma _K\) and \(\sigma _{K+1}\) are optimal threshold strategies. This completes the proof. \(\square \)
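Theorem 2's conclusion—that only threshold strategies can be optimal—can be illustrated numerically. The sketch below is illustrative only: it assumes a concrete parameterization of the matching process (an agent with \(k>0\) tokens obtains service with probability \(\rho (1-\nu )\), gaining \(b\) and spending a token; an agent is asked to serve with probability \(\rho (1-\mu )\) and chooses whether to do so), and all parameter values are made up. The token holding is capped at \(N\) purely to make the computation finite; the cap must not bind.

```python
# Numerical illustration of Theorem 2: the optimal serving rule is a threshold rule.
# Assumed one-period dynamics (illustrative parameters):
#   with prob q = rho*(1-nu)  an agent with k > 0 buys service: reward b, k -> k-1
#   with prob p = rho*(1-mu)  the agent is asked to serve and CHOOSES
#                             max(refuse, serve), where serving pays -c and k -> k+1
#   otherwise nothing happens; continuation values are discounted by beta.

def optimal_serve_set(b=3.0, c=1.0, rho=0.3, mu=0.2, nu=0.2, beta=0.9, N=20):
    q, p = rho * (1 - nu), rho * (1 - mu)
    V = [0.0] * (N + 1)
    for _ in range(2000):  # value iteration on the Bellman optimality equation
        V = [q * ((b + beta * V[k - 1]) if k > 0 else beta * V[k])
             + p * (max(beta * V[k], -c + beta * V[k + 1]) if k < N else beta * V[k])
             + (1 - p - q) * beta * V[k]
             for k in range(N + 1)]
    # holdings at which serving is strictly optimal under the converged values
    return [k for k in range(N) if -c + beta * V[k + 1] > beta * V[k]]

serve = optimal_serve_set()
K_star = len(serve)
assert serve == list(range(K_star))  # serve exactly at holdings below a threshold
assert 0 < K_star < 20               # threshold is interior: the cap does not bind
```

Under these assumptions the computed best response serves exactly at holdings below a single threshold, as the theorem predicts.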
Proof of Theorem 3
This follows immediately from the representation of \(\eta _+\) and the definition of invariance. \(\square \)
Proof of Theorem 4
Given a protocol \(\Pi = (\alpha , \sigma )\), let \(\eta ^\Pi \) be the unique invariant distribution; let \(\mu ^\Pi \) be the fraction of agents who have no tokens and \(\nu ^\Pi \) the fraction of agents who do not provide service; these depend only on \(\Pi \) and not on the population parameters. If \(\sigma = \sum \gamma (K) \sigma _K\) is a best response given the population parameters and \(\mu ^\Pi , \nu ^\Pi \), \(\gamma \) must put strictly positive weight only on threshold strategies \(\sigma _K\) that are pure best responses. In view of Theorem 2, there are at most two threshold strategies that are pure best responses and they are at adjacent thresholds. That is, \(\sigma \) is either a pure threshold strategy or a mixture of two adjacent threshold strategies, as asserted. \(\square \)
Proof of Theorem 5
Suppose to the contrary that \(\Pi = (\alpha , \sigma )\) is a robust equilibrium protocol and that \(\sigma = \sum \gamma (K) \sigma _K\) is a proper mixed strategy, so that \(\gamma (K) > 0\) for at least two values of the threshold \(K\). Let \(\mu ^\Pi \) be the fraction of agents who have no tokens and \(\nu ^\Pi \) the fraction of agents who do not provide service; these depend only on \(\Pi \) and not on the population parameters. In view of Theorem 4, \(\sigma \) must assign positive probability only to two adjacent threshold strategies; say \(\sigma = \gamma (K)\sigma _K + \gamma (K+1)\sigma _{K+1}\) with \(\gamma (K) > 0\) and \(\gamma (K+1) > 0\), and both \(\sigma _K, \sigma _{K+1}\) must be best responses. Because \(\sigma _K(K+1) = 0\) and \(\sigma _{K+1}(K+1) = 1\), equations (8), (9) (which provide necessary and sufficient conditions for optimality in terms of the true value function) entail that
Hence, \(-c + \beta V(K+1) = \beta V(K)\). Because \(\sigma _K\) is a best response, the value function \(V_{\sigma _K}\) must coincide with the true value function \(V\). Hence, an agent following \(\sigma _K\) must be indifferent to providing service when holding \(K\) tokens. However, if \(\beta \) increases slightly, \(M_{\sigma _K}\) also increases (Lemma 2), whence an agent following \(\sigma _K\) must strictly prefer to provide service. In other words, when \(\beta \) increases slightly, \(\sigma _K\) can no longer be a best response and \(\Pi \) can no longer be an equilibrium protocol. This is a contradiction, so we conclude that a robust equilibrium protocol \(\Pi \) cannot involve proper mixed strategies, as asserted. \(\square \)
Proof of Theorem 6
We divide the proof of (i) into several steps.
Step 1 We first prove there exists \(\beta ^L \in [0, 1)\) such that
$$\begin{aligned} M_{\sigma }(K-1, \beta )&< \frac{c}{\beta } \text{ for } \beta < \beta ^L \\ M_{\sigma }(K-1, \beta ^L)&= \frac{c}{\beta ^L} \\ M_{\sigma }(K-1, \beta )&> \frac{c}{\beta } \text{ for } \beta > \beta ^L \end{aligned}$$To see this, define the auxiliary function
$$\begin{aligned} F(\beta ) = M_{\sigma }(K-1,\beta ) - \frac{c}{\beta } \end{aligned}$$\(F\) is evidently continuous. Lemma 2 guarantees that \(M_\sigma (K-1,\beta )\) is strictly increasing in \(\beta \), so \(F(\beta )\) is strictly increasing in \(\beta \) as well. We show that \(F(1) > 0\) and \(\lim _{\beta \rightarrow 0}F(\beta ) < 0\) and then apply the intermediate value theorem to find \(\beta ^L\).
To see that \(F(1) > 0\), note first that at \(\beta = 1\) the coefficients in the left-hand matrix of (13) are simply \(\phi _l = -\rho (1-\nu )\), \(\phi _c = \rho (1-\nu + 1-\mu )\) and \(\phi _r = -\rho (1-\mu )\). We split the vector \(\mathbf{M}_{\sigma _K}\) into two parts. To do this, write
$$\begin{aligned} \mathbf{u}^{\prime }&= (\rho (1-\nu )c \quad 0 \quad \ldots \quad 0 \quad \rho (1-\mu )c)^T\nonumber \\ \mathbf{u}^{\prime \prime }&= (\rho (1-\nu )(b-c) \quad 0 \quad \ldots \quad 0 \quad 0 )^T \end{aligned}$$

(30)

and define \(\mathbf{M ^{\prime }}_{\sigma _K}, \mathbf{M ^{\prime \prime }}_{\sigma _K}\) to be the solutions to the equations
$$\begin{aligned} \Phi \mathbf{M}^{\prime }_{\sigma _K} = \mathbf{u}^{\prime }, ~~~\Phi \mathbf{M^{\prime \prime }}_{\sigma _K} = \mathbf{u}^{\prime \prime } \end{aligned}$$

(31)

Note that \(\mathbf{M}_{\sigma _K} = \mathbf{M}^{\prime }_{\sigma _K} + \mathbf{M}^{\prime \prime }_{\sigma _K}\), where \(\mathbf{M}_{\sigma _K}\) is the solution to (13). It is easy to check that \(\mathbf{M}^{\prime }_{\sigma _K}\) is a constant vector: \(M^{\prime }_{\sigma _K}(k) = c\) for \(0 \le k \le K-1\). Lemma 1 guarantees that the entries of \(\mathbf{M}^{\prime \prime }_{\sigma _K}\) are strictly positive: \(M^{\prime \prime }_{\sigma _K}(k) > 0\) for \(0 \le k \le K-1\). Hence the entries of \(\mathbf{M}_{\sigma _K}\) are strictly greater than \(c\): \(M_{\sigma _K}(k) > c\) for \(0 \le k \le K-1\). In particular, \(F(1) > 0\).
To see that \(\lim _{\beta \rightarrow 0}F(\beta ) < 0\), suppose not. Because \(F\) is strictly increasing, this means \(F(\beta ) \ge 0\) for every \(\beta \in (0,1]\), which entails that \(M_{\sigma }(k) \ge \frac{c}{\beta }\) for \(0 \le k < K-1\). Summing the rows in (13) yields:
$$\begin{aligned} \rho (1-\nu )b + \rho (1-\mu )c > K(1-\beta )\frac{c}{\beta } = \frac{Kc}{\beta } - Kc \end{aligned}$$

(32)

Note that \(Kc/\beta \) blows up as \(\beta \rightarrow 0\), so this is impossible. We conclude that \(\lim _{\beta \rightarrow 0}F(\beta ) < 0\), as asserted. Because \(F\) is strictly increasing, the intermediate value theorem guarantees that we can find a unique \(\beta ^L\) such that
$$\begin{aligned} F(\beta )&< 0 \quad \text{ for } \beta <\beta ^L \\ F(\beta ^L)&= 0 \\ F(\beta )&> 0 \quad \text{ for } \beta > \beta ^L \end{aligned}$$The definition of \(F\) yields the desired property of \(\beta ^L\)
Step 2 Next we prove there exists \(\beta ^H \in (\beta ^L, 1)\) such that
$$\begin{aligned} M_{\sigma _K}(K-1, \beta )&< \frac{\phi _c + \phi _r}{-\phi _l}\frac{c}{\beta } \ \ \text{ for } \beta < \beta ^H\\ M_{\sigma _K}(K-1, \beta ^H)&= \frac{\phi _c + \phi _r}{-\phi _l}\frac{c}{\beta ^H} \\ M_{\sigma _K}(K-1, \beta )&> \frac{\phi _c + \phi _r}{-\phi _l}\frac{c}{\beta } \ \ \text{ for } \beta > \beta ^H \end{aligned}$$To see this, note first that \(\frac{\phi _c + \phi _r}{-\phi _l}\frac{c}{\beta } = \left[1 - \frac{1}{\rho (1-\nu )}+\frac{1}{\rho (1-\nu )\beta }\right] \frac{c}{\beta }\) and define another auxiliary function:
$$\begin{aligned} G(\beta ) = M_{\sigma _K}(K-1, \beta ) - \left(1 - \frac{1}{\rho (1-\nu )}+\frac{1}{\rho (1-\nu )\beta }\right)\frac{c}{\beta } \end{aligned}$$\(G\) is continuous and increasing. From Step 1 it follows that \(M_{\sigma _K}(K-1, 1) > c\), so \(G(1) = M_{\sigma _K}(K-1, 1) - c > 0\). It also follows that \(M_{\sigma _K}(K-1, \beta ^L) = \frac{c}{\beta ^L}\); because \(\left(1 - \frac{1}{\rho (1-\nu )}+\frac{1}{\rho (1-\nu )\beta ^L}\right)\frac{c}{\beta ^L} > \frac{c}{\beta ^L}\), we conclude that \(G(\beta ^L) < 0\). Because \(G\) is continuous and increasing, there is a unique \(\beta ^H \in (\beta ^L, 1)\) such that
$$\begin{aligned}&G(\beta ) < 0 \; \text{ for } \; \beta <\beta ^H \\&G(\beta ^H) = 0 \\&G(\beta ) > 0 \; \text{ for } \; \beta > \beta ^H \end{aligned}$$
Step 3
The definitions of \(F, G\) imply that in order for \(\Pi \) to be an equilibrium protocol when the discount factor is \(\beta \), it is necessary and sufficient that \(F(\beta ) \ge 0\) and \(G(\beta ) \le 0\). Hence, \(\Pi \) is an equilibrium protocol when the discount factor is \(\beta \) exactly for \(\beta \in [\beta ^L, \beta ^H]\).
Because \(F, G\) are continuous in all their arguments and strictly increasing, \(\beta ^L, \beta ^H\), which are the zeroes of \(F, G\), are continuous functions of the parameters as well. This completes the proof of (i). The proof of (ii) is similar and left to the reader. \(\square \)
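Step 1 can be illustrated numerically by solving the marginal-payoff system directly and bisecting on \(F\). The sketch below is an assumed rendering of the tridiagonal matrix representation (13), with \(\phi _l = -\rho (1-\nu )\), \(\phi _r = -\rho (1-\mu )\), \(\phi _c = 1/\beta - 1 + \rho (2-\mu -\nu )\); the parameter values are illustrative, not taken from the paper.

```python
# Numerical illustration of Step 1: beta^L is the unique zero of
#   F(beta) = M_{sigma_K}(K-1, beta) - c/beta.
# Assumed tridiagonal system for the marginals (illustrative parameters):
#   diagonal phi_c = 1/beta - 1 + p + q, sub-diagonal -q, super-diagonal -p,
#   right-hand side (q*b/beta, 0, ..., 0, p*c/beta).

def F(beta, K=3, b=3.0, c=1.0, rho=0.3, mu=0.25, nu=0.25):
    q, p = rho * (1 - nu), rho * (1 - mu)
    d = 1.0 / beta - 1.0 + p + q                       # phi_c (diagonal)
    rhs = [q * b / beta] + [0.0] * (K - 2) + [p * c / beta]
    # Thomas algorithm for the tridiagonal system (sub = -q, sup = -p).
    cp, dp = [0.0] * K, [0.0] * K
    cp[0], dp[0] = -p / d, rhs[0] / d
    for i in range(1, K):
        m = d - (-q) * cp[i - 1]
        cp[i] = -p / m
        dp[i] = (rhs[i] - (-q) * dp[i - 1]) / m
    M = [0.0] * K
    M[K - 1] = dp[K - 1]
    for i in range(K - 2, -1, -1):                     # back substitution
        M[i] = dp[i] - cp[i] * M[i + 1]
    return M[K - 1] - c / beta

assert F(0.2) < 0 < F(0.999)   # sign change: beta^L lies in (0.2, 0.999)
lo, hi = 0.2, 0.999
for _ in range(80):            # bisection; F is continuous and increasing
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
beta_L = (lo + hi) / 2
assert abs(F(beta_L)) < 1e-6   # M(K-1, beta^L) = c / beta^L
```

Under these assumptions \(F\) is increasing with a single sign change, so the bisection isolates the lower end \(\beta ^L\) of the sustainable range.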
Proof of Theorem 7
We first consider (i). Fix \(r\). Consider the two protocols \(\Pi _{K} = (K/2, \sigma _K)\) and \(\Pi _{K+1} = ((K+1)/2, \sigma _{K+1})\) and the corresponding intervals \([\beta ^L_1, \beta ^H_1]\) and \([\beta ^L_2, \beta ^H_2]\) of discount factors that sustain equilibrium. We need to show that
$$\begin{aligned} \beta ^L_1 < \beta ^L_2 < \beta ^H_1 < \beta ^H_2 \end{aligned}$$(The sustainable ranges for two consecutive threshold protocols overlap but are not nested.) There are three inequalities to be established; we carry out the analyses in (A), (B), and (C) below.
(A) To prove \(\beta ^L_2 > \beta ^L_1\), write \(\beta = \beta ^L_1\). We show that \(M_{\sigma _{K+1}}(K) < \frac{c}{\beta }\). To see this, suppose not; i.e., \(M_{\sigma _{K+1}}(K) \ge \frac{c}{\beta }\). The construction of \(\beta ^L_1\) guarantees that \(M_{\sigma _{K}}(K-1)=c/\beta \). We will use this inequality and equality to show that all the marginal payoffs of \(\Pi _{K+1}\) are so large that they violate the restrictions imposed by the bounded benefit \(b\) and cost \(c\).
To simplify the notation, let \(\omega _X = \frac{X+1}{X}(\frac{1}{\beta }-1)\frac{1}{\rho }\). Note \(\omega _{K+1} < \omega _{K}\). Then the matrix identity (13) becomes:
Suppose \(M_{\sigma _{K+1}}(K) \ge M_{\sigma _{K}}(K-1) = \frac{c}{\beta }\). We investigate the relation between \(M_{\sigma _{K+1}}(K-1)\) and \(M_{\sigma _{K}}(K-2)\). Using the matrix identity,
Moreover, if \( 2\le k\le K-1\) then
By induction,
Next we prove \(M_{\sigma _{K+1}}(0) \ge M_{\sigma _{K}}(0)\). This is relatively easy: if \(M_{\sigma _{K+1}}(0) < M_{\sigma _{K}}(0)\), then using the marginal payoff matrix and induction, \(M_{\sigma _{K+1}}(K-1) < M_{\sigma _{K}}(K-1) = \frac{c}{\beta }\). This contradicts \(M_{\sigma _{K+1}}(K-1) > M_{\sigma _{K+1}}(K) \ge \frac{c}{\beta }\). Therefore, \(M_{\sigma _{K+1}}(0) \ge M_{\sigma _{K}}(0)\).
The marginal payoffs are bounded as follows,
However, since
and \(M_{\sigma _{K+1}}(0) + M_{\sigma _{K+1}}(K) > M_{\sigma _{K}}(0) + M_{\sigma _{K}}(K-1)\), we obtain a contradiction. Therefore, for \(\beta = \beta _1^L\), \(M_{\sigma _{K+1}}(K) < \frac{c}{\beta }\), which means \(\beta _2^L > \beta _1^L\). This completes (A).
(B) To prove \(\beta ^H_2 > \beta ^H_1\), write \(\beta = \beta _1^H\); we must show that the protocol \(\Pi _{K+1}\) has \(M_{\sigma _{K+1}}(K+1) < c/\beta \). We argue by contradiction: suppose \(M_{\sigma _{K+1}}(K+1) \ge c/\beta \); we then show that all the marginal payoffs of \(\Pi _{K+1}\) are so large that they violate the restrictions imposed by the bounded benefit \(b\) and cost \(c\).
Suppose \(M_{\sigma _{K+1}}(K+1) \ge M_{\sigma _{K}}(K) = c/\beta \). Arguing from the matrix equation as in part (A), induction yields
Also \(M_{\sigma _{K+1}}(0) \ge M_{\sigma _K}(0)\). The marginal payoffs are bounded as follows,
However, since
and \(M_{\sigma _{K+1}}(0) + M_{\sigma _{K+1}}(K+1) > M_{\sigma _{K}}(0) + M_{\sigma _{K}}(K)\), we obtain a contradiction. Therefore, for \(\beta = \beta _1^H\), \(M_{\sigma _{K+1}}(K+1) < \frac{c}{\beta }\), which means \(\beta _2^H > \beta _1^H\). This completes part (B).
(C) To prove \(\beta ^L_2 < \beta ^H_1\), write \(\beta = \beta _1^H\). We show that \(M_{\sigma _{K+1}}(K) > M_{\sigma _{K}}(K) = \frac{c}{\beta }\). If not, then as in (A) we must have \(M_{\sigma _{K+1}}(K) \le M_{\sigma _{K}}(K) = \frac{c}{\beta }\); in that case we show \(M_{\sigma _{K+1}}(k) \le M_{\sigma _{K}}(k)\) for \(0 \le k \le K\). This will again violate the restrictions imposed by \(b\) and \(c\).
We extend the marginal payoff matrix in (33) from \(K\times K\) to \((K+1) \times (K+1)\) and incorporate \(M_{\sigma _{K}}(K)\). If \(M_{\sigma _{K}}(K) = \frac{c}{\beta }\), this extension does not change the solution for the marginal payoffs \(M_{\sigma _{K}}(k)\), \(k\in [0, K]\). Note that the new coefficient matrix has the same size as the coefficient matrix for \(\sigma _{K+1}\). Suppose \(M_{\sigma _{K+1}}(K) < M_{\sigma _{K}}(K) = \frac{c}{\beta }\). According to the matrix equation,
Moreover, for \(0 \le k \le K\), we have
By induction, \(M_{\sigma _{K+1}}(k) < M_{\sigma _{K}}(k)\) for \(0 \le k \le K\). However, since
Again, the left-hand side is bigger when \(X = K\) than when \(X = K+1\), which is a contradiction. This completes part (C).
Combining (A), (B) and (C) establishes the desired string of inequalities. The remaining conclusions of (i) follow immediately.
The argument for (ii) is very similar and left to the reader. \(\square \)
Proof of Theorem 8
Fix a protocol \(\Pi = (\alpha , \sigma _K)\) and let \(\eta ^\Pi \) be the corresponding invariant distribution. We first find a closed form expression for \(\eta ^\Pi \). To do this, plug the strategy \(\sigma _K\) into the characterization of the invariant distribution given in Theorem 3. A little algebra provides an identity involving \(\eta ^\Pi (0), \eta ^\Pi (1), \eta ^\Pi (K)\) and a simpler recursion relationship.
From this we can solve recursively, obtaining
for all \(k = 0, 1, \ldots ,K\). Note that the one remaining degree of freedom is pinned down by the requirement that the average token holding equal \(\alpha \).
We next solve the following simple maximization problem:
To solve this problem, set \(f(x) = x(1-x)^K\). A straightforward calculus exercise shows that if \( 0 \le x_1 \le \frac{1}{K+1} \le x_2 \le 1\) and \(f(x_1) = f(x_2)\) then:
(a) \(x_1 + x_2 \ge \frac{2}{K+1}\), with equality achieved at \(x_1 = x_2 = \frac{1}{K+1}\);
(b) \(x_1x_2 \le \frac{1}{(K+1)^2}\), with equality achieved at \(x_1 = x_2 = \frac{1}{K+1}\).
Putting (a) and (b) together shows that the optimal solution to the maximization problem (38) is to have \(x_1 = x_2 = \frac{1}{K+1}\) and \(\max E^* = \left(1 - \frac{1}{K+1}\right)^2\).
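The calculus facts (a) and (b) are easy to confirm numerically: fix \(x_1\) below the peak \(1/(K+1)\) of \(f(x) = x(1-x)^K\), recover the matching \(x_2\) above the peak by bisection, and check the inequalities together with the resulting bound \((1-x_1)(1-x_2) \le \left(1-\frac{1}{K+1}\right)^2\). A minimal sketch:

```python
# Numerical check of the calculus facts behind the maximization problem (38).
def check(K):
    f = lambda x: x * (1 - x) ** K
    peak = 1.0 / (K + 1)              # f is increasing on [0, peak], decreasing after
    for i in range(1, 50):
        x1 = peak * i / 50.0          # a point strictly below the peak
        lo, hi = peak, 1.0
        for _ in range(200):          # bisection: find x2 >= peak with f(x2) = f(x1)
            mid = (lo + hi) / 2
            lo, hi = (lo, mid) if f(mid) < f(x1) else (mid, hi)
        x2 = (lo + hi) / 2
        assert x1 + x2 >= 2 * peak - 1e-9          # fact (a)
        assert x1 * x2 <= peak ** 2 + 1e-9         # fact (b)
        # together: (1-x1)(1-x2) = 1 - (x1+x2) + x1*x2 <= (1 - peak)^2
        assert (1 - x1) * (1 - x2) <= (1 - peak) ** 2 + 1e-9

for K in (1, 2, 3, 5, 10):
    check(K)
```

The final assertion is exactly the efficiency bound used in the text, since \(E^*\) expands as \(1 - (x_1+x_2) + x_1x_2\).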
If we take \(x_1 = \mu ^\Pi , x_2 = \nu ^\Pi \) and apply the closed form solution (37) for the invariant distribution, we see that \(f(x_1) = f(x_2)\). By definition, \(\text{ Eff}(\Pi ) = E^*(x_1, x_2)\) so
On the other hand, if \(\alpha = K/2\) then the invariant distribution has \(\eta ^\Pi (k) = \frac{1}{K+1}\) for all \(k\) and
Taken together, parts (ii) and (iii) are proved.
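Parts (ii) and (iii) can also be illustrated by computing the invariant distribution of the single-agent token-holding chain. The dynamics below are an assumed rendering of the model: under \(\sigma _K\) an agent gains a token with probability \(\rho (1-\mu )\) when \(k<K\) and loses one with probability \(\rho (1-\nu )\) when \(k>0\), where \(\mu = \eta (0)\) and \(\nu = \eta (K)\); the guess \(\mu = \nu = \frac{1}{K+1}\) is self-consistent. The value of \(\rho \) and the efficiency formula \(\text{Eff} = (1-\mu )(1-\nu )\) are assumptions of the sketch.

```python
# Numerical check of parts (ii)-(iii): with token supply alpha = K/2, the
# invariant token distribution is uniform and efficiency attains (1 - 1/(K+1))^2.
# Assumed birth-death dynamics (illustrative rho): gain a token w.p.
# p = rho*(1-mu) when k < K, lose one w.p. q = rho*(1-nu) when k > 0.

def invariant(K, rho=0.3, steps=20000):
    mu = nu = 1.0 / (K + 1)          # consistent guess: eta(0) = eta(K) = 1/(K+1)
    p, q = rho * (1 - mu), rho * (1 - nu)
    eta = [1.0 if k == 0 else 0.0 for k in range(K + 1)]  # arbitrary start
    for _ in range(steps):           # push the distribution forward to stationarity
        new = [0.0] * (K + 1)
        for k in range(K + 1):
            up = p if k < K else 0.0
            down = q if k > 0 else 0.0
            new[k] += eta[k] * (1 - up - down)
            if k < K:
                new[k + 1] += eta[k] * up
            if k > 0:
                new[k - 1] += eta[k] * down
        eta = new
    return eta

K = 4
eta = invariant(K)
assert all(abs(x - 1.0 / (K + 1)) < 1e-8 for x in eta)            # uniform
assert abs(sum(k * x for k, x in enumerate(eta)) - K / 2) < 1e-8  # supply K/2
eff = (1 - eta[0]) * (1 - eta[K])  # assumed: trade needs a buyer with a token
assert abs(eff - (1 - 1.0 / (K + 1)) ** 2) < 1e-8                 # and a seller below K
```

Because the up- and down-rates coincide when \(\mu = \nu \), the uniform distribution is exactly stationary, which is what drives the efficiency maximum at \(\alpha = K/2\).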
Next fix a protocol \((\alpha , \sigma _K)\). Let \(\lceil \alpha \rceil \) be the least integer greater than or equal to \(\alpha \) and set \(K^* = 2\lceil \alpha \rceil \). There are two cases to consider.
In the first case, \(K \le K^*\).
which is the desired result in the first case.
In the second case, \(K > K^*\). Define the protocol \(\Pi ^{\prime } = (\lceil \alpha \rceil , \sigma _K)\); let \(\eta ^{\prime }\) be the invariant token distribution for \(\Pi ^{\prime }\). Let \(\Pi ^* = (\lceil \alpha \rceil , \sigma _{K^*})\); note that the invariant token distribution \(\eta ^*\) is uniform (\(\eta ^*(k) = \frac{1}{K^\star +1}= \frac{1}{2\lceil \alpha \rceil +1}\) for all \(k = 0,1,\ldots ,K^*\)). Note that \(\Pi ^{\prime }\) and \(\Pi \) have the same strategy component but that the token supply for \(\Pi ^{\prime }\) is larger than for \(\Pi \), and that \(\Pi ^{\prime }\) and \(\Pi ^*\) have the same token supply but that the strategy component of \(\Pi ^{\prime }\) has a higher threshold.
We assert that \(\eta ^{\prime }(0) \ge \frac{1}{2\lceil \alpha \rceil +1}\). If not then \(\eta ^{\prime }(0) < \frac{1}{2\lceil \alpha \rceil +1}=\frac{1}{K^\star +1}\). It follows that for all \( k \in \{0,1,\ldots ,K\}\) we have \(\eta ^{\prime }(k) < \frac{1}{K^*+1} = \eta ^*(k)\). Hence
This is a contradiction. Hence, \(\eta ^{\prime }(0) \ge \frac{1}{2\lceil \alpha \rceil +1}\).
Because the token supply for \(\Pi \) is less than \(\Pi ^{\prime }\), the number of agents with no tokens is larger, so \(\eta (0) > \eta ^{\prime }(0) \ge \frac{1}{2\lceil \alpha \rceil +1}\). Hence,
which is the desired result in the second case. This completes the proof of part (i). \(\square \)
Proof of Theorem 9
Both assertions follow immediately by combining Theorems 7 and 8. \(\square \)
Proof of Theorem 10
We first derive the lower bound \(K^L\). If \(\Pi _K = (K/2, \sigma _K)\) is an equilibrium protocol then consecutive marginal utilities bear the relationship
(Because \(\beta \) is fixed, we suppress it in the notation.) Therefore, \(M_{\sigma _K}(k) > \frac{-\phi _l}{\phi _c} M_{\sigma _K}(k-1)\). By induction,
Because \(\phi _c M_{ \sigma _K}(0) = (1-\nu )\rho b - \phi _r M_{ \sigma _K}(1) > (1-\nu )\rho b + (1-\nu )\rho c\), we have
Therefore,
Because \(\Pi _K\) is assumed to be an equilibrium protocol, we must have \(M_{ \sigma _K}(K) \le c/\beta \). Moreover, we must also have
because otherwise \(M_{ \sigma _K}(K) > c/\beta \). Therefore,
This provides the lower bound \(K^L\).
We now derive the upper bound \(K^H\). Rewriting the relation between consecutive marginal utilities, we obtain
Therefore, \(M_{ \sigma _K}(k) < \frac{-\phi _l}{\phi _c + \phi _r} M_{ \sigma _K}(k-1)\). By induction,
Because \(\phi _c M_{ \sigma _K}(0) = (1-\nu )\rho b - \phi _r M_{ \sigma _K}(1) < (1-\nu )\rho b - \phi _r b/\beta = 2(1-\nu )\rho b\), we have,
Therefore,
Because \(\Pi _K\) is assumed to be an equilibrium protocol, we must have \(M_{ \sigma _K}(K-1) \ge c/\beta \). Moreover,
because otherwise \(M_{ \sigma _K}(K-1) < c/\beta \). Therefore,
This provides the upper bound \(K^H\).
Combining the two estimates yields the range containing all integers \(K\) for which \(\Pi _K\) is an equilibrium protocol. The estimate for efficiency follows immediately since \(Eff(\Pi _K) \ge Eff(\Pi _{K^L})\) if \(K \ge K^L\), so the proof is complete. \(\square \)
van der Schaar, M., Xu, J. & Zame, W. Efficient online exchange via fiat money. Econ Theory 54, 211–248 (2013). https://doi.org/10.1007/s00199-013-0744-4