
The notion of public goods, which are non-rival and non-excludable, was first introduced by Samuelson (1954). Examples of public goods include a clean environment, national security, scientific knowledge, accessible public capital, technical know-how and public information. The non-excludability and positive externalities of public goods constitute major factors behind market failure in their provision. The provision of public goods is thus a classic case of market failure which calls for cooperative optimization. However, cooperation cannot be sustained unless there is a guarantee that the agreed-upon optimality principle can be maintained throughout the planning duration.

This chapter presents two sets of applications of subgame consistent cooperative provision of public goods to solve the problem. The first application, based on Yeung and Petrosyan (2013b), conducts the analysis in a cooperative stochastic differential game framework. The second, based on Yeung and Petrosyan (2014b), conducts the analysis in a randomly-furcating stochastic dynamic game framework. The continuous-time differential game analysis is provided in Sects. 12.1, 12.2, 12.3 and 12.4. Section 12.1 provides an analytical framework of cooperative public goods provision. An application to public capital build-up by multiple asymmetric agents is given in Sect. 12.2. An application to the development of technical knowledge as a public good in an industry is provided in Sect. 12.3. Section 12.4 examines an infinite-horizon application of cooperative public capital provision. The discrete-time dynamic game analysis is provided in Sects. 12.5 and 12.6: cooperative public goods provision under accumulation and payoff uncertainties is presented in Sect. 12.5 and an illustration is given in Sect. 12.6. Appendices and chapter notes are contained in Sects. 12.7, 12.8 and 12.9 respectively.

1 Cooperative Public Goods Provision: An Analytical Framework

In this section we set up an analytical framework to study collaborative public goods provision. In particular, group optimal strategies, subgame consistent cooperative schemes and payoff distribution procedures are investigated.

1.1 Game Formulation and Non-cooperative Outcome

Consider the case of the provision of a public good in which a group of n agents carries out a project by making continuous contributions of some inputs or investments to build up a productive stock of a public good. Let K(s) denote the level of the productive stock and \( I_i(s) \) the contribution or investment by agent i at time s. The stock accumulation dynamics is governed by

$$ dK(s)=\Big[\sum_{j=1}^n I_j(s)-\delta K(s)\Big]\,ds+\sigma K(s)\,dz(s),\quad K(0)=K_0, $$
(1.1)

where δ is the rate of decay of the productive stock, z(s) is a Wiener process and σ is a scaling constant.
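The accumulation dynamics (1.1) can be illustrated numerically. The following is a minimal Euler-Maruyama sketch; the constant contribution levels, δ, σ, the horizon and the step size are illustrative assumptions for the example, not part of the model:

```python
import math, random

def simulate_stock(K0, invest, delta, sigma, T, steps, seed=0):
    """Euler-Maruyama simulation of (1.1):
    dK = [sum_j I_j - delta*K] ds + sigma*K dz, with constant contributions."""
    rng = random.Random(seed)
    dt = T / steps
    total_I = sum(invest)
    K = K0
    for _ in range(steps):
        dz = rng.gauss(0.0, math.sqrt(dt))      # Wiener increment
        K += (total_I - delta * K) * dt + sigma * K * dz
        K = max(K, 0.0)                         # keep the stock non-negative
    return K
```

With σ = 0 the simulated path should approach the deterministic solution \( K(T)=I/\delta+(K_0-I/\delta)e^{-\delta T} \) for total contribution I.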

The instantaneous payoff to agent i at time instant s is

$$ R_i\big[K(s)\big]-C_i\big[I_i(s)\big],\quad i\in\{1,2,\cdots,n\}=N, $$
(1.2)

where \( R_i(K) \) is the revenue/payoff to agent i if the productive stock is K, and \( C_i[I_i] \) is the cost to agent i of investing \( I_i \). The marginal revenue product of the productive stock is non-negative before a saturation level \( \overline{K} \) has been reached, that is \( {R}_i^{\prime }(K)\ge 0 \); the marginal cost of investment is positive and non-decreasing, that is \( {C}_i^{\prime}\left[{I}_i\right]>0 \) and \( {C}_i^{{\prime\prime}}\left[{I}_i\right]\ge 0 \). Moreover, the payoffs of the agents are transferable.

The objective of agent \( i\in N \) is to maximize its expected net revenue over the planning horizon T, that is

$$ E\bigg\{\int_0^T \big\{R_i\big[K(s)\big]-C_i\big[I_i(s)\big]\big\}e^{-rs}\,ds+q_i\big[K(T)\big]\,e^{-rT}\bigg\} $$
(1.3)

subject to the stock accumulation dynamics (1.1), where r is the discount rate, and \( {q}_i\left[K(T)\right]\ge 0 \) is an amount conditional on the productive stock that agent i would receive at time T.

Acting in their individual interests, the agents are involved in a stochastic differential game in which a feedback Nash equilibrium has to be sought. Let \( \{\phi_i(s,K)=I_i^{*}(s)\in I^i, \text{ for } i\in N \text{ and } s\in[0,T]\} \) denote a set of feedback strategies that brings about a feedback Nash equilibrium of the game (1.1) and (1.3). Invoking Theorem 1.1 in Chap. 3 for solving stochastic differential games, a feedback solution to the problem (1.1) and (1.3) can be characterized by the following set of Hamilton-Jacobi-Bellman equations:

$$ \begin{aligned} -V_t^i(t,K)-\frac{1}{2}V_{KK}^i(t,K)\sigma^2K^2 =\max_{I_i}\bigg\{ &\big[R_i(K)-C_i(I_i)\big]e^{-rt}\\ &+V_K^i(t,K)\Big[\sum_{j=1,\, j\ne i}^n \phi_j(t,K)+I_i-\delta K\Big]\bigg\}, \end{aligned} $$
(1.4)
$$ V^i(T,K)=q_i(K)\,e^{-rT},\quad \text{for } i\in N. $$
(1.5)

A Nash equilibrium non-cooperative outcome of public goods provision by the n agents is characterized by the solution of the system of partial differential equations (1.4) and (1.5).

1.2 Subgame Consistent Cooperative Scheme

It is a well-known problem that noncooperative provision of goods with externalities generally leads to dynamic inefficiency. Cooperative games suggest the possibility of socially optimal and group efficient solutions to decision problems involving strategic action. Now consider the case when the agents agree to cooperate and extract gains from cooperation. In particular, they act cooperatively and agree to distribute the joint payoff among themselves according to an optimality principle. If any agent disagrees and deviates from the cooperation scheme, all agents will revert to the noncooperative framework to counteract the free-rider problem in public goods provision. In particular, free-riding would lead to a lower future payoff due to the loss of cooperative gains. Thus a credible threat is in place. As stated before, group optimality, individual rationality and subgame consistency are three crucial properties that a sustainable cooperative scheme has to satisfy.

To fulfil group optimality the agents maximize their expected joint payoff; that is, they solve the stochastic dynamic programming problem

$$ \max_{\{I_1(s),I_2(s),\cdots,I_n(s)\}} E\bigg\{\sum_{j=1}^n\Big[\int_0^T \big\{R_j\big[K(s)\big]-C_j\big[I_j(s)\big]\big\}e^{-rs}\,ds +q_j\big[K(T)\big]\,e^{-rT}\Big]\bigg\} $$
(1.6)

subject to the stock dynamics (1.1).

Let \( \{\psi_i(s,K), \text{ for } i\in N \text{ and } s\in[0,T]\} \) denote a set of strategies that brings about an optimal solution to the stochastic control problem (1.1) and (1.6). Invoking the standard stochastic dynamic programming technique in Theorem A.3 of the Technical Appendices, an optimal solution to the stochastic control problem (1.1) and (1.6) can be characterized by the following set of equations (see also Fleming and Rishel 1975; Ross 1983):

$$ \begin{aligned} &-W_t(t,K)-\frac{1}{2}W_{KK}(t,K)\sigma^2K^2\\ &=\max_{I_1,I_2,\cdots,I_n}\bigg\{\sum_{j=1}^n\big[R_j(K)-C_j(I_j)\big]e^{-rt} +W_K(t,K)\Big(\sum_{j=1}^n I_j-\delta K\Big)\bigg\}, \end{aligned} $$
(1.7)
$$ W(T,K)=\sum_{j=1}^n q_j(K)\,e^{-rT}. $$
(1.8)

A group optimal solution of public goods provision by the n agents is characterized by the solution of the partial differential equations (1.7) and (1.8). In particular, W(t, K) gives the maximized expected joint payoff of the n agents at time \( t\in \left[0,T\right] \) given that the state is K.

Substituting the optimal strategies \( \Big\{{\psi}_i\left(s,K\right) \), for \( i\in N \) and \( s\in \left[0,T\right]\Big\} \) into (1.1) yields the optimal path of productive stock dynamics:

$$ dK(s)=\Big[\sum_{j=1}^n \psi_j\big(s,K(s)\big)-\delta K(s)\Big]\,ds+\sigma K(s)\,dz(s),\quad K(0)=K_0. $$
(1.9)

We use \( X_s^* \) to denote the set of realizable values of K(s) generated by (1.9) at time s. The term \( {K}_s^{*}\in {X}_s^{*} \) is used to denote an element in \( X_s^* \).

Let \( \xi(\cdot,\cdot) \) denote the agreed-upon imputation vector guiding the distribution of the total cooperative payoff under the agreed-upon optimality principle along the cooperative trajectory \( \{K^*(s)\}_{s\in[0,T]} \). At time s, if the productive stock is \( K_s^* \), the imputation vector according to \( \xi(\cdot,\cdot) \) is

$$ \xi\big(s,K_s^*\big)=\big[\xi^1\big(s,K_s^*\big),\xi^2\big(s,K_s^*\big),\cdots,\xi^n\big(s,K_s^*\big)\big] \quad \text{for } s\in[0,T]. $$
(1.10)

A variety of examples of imputations \( \xi(s,K_s^*) \) can be found in Chap. 2. For individual rationality to be maintained throughout all time \( s\in \left[0,T\right] \), it is required that:

$$ \xi^i\big(s,K_s^*\big)\ge V^i\big(s,K_s^*\big),\quad \text{for } i\in N \text{ and } s\in[0,T]. $$

To satisfy group optimality , the imputation vector has to satisfy

$$ W\big(s,K_s^*\big)=\sum_{j=1}^n \xi^j\big(s,K_s^*\big),\quad \text{for } s\in[0,T]. $$

1.3 Payoff Distribution Procedure

Following the analysis in Chap. 3, we formulate a Payoff Distribution Procedure (PDP) so that the agreed-upon imputations (1.10) can be realized. Let \( B_i(s,K^*(s)) \) for \( s\in[0,T) \) denote the payment that agent i will receive at time s under the cooperative agreement if \( K^*(s) \) is realized at that time.

The payment scheme involving \( B_i(s,K^*(s)) \) constitutes a PDP in the sense that along the cooperative trajectory \( \{K^*(s)\}_{s\in[0,T]} \) the imputation to agent i covering the duration \( [\tau,T] \) can be expressed as:

$$ \xi^i\big(\tau,K_\tau^*\big)=E\bigg\{\int_\tau^T B_i\big(s,K^*(s)\big)e^{-rs}\,ds +q_i\big[K^*(T)\big]e^{-rT}\,\Big|\,K^*(\tau)=K_\tau^*\bigg\}, $$
(1.11)

for \( i\in N \) and \( \tau \in \left[0,T\right] \).

The values of \( B_i(s,K^*(s)) \) for \( i\in N \) and \( s\in[\tau,T) \) that lead to the realization of imputation (1.10), and hence a subgame consistent cooperative solution, can be obtained from the following theorem.

Theorem 1.1

A PDP for agent \( i\in N \) with a terminal payment \( q_i(K_T^*) \) at time T and an instantaneous payment at time \( s\in \left[0,T\right] \) whose present value is:

$$ \begin{aligned} B_i\big(s,K_s^*\big)e^{-rs}=&-\xi_s^i\big(s,K_s^*\big)-\frac{1}{2}\sigma^2\big(K_s^*\big)^2\xi_{K_sK_s}^i\big(s,K_s^*\big)\\ &-\xi_{K_s}^i\big(s,K_s^*\big)\Big[\sum_{j=1}^n \psi_j^*\big(s,K_s^*\big)-\delta K_s^*\Big], \quad \text{for } i\in N \text{ and } K_s^*\in X_s^*, \end{aligned} $$
(1.12)

would lead to the realization of the imputation \( \xi(s,K_s^*) \) in (1.10).

Proof

See Appendix A. ■

Note that the payoff distribution procedure in Theorem 1.1 would give rise to the agreed-upon imputation in (1.10) and therefore subgame consistency is satisfied.

When all agents are using the cooperative strategies, the payoff that agent i will directly receive at time s is

$$ R_i\big(K_s^*\big)-C_i\big[\psi_i^*\big(s,K_s^*\big)\big]. $$

However, according to the agreed-upon imputation, agent i is supposed to receive \( B_i(s,K_s^*) \). Therefore a transfer payment (which could be positive or negative)

$$ \varpi_i\big(s,K_s^*\big)=B_i\big(s,K_s^*\big)-\big\{R_i\big(K_s^*\big)-C_i\big[\psi_i^*\big(s,K_s^*\big)\big]\big\} $$
(1.13)

will be imputed to agent \( i\in N \) at time \( s\in \left[0,T\right] \).
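The consistency between (1.11) and (1.12) can be checked numerically in a simple special case. The sketch below assumes σ = 0, a constant aggregate investment level m, and a hypothetical linear imputation ξ(s, K) = (aK + b)e^{-rs}; all of these are illustrative assumptions, not part of the model. Integrating the instantaneous payments of (1.12) over [0, T] and adding the terminal imputation should recover ξ(0, K_0):

```python
import math

def check_pdp(a, b, r, m, delta, K0, T, steps=50000):
    """Verify (1.11) for the PDP of Theorem 1.1 in a deterministic toy case:
    sigma = 0, constant aggregate investment m, and a hypothetical linear
    imputation xi(s, K) = (a*K + b) e^{-rs}.  With sigma = 0, (1.12) reduces to
    B_i e^{-rs} = -xi_s - xi_K * (m - delta*K)."""
    dt = T / steps
    K, acc = K0, 0.0
    for k in range(steps):
        s = k * dt
        xi_s = -r * (a * K + b) * math.exp(-r * s)   # partial of xi wrt s
        xi_K = a * math.exp(-r * s)                  # partial of xi wrt K
        drift = m - delta * K                        # deterministic dynamics (1.1)
        acc += (-xi_s - xi_K * drift) * dt           # present-value payment (1.12)
        K += drift * dt
    terminal = (a * K + b) * math.exp(-r * T)
    return acc + terminal, a * K0 + b                # both should equal xi(0, K0)
```

The two returned values should agree up to discretization error, which is the defining property of a PDP.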

2 An Application in Asymmetric Agents Public Capital Build-up

In this section, we examine an application of the analysis to the build-up of public capital by multiple asymmetric agents.

2.1 Game Model

Consider an economic region with n asymmetric agents. These agents receive benefits from an existing public capital stock K(s). The accumulation dynamics of the public capital stock is governed by

$$ dK(s)=\Big[\sum_{j=1}^n I_j(s)-\delta K(s)\Big]\,ds+\sigma K(s)\,dz(s),\quad K(0)=K_0, $$
(2.1)

where δ is the depreciation rate of the public capital and \( {I}_i(s)\in \left[0,\overline{I}\right] \) is the investment made by the ith agent in the public capital.

Each agent gains from the existing level of public capital and the ith agent seeks to maximize its expected stream of monetary gains:

$$ \begin{aligned} E\bigg\{\int_0^T &\big\{\alpha_i K(s)-c_i\big[I_i(s)\big]^2\big\}e^{-rs}\,ds\\ &+\big[q_1^i K(T)+q_2^i\big]e^{-rT}\,\Big|\,K(0)=K_0\bigg\},\quad \text{for } i\in N, \end{aligned} $$
(2.2)
subject to (2.1), where \( \alpha_i \), \( c_i \), \( q_1^i \) and \( q_2^i \) are positive constants, with \( {\alpha}_i\ne {\alpha}_j,{c}_i\ne {c}_j,{q}_1^i\ne {q}_1^j \) and \( {q}_2^i\ne {q}_2^j \), for \( i,j\in N \) and \( i\ne j \).

In particular, \( \alpha_i K(s) \) gives the gain that agent i derives from the public capital, \( c_i[I_i(s)]^2 \) is the cost of investing \( I_i(s) \) in the public capital, and \( q_1^i K(T)+q_2^i \) is agent i's terminal valuation of the public capital at time T. Invoking the analysis in (1.4 and 1.5) in Sect. 12.1 we obtain the corresponding Hamilton-Jacobi-Bellman equations characterizing a non-cooperative outcome as:

$$ \begin{aligned} -V_t^i(t,K)-\frac{1}{2}V_{KK}^i(t,K)\sigma^2K^2 =\max_{I_i}\bigg\{&\big\{\alpha_i K-c_i(I_i)^2\big\}e^{-rt}\\ &+V_K^i(t,K)\Big[\sum_{j=1,\,j\ne i}^n \phi_j(t,K)+I_i-\delta K\Big]\bigg\}, \end{aligned} $$
(2.3)
$$ V^i(T,K)=\big[q_1^i K+q_2^i\big]e^{-rT},\quad \text{for } i\in N. $$
(2.4)

Performing the indicated maximization in (2.3) yields:

$$ I_i=\frac{V_K^i(t,K)}{2c_i}\,e^{rt},\quad \text{for } i\in N. $$
(2.5)
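The first-order condition behind (2.5) can be checked directly: the bracket in (2.3) is quadratic and concave in \( I_i \). A minimal sketch, with illustrative values for \( V_K^i \), \( c_i \), r and t (all assumptions for the example):

```python
import math

def payoff_term(I, VK, c_i, r, t):
    """The I_i-dependent part of the maximand in (2.3):
    -c_i I^2 e^{-rt} + V_K^i(t,K) * I."""
    return -c_i * I**2 * math.exp(-r * t) + VK * I

def best_response(VK, c_i, r, t):
    """Solve the first-order condition of (2.3), giving (2.5):
    I_i = V_K^i(t,K) e^{rt} / (2 c_i)."""
    return VK * math.exp(r * t) / (2.0 * c_i)
```

Perturbing the candidate optimum in either direction should never increase the payoff term, confirming the maximizer.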

To solve the game (2.1 and 2.2) we first obtain the value functions indicating the game equilibrium payoffs of the agents as follows.

Proposition 2.1

The value function \( V^i(t,K) \) of agent i can be obtained as:

$$ V^i(t,K)=\big[A_i(t)K+C_i(t)\big]e^{-rt},\quad \text{for } i\in N; $$
(2.6)

where

$$ {A}_i(t)=\left({q}_1^i-\frac{\alpha_i}{r+\delta}\right){e}^{-\left(r+\delta \right)\left(T-t\right)}+\frac{\alpha_i}{r+\delta }, $$

and the value of C i (t) is generated by the following first order linear differential equation:

$$ \begin{aligned} &\dot C_i(t)=rC_i(t)+\frac{\big[A_i(t)\big]^2}{4c_i}-\Big[\sum_{j=1}^n \frac{A_i(t)A_j(t)}{2c_j}\Big],\\ &C_i(T)=q_2^i,\quad \text{for } i\in N. \end{aligned} $$
(2.7)

Proof

See Appendix B. ■

Using Proposition 2.1 and (2.5), the game equilibrium strategies can be obtained to characterize the market equilibrium. The asymmetry of the agents brings about different payoffs and different levels of public capital investment.
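Proposition 2.1 can be explored numerically. The sketch below evaluates the closed-form \( A_i(t) \) and integrates the linear ODE (2.7) for \( C_i(t) \) backward from its terminal condition; the backward-Euler scheme, step count and parameter values are illustrative assumptions:

```python
import math

def A_coeff(t, q1, alpha, r, delta, T):
    """Closed-form A_i(t) from Proposition 2.1."""
    return (q1 - alpha / (r + delta)) * math.exp(-(r + delta) * (T - t)) + alpha / (r + delta)

def solve_C(i, q2, A_funcs, c, r, T, steps=20000):
    """Backward Euler integration of the linear ODE (2.7):
    C_i'(t) = r C_i(t) + A_i(t)^2/(4 c_i) - sum_j A_i(t) A_j(t)/(2 c_j),
    with C_i(T) = q2_i.  Returns C_i(0)."""
    dt = T / steps
    C, t = q2, T
    for _ in range(steps):
        Ai = A_funcs[i](t)
        rhs = r * C + Ai**2 / (4 * c[i]) - sum(Ai * A_funcs[j](t) / (2 * c[j])
                                               for j in range(len(c)))
        C -= rhs * dt   # step backward from T toward 0
        t -= dt
    return C
```

A convenient sanity check: choosing \( q_1^i=\alpha_i/(r+\delta) \) makes \( A_i(t) \) constant, in which case (2.7) has an elementary closed-form solution against which the integrator can be compared.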

2.2 Cooperative Provision of Public Capital

Now we consider the case when the agents agree to act cooperatively and seek higher gains. They agree to maximize their expected joint gain and distribute the cooperative gain proportionally to their non-cooperative gains. The agents then solve the problem of maximizing

$$ \begin{aligned} E\bigg\{\int_0^T \sum_{j=1}^n &\big\{\alpha_j K(s)-c_j\big[I_j(s)\big]^2\big\}e^{-rs}\,ds\\ &+\sum_{j=1}^n \big[q_1^j K(T)+q_2^j\big]e^{-rT}\,\Big|\,K(0)=K_0\bigg\} \end{aligned} $$
(2.8)

subject to dynamics (2.1).

Following the analysis in (1.7 and 1.8) in Sect. 12.1, the corresponding stochastic dynamic programming equation can be obtained as:

$$ \begin{aligned} &-W_t(t,K)-\frac{1}{2}W_{KK}(t,K)\sigma^2K^2\\ &=\max_{I_1,I_2,\cdots,I_n}\bigg\{\sum_{j=1}^n\big[\alpha_j K-c_j(I_j)^2\big]e^{-rt} +W_K(t,K)\Big(\sum_{j=1}^n I_j-\delta K\Big)\bigg\}, \end{aligned} $$
(2.9)
$$ W(T,K)=\sum_{j=1}^n\big(q_1^j K+q_2^j\big)e^{-rT}. $$
(2.10)

Performing the indicated maximization in (2.9) yields:

$$ I_i=\frac{W_K(t,K)}{2c_i}\,e^{rt},\quad \text{for } i\in N. $$
(2.11)

The maximized expected joint payoff of the n participating agents can be obtained as:

Proposition 2.2

The value function W(t, K) indicating the maximized expected joint payoff is

$$ W(t,K)=\big[A(t)K+C(t)\big]e^{-rt}, $$
(2.12)

where

$$ A(t)=\Big(\sum_{j=1}^n q_1^j-\frac{\sum_{j=1}^n \alpha_j}{r+\delta}\Big)e^{-(r+\delta)(T-t)}+\frac{\sum_{j=1}^n \alpha_j}{r+\delta}, $$

and the value of C(t) is generated by the following first order linear differential equation:

$$ \dot C(t)=rC(t)-\sum_{j=1}^n \frac{\big[A(t)\big]^2}{4c_j},\quad C(T)=\sum_{j=1}^n q_2^j. $$

Proof

Follow the proof of Proposition 2.1. ■

Using (2.11) and Proposition 2.2 the optimal trajectory of public capital stock can be expressed as:

$$ dK(s)=\Big[\sum_{j=1}^n \frac{A(s)}{2c_j}-\delta K(s)\Big]\,ds+\sigma K(s)\,dz(s),\quad K(0)=K_0. $$
(2.13)

We use \( X_s^* \) to denote the set of realizable values of \( K^*(s) \) generated by (2.13) at time s. The term \( {K}_s^{*}\in {X}_s^{*} \) is used to denote an element in \( X_s^* \).

2.3 Subgame Consistent Payoff Distribution

Under cooperation every agent will be using the Pareto optimal strategies in (2.11) and the expected payoff that agent i will receive over the cooperative duration [0, T] becomes:

$$ E\bigg\{\int_0^T \Big(\alpha_i K^*(s)-\frac{\big[A(s)\big]^2}{4c_i}\Big)e^{-rs}\,ds +\big[q_1^i K^*(T)+q_2^i\big]e^{-rT}\bigg\},\quad i\in N. $$

At initial time 0, the agents agree to distribute the cooperative gain proportional to their non-cooperative gains. Therefore agent i will receive an imputation

$$ \begin{aligned} \zeta^i(0,K_0)&=\frac{V^i(0,K_0)}{\sum_{j=1}^n V^j(0,K_0)}\,W(0,K_0)\\ &=\frac{A_i(0)K_0+C_i(0)}{\sum_{j=1}^n \big[A_j(0)K_0+C_j(0)\big]}\,\big[A(0)K_0+C(0)\big],\quad \text{for } i\in N. \end{aligned} $$

With the agents agreeing to distribute their gains proportional to their non-cooperative gains, the imputation vector becomes

$$ \begin{aligned} \xi^i\big(s,K_s^*\big)&=\frac{V^i\big(s,K_s^*\big)}{\sum_{j=1}^n V^j\big(s,K_s^*\big)}\,W\big(s,K_s^*\big)\\ &=\frac{\big[A_i(s)K_s^*+C_i(s)\big]}{\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]}\,\big[A(s)K_s^*+C(s)\big]e^{-rs}, \end{aligned} $$
(2.14)

for \( i\in N \) and \( s\in \left[0,T\right] \) if the public capital stock is \( {K}_s^{*}\in {X}_s^{*} \).
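The proportional rule in (2.14) can be sketched as a simple computation; the payoff values used below are illustrative placeholders for \( V^i(s,K_s^*) \) and \( W(s,K_s^*) \) at one point \( (s,K_s^*) \):

```python
def imputation_shares(V_list, W):
    """Proportional imputation (2.14): agent i receives the fraction
    V^i / (sum_j V^j) of the cooperative payoff W, all evaluated at
    the same (s, K*_s)."""
    total = sum(V_list)
    return [v / total * W for v in V_list]
```

By construction the shares sum to W (group optimality), and whenever the cooperative payoff weakly exceeds the sum of non-cooperative payoffs, each share weakly exceeds the corresponding non-cooperative payoff (individual rationality).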

To guarantee dynamical stability in a dynamic cooperation scheme, the solution has to satisfy the property of subgame consistency which requires the satisfaction of (2.14). Invoking Theorem 1.1, a PDP for agent \( i\in N \) with a terminal payment \( \left[{q}_1^iK(T)+{q}_2^i\right] \) at time T and an instantaneous payment (in present value) at time \( s\in \left[0,T\right] \)

$$ \begin{aligned} B_i\big(s,K_s^*\big)e^{-rs}=\;& r\,\frac{\big[A_i(s)K_s^*+C_i(s)\big]}{\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]}\,\big[A(s)K_s^*+C(s)\big]e^{-rs}\\ &-\frac{\big[A_i(s)K_s^*+C_i(s)\big]}{\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]}\,\big[\dot A(s)K_s^*+\dot C(s)\big]e^{-rs}\\ &-\frac{\big[A(s)K_s^*+C(s)\big]e^{-rs}}{\Big(\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]\Big)^2} \bigg[\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]\big[\dot A_i(s)K_s^*+\dot C_i(s)\big]\\ &\qquad -\big[A_i(s)K_s^*+C_i(s)\big]\sum_{j=1}^n\big[\dot A_j(s)K_s^*+\dot C_j(s)\big]\bigg]\\ &-\xi_{K_s}^i\big(s,K_s^*\big)\Big[\sum_{j=1}^n\frac{A(s)}{2c_j}-\delta K_s^*\Big] -\frac{1}{2}\sigma^2\big(K_s^*\big)^2\,\xi_{K_sK_s}^i\big(s,K_s^*\big), \end{aligned} $$
(2.15)

where

$$ \begin{aligned} \xi_{K_s}^i\big(s,K_s^*\big)=\;&\frac{\big[A_i(s)K_s^*+C_i(s)\big]}{\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]}\,A(s)e^{-rs}\\ &+\frac{A_i(s)\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]-\big[A_i(s)K_s^*+C_i(s)\big]\sum_{j=1}^n A_j(s)} {\Big(\sum_{j=1}^n\big[A_j(s)K_s^*+C_j(s)\big]\Big)^2}\\ &\quad\times\big[A(s)K_s^*+C(s)\big]e^{-rs}, \end{aligned} $$

and \( {\xi}_{K_s{K}_s}^i\left(s,{K}_s^{*}\right)=\partial {\xi}_{K_s}^i\left(s,{K}_s^{*}\right)/\partial {K}_s, \)

for \( i\in N \) and \( {K}_s^{*}\in {X}_s^{*} \),

would lead to the realization of the imputation \( \xi(s,K_s^*) \) in (2.14).

The values of the terms \( A_j(s),{\dot A}_j(s),{C}_j(s) \) and \( {\dot C}_j(s) \) are given in Propositions 2.1 and 2.2 and their proofs.

Finally, when all agents are using the cooperative strategies, the payoff that agent i will directly receive at time s is

\( {\alpha}_i{K}_s^{*}-\frac{{\left[A(s)\right]}^2}{4{c}_i} \).

However, according to the agreed-upon imputation, agent i is to receive \( B_i(s,K_s^*) \) in (2.15). Therefore a transfer payment

$$ \varpi_i\big(s,K_s^*\big)=B_i\big(s,K_s^*\big)-\Big[\alpha_i K_s^*-\frac{\big[A(s)\big]^2}{4c_i}\Big] $$
(2.16)

will be imputed to agent \( i\in N \) at time \( s\in \left[0,T\right] \).

3 An Application in the Development of Technical Knowledge

In this section, we examine an application of the analysis to the development of technical knowledge as a public good in an industry.

3.1 Game Formulation and Noncooperative Market Outcome

Consider an industry with two types of firms using a common type of technology. There are \( n_1 \) type 1 firms and \( n_2 \) type 2 firms, and the planning horizon is [0, T]. We use \( {I}_i^{(1)}(s)\in \left[0,\overline{I}\right] \) to denote the technology investment of the ith type 1 firm, for \( i\in \left\{1,2,\cdots, {n}_1\right\}\equiv {N}_1 \). Similarly, \( {I}_j^{(2)}(s)\in \left[0,\overline{I}\right] \) is used to denote the technology investment of the jth type 2 firm, for \( j\in \left\{{n}_1+1,{n}_1+2,\cdots, {n}_1+{n}_2\right\}\equiv {N}_2 \). The technology accumulation dynamics is governed by

$$ dK(s)=\Big[\sum_{i\in N_1} I_i^{(1)}(s)+\sum_{j\in N_2} I_j^{(2)}(s)-\delta K(s)\Big]\,ds+\sigma K(s)\,dz(s),\quad K(0)=K_0, $$
(3.1)

where δ is the depreciation rate of technology.

Each firm benefits from the existing level of technology. The ith type 1 firm seeks to maximize its expected stream of profits:

$$ \begin{aligned} E\bigg\{\int_0^T &\Big\{\alpha_1 K(s)-b_1\big[K(s)\big]^2-\rho_1 I_i^{(1)}(s)-(c_1/2)\big[I_i^{(1)}(s)\big]^2\Big\}e^{-rs}\,ds\\ &+e^{-rT}\Big[q_1\big(K(T)\big)^2+q_2K(T)+q_3\Big]\,\Big|\,K(0)=K_0\bigg\},\quad \text{for } i\in N_1, \end{aligned} $$
(3.2)

subject to (3.1).

In particular, given the technology level K(s), the instantaneous revenue of a type 1 firm is \( K(s)\left[{\alpha}_1-{b}_1K(s)\right] \). The cost of investment is \( \rho_1 I_i^{(1)}(s)+(c_1/2)\big[I_i^{(1)}(s)\big]^2 \). For each firm, there is a terminal valuation \( {e}^{-rT}\left[{q}_1{\left(K(T)\right)}^2+{q}_2K(T)+{q}_3\right] \) with \( q_1<0 \), \( q_2>0 \) and \( q_3>0 \).

The jth type 2 firm seeks to maximize its expected stream of profits:

$$ \begin{aligned} E\bigg\{\int_0^T &\Big\{\alpha_2 K(s)-b_2\big[K(s)\big]^2-\rho_2 I_j^{(2)}(s)-(c_2/2)\big[I_j^{(2)}(s)\big]^2\Big\}e^{-rs}\,ds\\ &+e^{-rT}\Big[q_1\big(K(T)\big)^2+q_2K(T)+q_3\Big]\,\Big|\,K(0)=K_0\bigg\},\quad \text{for } j\in N_2, \end{aligned} $$
(3.3)

subject to (3.1).

To derive the noncooperative market outcome of the industry we invoke the analysis in (1.4 and 1.5) in Sect. 12.1 and obtain the corresponding Hamilton-Jacobi-Bellman equations

$$ \begin{aligned} &-V_t^{(1)i}(t,K)-\frac{1}{2}V_{KK}^{(1)i}(t,K)\sigma^2K^2\\ &=\max_{I_i^{(1)}}\bigg\{\Big\{\alpha_1 K-b_1K^2-\rho_1 I_i^{(1)}-(c_1/2)\big(I_i^{(1)}\big)^2\Big\}e^{-rt}\\ &\qquad+V_K^{(1)i}(t,K)\Big[\sum_{\ell\in N_1,\,\ell\ne i}\phi_\ell^{(1)}(t,K)+\sum_{\ell\in N_2}\phi_\ell^{(2)}(t,K)+I_i^{(1)}-\delta K\Big]\bigg\},\\ &V^{(1)i}(T,K)=e^{-rT}\big(q_1K^2+q_2K+q_3\big),\quad \text{for } i\in N_1; \end{aligned} $$
(3.4)
$$ \begin{aligned} &-V_t^{(2)j}(t,K)-\frac{1}{2}V_{KK}^{(2)j}(t,K)\sigma^2K^2\\ &=\max_{I_j^{(2)}}\bigg\{\Big\{\alpha_2 K-b_2K^2-\rho_2 I_j^{(2)}-(c_2/2)\big(I_j^{(2)}\big)^2\Big\}e^{-rt}\\ &\qquad+V_K^{(2)j}(t,K)\Big[\sum_{\ell\in N_1}\phi_\ell^{(1)}(t,K)+\sum_{\ell\in N_2,\,\ell\ne j}\phi_\ell^{(2)}(t,K)+I_j^{(2)}-\delta K\Big]\bigg\},\\ &V^{(2)j}(T,K)=e^{-rT}\big(q_1K^2+q_2K+q_3\big),\quad \text{for } j\in N_2. \end{aligned} $$
(3.5)

Performing the maximization operator in (3.4) and (3.5) yields the game equilibrium investment strategies of the type 1 firms and the type 2 firms as:

$$ {I}_i^{(1)}=\frac{V_K^{(1)i}\left(t,K\right){e}^{rt}-{\rho}_1}{c_1},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in {N}_1; $$
(3.6)

and

$$ {I}_j^{(2)}=\frac{V_K^{(2)j}\left(t,K\right){e}^{rt}-{\rho}_2}{c_2},\kern0.36em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em j\in {N}_2. $$
(3.7)

To solve the game we first obtain the value functions indicating the game equilibrium payoffs of the firms as follows.

Proposition 3.1

The value functions indicating the game equilibrium payoffs of the firms are

$$ \begin{array}{l}{V}^{(1)i}\left(t,K\right)=\left[{A}_1(t){K}^2+{B}_1(t)K+{C}_1(t)\right]{e}^{-rt}\kern0.36em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in {N}_1;\kern0.36em \mathrm{and}\\ {}{V}^{(2)j}\left(t,K\right)=\left[{A}_2(t){K}^2+{B}_2(t)K+{C}_2(t)\right]{e}^{-rt},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em j\in {N}_2;\end{array} $$
(3.8)

where the values of \( {A}_1(t) \), \( {A}_2(t) \), \( {B}_1(t) \), \( {B}_2(t) \), \( {C}_1(t) \) and \( {C}_2(t) \) are generated by the following block-recursive ordinary differential equations:

$$ \begin{array}{l}{\overset{.}{A}}_1(t)=\frac{\left(2-4{n}_1\right)}{c_1}{\left[{A}_1(t)\right]}^2-\frac{4{n}_2}{c_2}{A}_1(t){A}_2(t)+\left(r+2\delta -{\sigma}^2\right){A}_1(t)+{b}_1,\\ {}{\overset{.}{A}}_2(t)=\frac{\left(2-4{n}_2\right)}{c_2}{\left[{A}_2(t)\right]}^2-\frac{4{n}_1}{c_1}{A}_1(t){A}_2(t)+\left(r+2\delta -{\sigma}^2\right){A}_2(t)+{b}_2,\\ {}{A}_1(T)={q}_1\kern0.24em \mathrm{and}\kern0.24em {A}_2(T)={q}_1;\end{array} $$
(3.9)
$$ \begin{array}{l}{\overset{.}{B}}_1(t)=\left[\left(r+\delta \right)-\left(\frac{4{n}_1}{c_1}-\frac{2}{c_1}\right){A}_1(t)-2\frac{n_2}{c_2}{A}_2(t)\right]{B}_1(t)-2\frac{n_2}{c_2}{A}_1(t){B}_2(t)\\ {}+2\left(\frac{n_1{\rho}_1}{c_1}+\frac{n_2{\rho}_2}{c_2}\right){A}_1(t)-{\alpha}_1.\\ {}{\overset{.}{B}}_2(t)=\left[\left(r+\delta \right)-\left(\frac{4{n}_2}{c_2}-\frac{2}{c_2}\right){A}_2(t)-2\frac{n_1}{c_1}{A}_1(t)\right]{B}_2(t)-2\frac{n_1}{c_1}{A}_2(t){B}_1(t)\\ {}+2\left(\frac{n_1{\rho}_1}{c_1}+\frac{n_2{\rho}_2}{c_2}\right){A}_2(t)-{\alpha}_2.\\ {}{B}_1(T)={q}_2\kern0.24em \mathrm{and}\kern0.24em {B}_2(T)={q}_2;\end{array} $$
(3.10)
$$ \begin{array}{l}{\overset{.}{C}}_1(t)=r{C}_1(t)-\left(\frac{n_1}{c_1}-\frac{1}{2{c}_1}\right){\left[{B}_1(t)\right]}^2-\frac{n_2}{c_2}{B}_1(t){B}_2(t)\\ {}+\left(\frac{n_1{\rho}_1}{c_1}+\frac{n_2{\rho}_2}{c_2}\right){B}_1(t)-\frac{\rho_1^2}{2{c}_1};\\ {}{\overset{.}{C}}_2(t)=r{C}_2(t)-\left(\frac{n_2}{c_2}-\frac{1}{2{c}_2}\right){\left[{B}_2(t)\right]}^2-\frac{n_1}{c_1}{B}_1(t){B}_2(t)\\ {}+\left(\frac{n_1{\rho}_1}{c_1}+\frac{n_2{\rho}_2}{c_2}\right){B}_2(t)-\frac{\rho_2^2}{2{c}_2};\\ {}{C}_1(T)={q}_3\kern0.24em \mathrm{and}\kern0.24em {C}_2(T)={q}_3.\end{array} $$
(3.11)

Proof

See Appendix C. ■

System (3.9, 3.10 and 3.11) is a block-recursive system of ordinary differential equations. In particular, (3.9) is a system which involves \( {A}_1(t) \) and \( {A}_2(t) \); (3.10) is a system which involves \( {A}_1(t) \), \( {A}_2(t) \), \( {B}_1(t) \) and \( {B}_2(t) \); and (3.11) is a system which involves \( {B}_1(t) \), \( {B}_2(t) \), \( {C}_1(t) \) and \( {C}_2(t) \).

A convenient way to solve the problem numerically is to express system (3.9) as an initial value problem with the variables \( {A}_1^{*}(t)={A}_1\left(T-t\right) \) and \( {A}_2^{*}(t)={A}_2\left(T-t\right) \) where:

$$ \begin{array}{l}{\overset{.}{A}}_1^{*}(t)=\frac{\left(4{n}_1-2\right)}{c_1}{\left[{A}_1^{*}(t)\right]}^2+\frac{4{n}_2}{c_2}{A}_1^{*}(t){A}_2^{*}(t)-\left(r+2\delta -{\sigma}^2\right){A}_1^{*}(t)-{b}_1,\\ {}{\overset{.}{A}}_2^{*}(t)=\frac{\left(4{n}_2-2\right)}{c_2}{\left[{A}_2^{*}(t)\right]}^2+\frac{4{n}_1}{c_1}{A}_1^{*}(t){A}_2^{*}(t)-\left(r+2\delta -{\sigma}^2\right){A}_2^{*}(t)-{b}_2,\\ {}{A}_1^{*}(0)={q}_1\kern0.24em \mathrm{and}\kern0.24em {A}_2^{*}(0)={q}_1.\end{array} $$
(3.12)

Using Euler’s method, the numerical solution of (3.12) could be readily evaluated as:

$$ \begin{array}{l}{A}_1^{*}\left(t+\Delta t\right)={A}_1^{*}(t)+\left[\frac{\left(4{n}_1-2\right)}{c_1}{\left[{A}_1^{*}(t)\right]}^2+\frac{4{n}_2}{c_2}{A}_1^{*}(t){A}_2^{*}(t)-\left(r+2\delta -{\sigma}^2\right){A}_1^{*}(t)-{b}_1\right]\Delta t,\\ {}{A}_2^{*}\left(t+\Delta t\right)={A}_2^{*}(t)+\left[\frac{\left(4{n}_2-2\right)}{c_2}{\left[{A}_2^{*}(t)\right]}^2+\frac{4{n}_1}{c_1}{A}_1^{*}(t){A}_2^{*}(t)-\left(r+2\delta -{\sigma}^2\right){A}_2^{*}(t)-{b}_2\right]\Delta t.\end{array} $$
(3.13)

The numerical values generated in (3.13) yield \( {A}_1^{*}(t)={A}_1\left(T-t\right) \) and \( {A}_2^{*}(t)={A}_2\left(T-t\right) \). Substituting \( {A}_1(t) \) and \( {A}_2(t) \) into (3.10) yields a pair of linear differential equations in \( {B}_1(t) \) and \( {B}_2(t) \) which can readily be solved numerically. Substituting \( {B}_1(t) \) and \( {B}_2(t) \) into (3.11) yields a pair of independent linear differential equations in \( {C}_1(t) \) and \( {C}_2(t) \), which once again are readily solvable numerically.
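The block-recursive computation above can be sketched numerically as follows. All parameter values in this sketch are purely illustrative assumptions, not taken from the text: it integrates the reversed system (3.12) by Euler's method as in (3.13), reverses the result to recover \( {A}_1(t) \) and \( {A}_2(t) \), and then integrates the linear system (3.10) backward from its terminal condition; (3.11) can be handled in the same way.

```python
# Euler integration of the block-recursive system (3.9)-(3.11), following the
# time-reversal in (3.12)-(3.13): A*(t) = A(T - t). All parameter values are
# illustrative assumptions.
import numpy as np

n1, n2 = 3, 2                       # numbers of type 1 and type 2 firms
c1, c2 = 1.0, 1.5                   # investment cost coefficients
b1, b2 = 0.2, 0.3                   # revenue curvature parameters
alpha1, alpha2 = 1.0, 0.8           # revenue slope parameters
rho1, rho2 = 0.1, 0.2               # unit investment costs
r, delta, sigma = 0.05, 0.1, 0.2    # discount, depreciation, volatility
q1, q2, q3 = -0.05, 0.5, 0.1        # terminal payment coefficients
T, dt = 5.0, 1e-3
steps = int(T / dt)

# Step 1: integrate the reversed system (3.12) forward from A*(0) = q1.
A1 = np.empty(steps + 1); A2 = np.empty(steps + 1)
A1[0] = A2[0] = q1
for k in range(steps):
    A1[k + 1] = A1[k] + ((4 * n1 - 2) / c1 * A1[k] ** 2
                         + 4 * n2 / c2 * A1[k] * A2[k]
                         - (r + 2 * delta - sigma ** 2) * A1[k] - b1) * dt
    A2[k + 1] = A2[k] + ((4 * n2 - 2) / c2 * A2[k] ** 2
                         + 4 * n1 / c1 * A1[k] * A2[k]
                         - (r + 2 * delta - sigma ** 2) * A2[k] - b2) * dt

# Recover A1(t) = A1*(T - t) and A2(t) = A2*(T - t) on the original clock.
A1_t, A2_t = A1[::-1], A2[::-1]

# Step 2: with A1(t), A2(t) known, (3.10) is linear in B1, B2; integrate it
# backward in t from the terminal condition B1(T) = B2(T) = q2.
B1 = np.empty(steps + 1); B2 = np.empty(steps + 1)
B1[steps] = B2[steps] = q2
for k in range(steps, 0, -1):
    dB1 = (((r + delta) - (4 * n1 / c1 - 2 / c1) * A1_t[k]
            - 2 * n2 / c2 * A2_t[k]) * B1[k] - 2 * n2 / c2 * A1_t[k] * B2[k]
           + 2 * (n1 * rho1 / c1 + n2 * rho2 / c2) * A1_t[k] - alpha1)
    dB2 = (((r + delta) - (4 * n2 / c2 - 2 / c2) * A2_t[k]
            - 2 * n1 / c1 * A1_t[k]) * B2[k] - 2 * n1 / c1 * A2_t[k] * B1[k]
           + 2 * (n1 * rho1 / c1 + n2 * rho2 / c2) * A2_t[k] - alpha2)
    B1[k - 1] = B1[k] - dB1 * dt
    B2[k - 1] = B2[k] - dB2 * dt
```

With \( {A}_1(t) \), \( {A}_2(t) \), \( {B}_1(t) \) and \( {B}_2(t) \) in hand, (3.11) is a pair of independent linear equations in \( {C}_1(t) \) and \( {C}_2(t) \) that can be integrated backward by the same Euler step.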

Using Proposition 3.1 and (3.6 and 3.7), the game equilibrium strategies can be obtained and the market equilibrium characterized.

3.2 Cooperative Development of Technical Knowledge

Now we consider the case when the firms agree to act cooperatively and seek higher expected profits. They agree to maximize their expected joint profit and share the excess of cooperative profits over noncooperative profits equally. To maximize their expected joint profits the firms maximize

$$ \begin{array}{l}E\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle {\int}_{\kern0.0em 0}^{\kern0.0em T}}{\displaystyle \sum_{h\in {N}_1}}\left\{{\alpha}_1K(s)-{b}_1{\left[K(s)\right]}^2-{\rho}_1{I}_h^{(1)}(s)-\left({c}_1/2\right){\left[{I}_h^{(1)}(s)\right]}^2\right\}{e}^{-rs}ds\\ {}+{\displaystyle {\int}_{\kern0.0em 0}^{\kern0.0em T}}{\displaystyle \sum_{k\in {N}_2}}\left\{{\alpha}_2K(s)-{b}_2{\left[K(s)\right]}^2-{\rho}_2{I}_k^{(2)}(s)-\left({c}_2/2\right){\left[{I}_k^{(2)}(s)\right]}^2\right\}{e}^{-rs}ds\\ {}+\left({n}_1+{n}_2\right){e}^{-rT}\left[{q}_1{\left(K(T)\right)}^2+{q}_2K(T)+{q}_3\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right|\left.K(0)={K}_0\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(3.14)

subject to dynamics (3.1).

Following the analysis in (1.6, 1.7, 1.8, 1.9 and 1.10) in Sect. 12.1, the corresponding stochastic dynamic programming equation can be obtained as:

$$ \begin{array}{l}-{W}_t\left(t,K\right)-\frac{1}{2}{W}_{KK}\left(t,K\right){\sigma}^2{K}^2\\ {}=\underset{I_1^{(1)},{I}_2^{(1)},\cdots, {I}_{n_1}^{(1)};{I}_{n_1+1}^{(2)},{I}_{n_1+2}^{(2)},\cdots, {I}_{n_1+{n}_2}^{(2)}}{ \max}\left\{{\displaystyle \sum_{h\in {N}_1}}\left[{\alpha}_1K-{b}_1{K}^2-{\rho}_1{I}_h^{(1)}-\left({c}_1/2\right){\left({I}_h^{(1)}\right)}^2\right]{e}^{-rt}\right.\\ {}+{\displaystyle \sum_{k\in {N}_2}}\left[{\alpha}_2K-{b}_2{K}^2-{\rho}_2{I}_k^{(2)}-\left({c}_2/2\right){\left({I}_k^{(2)}\right)}^2\right]{e}^{-rt}\\ {}\left.+{W}_K\left(t,K\right)\left[{\displaystyle \sum_{h\in {N}_1}{I}_h^{(1)}}+{\displaystyle \sum_{k\in {N}_2}{I}_k^{(2)}}-\delta K\right]\right\},\end{array} $$
(3.15)
$$ W\left(T,K\right)=\left({n}_1+{n}_2\right){e}^{-rT}\left({q}_1{K}^2+{q}_2K+{q}_3\right). $$
(3.16)

Performing the maximization operator in (3.15) yields:

$$ \begin{array}{l}{I}_i^{(1)}=\frac{W_K\left(t,K\right){e}^{rt}-{\rho}_1}{c_1},\kern0.36em \mathrm{for}\;i\in {N}_1;\kern0.24em \mathrm{and}\\ {}{I}_j^{(2)}=\frac{W_K\left(t,K\right){e}^{rt}-{\rho}_2}{c_2},\kern0.24em \mathrm{for}\kern0.24em j\in {N}_2.\end{array} $$
(3.17)

The expected joint payoff of the firms can be obtained as:

Proposition 3.2

The value function W(t, K), which reflects the maximized expected joint payoff at time t given the level of technology K is

$$ W\left(t,K\right)=\left[A(t){K}^2+B(t)K+C(t)\right]{e}^{-rt}, $$
(3.18)

where the values of A(t), B(t) and C(t) are generated by the following block recursive ordinary differential equations:

$$ \begin{array}{l}\overset{.}{A}(t)=\left(r+2\delta -{\sigma}^2\right)A(t)-2\left(\frac{n_1}{c_1}+\frac{n_2}{c_2}\right){\left[A(t)\right]}^2+{n}_1{b}_1+{n}_2{b}_2,\\ {}A(T)=\left({n}_1+{n}_2\right){q}_1;\end{array} $$
(3.19)
$$ \begin{array}{l}\overset{.}{B}(t)=\left[r+\delta -2\left(\frac{n_1}{c_1}+\frac{n_2}{c_2}\right)A(t)\right]B(t)+2\left[\frac{n_1{\rho}_1}{c_1}+\frac{n_2{\rho}_2}{c_2}\right]A(t)-{n}_1{\alpha}_1-{n}_2{\alpha}_2,\\ {}B(T)=\left({n}_1+{n}_2\right){q}_2;\end{array} $$
(3.20)
$$ \begin{array}{l}\overset{.}{C}(t)=rC(t)-\frac{n_1}{2{c}_1}{\left[B(t)-{\rho}_1\right]}^2-\frac{n_2}{2{c}_2}{\left[B(t)-{\rho}_2\right]}^2,\\ {}C(T)=\left({n}_1+{n}_2\right){q}_3.\end{array} $$
(3.21)

Proof

Follow the proof of Proposition 3.1. ■

Using (3.17) and Proposition 3.2 the optimal technology accumulation dynamics can be expressed as:

$$ \begin{array}{l}dK(s)=\left[\frac{n_1}{c_1}\left[2A(s)K(s)+B(s)-{\rho}_1\right]+\frac{n_2}{c_2}\left[2A(s)K(s)+B(s)-{\rho}_2\right]-\delta K(s)\right]ds\\ {}+\sigma K(s)dz(s),\kern1em K(0)={K}_0.\end{array} $$
(3.22)

We use \( {X}_s^{*} \) to denote the set of realizable values of \( {K}^{*}(s) \) generated by (3.22) at time s. The term \( {K}_s^{*}\in {X}_s^{*} \) is used to denote an element in \( {X}_s^{*} \).

With the firms agreeing to share the excess of cooperative profits over noncooperative profits equally the imputation vector becomes

$$ \begin{array}{ll}{\xi}^{(1)i}\left(s,{K}_s^{*}\right)=& {V}^{(1)i}\left(s,{K}_s^{*}\right)+\frac{1}{n_1+{n}_2}\left[W\left(s,{K}_s^{*}\right)\right.\hfill \\ {}& \left.-{\displaystyle \sum_{h\in {N}_1}}{V}^{(1)h}\left(s,{K}_s^{*}\right)-{\displaystyle \sum_{k\in {N}_2}}{V}^{(2)k}\left(s,{K}_s^{*}\right)\right],\kern0.5em \mathrm{for}\ \mathrm{type}\ 1\ \mathrm{firm}\;i\in {N}_1;\hfill \end{array} $$
$$ \begin{array}{ll}{\xi}^{(2)j}\left(s,{K}_s^{*}\right)=& {V}^{(2)j}\left(s,{K}_s^{*}\right)+\frac{1}{n_1+{n}_2}\left[W\left(s,{K}_s^{*}\right)\right.\hfill \\ {}& \left.-{\displaystyle \sum_{h\in {N}_1}}{V}^{(1)h}\left(s,{K}_s^{*}\right)-{\displaystyle \sum_{k\in {N}_2}}{V}^{(2)k}\left(s,{K}_s^{*}\right)\right],\ \mathrm{for}\ \mathrm{type}\ 2\ \mathrm{firm}\;j\in {N}_2;\hfill \end{array} $$
(3.23)

at time instant \( s\in \left[0,T\right] \) if the state of technology is \( {K}_s^{*}\in {X}_s^{*} \).

Invoking Theorem 1.1, a PDP for firm \( i\in {N}_1 \) and firm \( j\in {N}_2 \) with a terminal payment \( \left[{q}_1{\left({K}_T^{*}\right)}^2+{q}_2{K}_T^{*}+{q}_3\right] \) at time T and an instantaneous payment (in present value) at time \( s\in \left[0,T\right] \) equalling

\( \begin{array}{l}{B}_i^{(1)}\left(s,{K}_s^{*}\right){e}^{-rs}=-{\xi}_s^{(1)i}\left(s,{K}_s^{*}\right)\\ {}-{\xi}_{K_s}^{(1)i}\left(s,{K}_s^{*}\right)\left[\frac{n_1}{c_1}\left[2A(s){K}_s^{*}+B(s)-{\rho}_1\right]+\frac{n_2}{c_2}\left[2A(s){K}_s^{*}+B(s)-{\rho}_2\right]-\delta {K}_s^{*}\right]\\ {}-\frac{1}{2}{\sigma}^2{\left({K}_s^{*}\right)}^2{\xi}_{K_s{K}_s}^{(1)i}\left(s,{K}_s^{*}\right),\kern0.29em \mathrm{given}\kern0.17em \mathrm{to}\kern0.17em \mathrm{the}\kern0.17em \mathrm{type}\kern0.17em 1\kern0.17em \mathrm{firm}\kern0.29em i\in {N}_1;\end{array} \)

and

\( \begin{array}{l}{B}_j^{(2)}\left(s,{K}_s^{*}\right){e}^{-rs}=-{\xi}_s^{(2)j}\left(s,{K}_s^{*}\right)\\ {}-{\xi}_{K_s}^{(2)j}\left(s,{K}_s^{*}\right)\left[\frac{n_1}{c_1}\left[2A(s){K}_s^{*}+B(s)-{\rho}_1\right]+\frac{n_2}{c_2}\left[2A(s){K}_s^{*}+B(s)-{\rho}_2\right]-\delta {K}_s^{*}\right]\\ {}-\frac{1}{2}{\sigma}^2{\left({K}_s^{*}\right)}^2{\xi}_{K_s{K}_s}^{(2)j}\left(s,{K}_s^{*}\right),\kern0.29em \mathrm{given}\kern0.17em \mathrm{to}\kern0.17em \mathrm{the}\kern0.17em \mathrm{type}\kern0.17em 2\kern0.17em \mathrm{firm}\kern0.29em j\in {N}_2;\end{array} \)

would lead to the realization of the imputation \( \xi \left(s,{K}_s^{*}\right) \) in (3.23) and hence a subgame consistent scheme.

The terms \( {\xi}_s^{\left(\omega \right){i}_{\omega }}\left(s,{K}_s^{*}\right),{\xi}_{K_s}^{\left(\omega \right){i}_{\omega }}\left(s,{K}_s^{*}\right) \) and \( {\xi}_{K_s{K}_s}^{\left(\omega \right){i}_{\omega }}\left(s,{K}_s^{*}\right) \), for \( \omega \in \left\{1,2\right\} \) and \( {i}_{\omega}\in {N}_{\omega } \), can be obtained readily using Proposition 3.1, Proposition 3.2 and (3.23).

Moreover, the game (3.1, 3.2 and 3.3) can be extended to include the case with more than two types of firms. Finally, it is worth noting that the payoff structures and state dynamics of the game (3.1, 3.2 and 3.3) encompass those of the existing dynamic games of public goods provision. For instance, Fershtman and Nitzan (1991) is the case where \( {n}_1=n,\kern0.24em {n}_2=0,\kern0.24em {\rho}_1={\rho}_2=0 \) and \( \sigma =0 \). Wirl (1996) is the case where \( {n}_1=2,\;{n}_2=0,\;{\rho}_1={\rho}_2=0 \) and \( \sigma =0 \). Wang and Ewald (2010) is the case where \( {n}_1=2,\;{n}_2=0 \) and \( {\rho}_1={\rho}_2=0 \). Dockner et al. (2000) is the case where \( {n}_1=1,\;{n}_2=1,\;{b}_1={b}_2=1,\;{\rho}_1={\rho}_2=\rho,\;{c}_1={c}_2=1 \) and \( \sigma =0 \).

4 Infinite Horizon Analysis

In this section, we consider the case when the planning horizon approaches infinity, that is \( T\to \infty \). The objective of agent \( i\in N \) is to maximize its expected payoff

$$ E\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle {\int}_{\kern0.0em 0}^{\infty }}\left\{{R}_i\left[K(s)\right]-{C}_i\left[{I}_i(s)\right]\right\}{e}^{-rs}ds\left|\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.K(0)={K}_0\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\} $$
(4.1)

subject to dynamics (1.1).

The corresponding Hamilton-Jacobi-Bellman equations in current value formulation characterizing a feedback solution of the infinite horizon problem (1.1) and (4.1) are (see Theorem 5.1 in Chap. 3):

$$ \begin{array}{l}r{V}^i(K)-\frac{1}{2}{V}_{KK}^i(K){\sigma}^2{K}^2=\underset{I_i}{ \max}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}_i(K)-{C}_i\left({I}_i\right)\right]\\ {}+{V}_K^i(K)\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_j(K)+{I}_i-\delta K}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\kern0.0em \left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\;\mathrm{f}\mathrm{o}\mathrm{r}\;i\in N,\end{array} $$
(4.2)

Performing the maximization operator in (4.2) yields:

$$ d{C}_i\left({I}_i\right)/d{I}_i={V}_K^i(K),\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\;i\in N $$
(4.3)

Condition (4.3) reflects that in a non-cooperative equilibrium the marginal cost of investment of agent i will be equal to the agent’s implicit marginal valuation/benefit of the productive stock in the infinite horizon case.
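As a minimal numerical check of condition (4.3), one can assume a quadratic investment cost \( {C}_i\left({I}_i\right)={c}_i{I}_i^2 \) (an illustrative specification, similar to the one used in the illustration of Sect. 4.2) and a given marginal valuation \( {V}_K^i \); the first-order condition \( d{C}_i/d{I}_i={V}_K^i \) then has the closed form \( {I}_i={V}_K^i/\left(2{c}_i\right) \). The values of \( {c}_i \) and \( {V}_K^i \) below are hypothetical.

```python
# Condition (4.3) under an assumed quadratic cost C_i(I) = c_i * I^2:
# at the optimal I, marginal cost equals the marginal valuation V_K.
c_i, VK = 2.0, 1.2                   # illustrative values (assumptions)
cost = lambda I: c_i * I ** 2        # quadratic investment cost
I_star = VK / (2 * c_i)              # closed form solving dC_i/dI = V_K
h = 1e-6                             # central-difference step
marginal_cost = (cost(I_star + h) - cost(I_star - h)) / (2 * h)
# marginal_cost recovers V_K at I_star, confirming the first-order condition
```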

4.1 Subgame Consistent Cooperative Provision

Consider the case when the agents agree to act cooperatively and seek higher gains. They agree to maximize their expected joint gain and distribute the cooperative gain according to the imputation vector

$$ \xi (K)=\left[{\xi}^1(K),{\xi}^2(K),\cdots, {\xi}^n(K)\right]\;\mathrm{when}\ \mathrm{the}\ \mathrm{state}\ \mathrm{is}\;K. $$
(4.4)

To maximize their expected joint gains the agents maximize

$$ \underset{\left\{{I}_1(s),{I}_2(s),\cdots, {I}_n(s)\right\}}{ \max }E\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.}{\displaystyle {\int}_{\kern0.0em 0}^{\infty }}\left\{{R}_j\left[K(s)\right]-{C}_j\left[{I}_j(s)\right]\right\}{e}^{-rs}ds\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\} $$
(4.5)

subject to dynamics (1.1).

Invoking stochastic dynamic programming techniques, an optimal solution to the stochastic control problem (1.1) and (4.5) can be characterized by the following set of equations (see Theorem A.4 in the Technical Appendices):

$$ \begin{array}{l}rW(K)-\frac{1}{2}{W}_{KK}(K){\sigma}^2{K}^2\\ {}=\underset{I_1,{I}_2,\cdots, {I}_n}{ \max}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.}\left[{R}_j(K)-{C}_j\left({I}_j\right)\right]+{W}_K(K)\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n{I}_j-\delta K}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\kern0.0em \left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\}.\end{array} $$
(4.6)

In particular, W(K) gives the maximized expected joint payoff of the n agents given that the level of the productive stock is K. Let \( {\psi}_j^{*}(K) \), for \( j\in N \), denote the game equilibrium investment strategy of agent j; the optimal trajectory of the public good can be expressed as:

$$ dK(s)=\left[{\displaystyle \sum_{j=1}^n{\psi}_j^{*}\left(K(s)\right)-\delta K(s)}\right]ds+\sigma K(s)dz(s), $$
(4.7)

for \( K(0)={K}_0 \).

We use X* to denote the set of realizable values of K generated by (4.7). The term \( {K}^{*}\in {X}^{*} \) is used to denote an element in X*.

Following the analysis in Theorem 5.3 in Chap. 3, we formulate a Payoff Distribution Procedure (PDP) so that the agreed-upon imputation (4.4) can be realized. Let \( {B}_i\left({K}^{*}\right) \) denote the payment that agent i will receive under the cooperative agreement if \( {K}^{*} \) is realized.

A theorem characterizing a formula for \( {B}_i\left({K}^{*}\right) \), for \( i\in N \), which yields (4.4) is provided below.

Theorem 4.1

A PDP with an instantaneous payment equaling

$$ {B}_i\left({K}^{*}\right)=r{\xi}^i\left({K}^{*}\right)-{\xi}_K^i\left({K}^{*}\right)\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n{\psi}_j^{*}\left({K}^{*}\right)-\delta {K}^{*}}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]-\frac{1}{2}{\sigma}^2{\left({K}^{*}\right)}^2{\xi}_{KK}^i\left({K}^{*}\right),\kern0.62em \mathrm{f}\mathrm{o}\mathrm{r}\;i\in N,\;\mathrm{given}\ \mathrm{that}\ \mathrm{the}\ \mathrm{state}\ \mathrm{is}\;{K}^{*}\in {X}^{*} $$
(4.8)

would lead to the realization of the imputation ξ(K*) in (4.4).

Proof

See Theorem 5.3 in Chap. 3. ■

Note that the payoff distribution procedure in Theorem 4.1 would give rise to the agreed-upon imputation in (4.4) and therefore subgame consistency is satisfied.

When all agents are using the cooperative strategies and the state equals K*, the payoff that agent i will directly receive is

$$ {R}_i\left({K}^{*}\right)-{C}_i\left[{\psi}_i\left({K}^{*}\right)\right]. $$

However, according to the agreed-upon imputation, agent i is to receive \( {B}_i\left({K}^{*}\right) \). Therefore a transfer payment

$$ {\varpi}_i\left({K}^{*}\right)={B}_i\left({K}^{*}\right)-\left\{{R}_i\left({K}^{*}\right)-{C}_i\left[{\psi}_i\left({K}^{*}\right)\right]\right\}. $$
(4.9)

will be imputed to agent \( i\in N \).

4.2 Infinite Horizon Public Capital Goods Provision: An Illustration

In this section we consider the infinite horizon game of public capital goods provision in which the expected payoff to agent \( i\in N \) is:

$$ E\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle {\int}_{\kern0.0em 0}^{\infty }}\left\{{\alpha}_iK(s)-{c}_i{\left[{I}_i(s)\right]}^2\right\}{e}^{-rs}ds\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right|\left.K(0)={K}_0\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N. $$
(4.10)

The accumulation dynamics of the public capital stock is governed by (2.1).

Setting up the corresponding Hamilton-Jacobi-Bellman equations according to (4.2) and performing the maximization operator yields:

$$ {I}_i=\frac{V_K^i(K)}{2{c}_i},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N. $$

The value functions which reflect the expected noncooperative payoffs of the agents can be obtained as:

Proposition 4.1

The value function reflecting the expected noncooperative payoff of agent i is:

$$ {V}^i(K)=\left({A}_iK+{C}_i\right),\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N; $$
(4.11)

where \( {A}_i=\frac{\alpha_i}{\left(r+\delta \right)} \), and

$$ {C}_i=\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\frac{A_i{A}_j}{2{c}_jr}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]-\frac{{\left({A}_i\right)}^2}{4{c}_ir}. $$

Proof

Following the derivation of Proposition 2.1, one can obtain the value function as in (4.11). ■

Consider the case when the agents agree to act cooperatively and seek higher gains. They agree to maximize their expected joint gain and distribute the cooperative gain proportional to their non-cooperative gains. To maximize their expected joint gains the agents maximize

$$ E\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle {\int}_{\kern0.0em 0}^{\infty }{\displaystyle \sum_{j=1}^n}}\left\{{\alpha}_jK(s)-{c}_j{\left[{I}_j(s)\right]}^2\right\}{e}^{-rs}ds\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right|\left.K(0)={K}_0\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\} $$
(4.12)

subject to dynamics (2.1).

Performing the maximization operator in (4.12) yields:

$$ {I}_i=\frac{W_K(K)}{2{c}_i},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\;i\in N. $$

The value function W(K), which reflects the maximized expected joint profit of the n agents, can be obtained as:

Proposition 4.2

$$ W(K)=\left[AK+C\right], $$
(4.13)

where \( A={\displaystyle \sum_{j=1}^n}\frac{\alpha_j}{\left(r+\delta \right)} \) and \( C={\displaystyle \sum_{j=1}^n}\frac{(A)^2}{4{c}_jr}. \)

Proof

Following the derivation of Proposition 2.2, one can obtain the value function as in (4.13). ■

With the agents agreeing to distribute their gains proportional to their non-cooperative gains, the imputation vector becomes

$$ {\xi}^i\left({K}^{*}\right)=\frac{V^i\left({K}^{*}\right)}{{\displaystyle \sum_{j=1}^n}{V}^j\left({K}^{*}\right)}W\left({K}^{*}\right)=\frac{\left({A}_i{K}^{*}+{C}_i\right)}{{\displaystyle \sum_{j=1}^n\left({A}_j{K}^{*}+{C}_j\right)}}\left(A{K}^{*}+C\right), $$
(4.14)

for \( i\in N \) if the public capital stock is \( {K}^{*}\in {X}^{*} \).

To guarantee dynamical stability in a dynamic cooperation scheme, the solution has to satisfy the property of subgame consistency which requires the satisfaction of (4.14). Following Theorem 4.1 we can obtain the PDP that brings about a subgame consistent solution with instantaneous payments:

$$ \begin{array}{l}{B}_i\left({K}^{*}\right)=\frac{r\left({A}_i{K}^{*}+{C}_i\right)}{{\displaystyle \sum_{j=1}^n\left({A}_j{K}^{*}+{C}_j\right)}}\left(A{K}^{*}+C\right)\\ {}-{\xi}_K^i\left({K}^{*}\right)\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\frac{A}{2{c}_j}-\delta {K}^{*}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]-\frac{1}{2}{\sigma}^2{\left({K}^{*}\right)}^2{\xi}_{KK}^i\left({K}^{*}\right),\end{array} $$
(4.15)

where

$$ \begin{array}{l}{\xi}_K^i\left({K}^{*}\right)=\frac{\left[{A}_i\left(A{K}^{*}+C\right)+\left({A}_i{K}^{*}+{C}_i\right)A\right]{\displaystyle \sum_{j=1}^n\left({A}_j{K}^{*}+{C}_j\right)}-\left({A}_i{K}^{*}+{C}_i\right)\left(A{K}^{*}+C\right){\displaystyle \sum_{j=1}^n{A}_j}}{{\left[{\displaystyle \sum_{j=1}^n\left({A}_j{K}^{*}+{C}_j\right)}\right]}^2},\kern0.24em \mathrm{and}\\ {}{\xi}_{KK}^i\left({K}^{*}\right)=d{\xi}_K^i\left({K}^{*}\right)/d{K}^{*},\end{array} $$

for \( i\in N \) if the public capital stock is \( {K}^{*}\in {X}^{*} \).

Therefore a transfer payment

$$ {\varpi}_i\left({K}^{*}\right)={B}_i\left({K}^{*}\right)-\left[{\alpha}_i{K}^{*}-\frac{A^2}{4{c}_i}\right] $$

will be imputed to agent \( i\in N \).
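The scheme of this illustration can be sketched numerically as follows, with purely illustrative parameter values (none are taken from the text): the sketch builds the noncooperative coefficients of Proposition 4.1, the joint coefficients of Proposition 4.2, the proportional imputation (4.14), and the PDP payments (4.15) with the derivatives \( {\xi}_K^i \) and \( {\xi}_{KK}^i \) approximated by central differences.

```python
# Infinite-horizon public capital illustration (Sect. 4.2) with assumed
# parameter values; imputations follow (4.14) and PDP payments follow (4.15).
import numpy as np

alpha = np.array([1.0, 0.8, 0.6])    # alpha_i (assumed)
c = np.array([1.0, 1.5, 2.0])        # c_i (assumed)
r, delta, sigma = 0.05, 0.1, 0.2
n = len(alpha)

# Proposition 4.1: noncooperative coefficients A_i and C_i
A_i = alpha / (r + delta)
C_i = np.array([(A_i[i] * A_i / (2 * c * r)).sum()
                - A_i[i] ** 2 / (4 * c[i] * r) for i in range(n)])

# Proposition 4.2: joint coefficients A and C
A = A_i.sum()
C = (A ** 2 / (4 * c * r)).sum()

def W(K):                       # maximized expected joint payoff
    return A * K + C

def xi(i, K):                   # proportional imputation (4.14)
    return (A_i[i] * K + C_i[i]) / (A_i * K + C_i).sum() * W(K)

def pdp_payment(i, K, h=1e-4):  # PDP (4.15); derivatives by differences
    xK = (xi(i, K + h) - xi(i, K - h)) / (2 * h)
    xKK = (xi(i, K + h) - 2 * xi(i, K) + xi(i, K - h)) / h ** 2
    drift = (A / (2 * c)).sum() - delta * K   # cooperative drift of (2.1)
    return r * xi(i, K) - xK * drift - 0.5 * sigma ** 2 * K ** 2 * xKK

K_star = 2.0
shares = [xi(i, K_star) for i in range(n)]
payments = [pdp_payment(i, K_star) for i in range(n)]
```

By construction the imputations exhaust the joint payoff, \( {\displaystyle \sum_{i=1}^n}{\xi}^i\left({K}^{*}\right)=W\left({K}^{*}\right) \), which serves as a simple consistency check on the computation.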

5 Public Goods Provision Under Accumulation and Payoff Uncertainties

This Section considers cooperative provision of public goods by asymmetric agents in a discrete-time dynamic game framework with uncertainties in stock accumulation dynamics and future payoff structures. One of the major hindrances for dynamic cooperation in public goods provision is the uncertainty in the future gains. This section resolves the problem with subgame consistent schemes. The analytical framework and the non-cooperative outcome of public goods provision are provided in Sect. 12.5.1. Details of a Pareto optimal cooperative scheme are presented in Sect. 12.5.2. A payment mechanism ensuring subgame consistency is derived in Sect. 12.5.3.

5.1 Analytical Framework and Non-cooperative Outcome

Consider the case of the provision of a public good in which a group of n agents carry out a project by making contributions to the building up of the stock of a productive public good. The game involves T stages of operation and after the T stages each agent receives a terminal payment in stage \( T+1 \). We use \( {K}_t \) to denote the level of the productive stock and \( {I}_t^i \) the public capital investment by agent i at stage \( t\in \left\{1,2,\cdots, T\right\} \). The stock accumulation dynamics is governed by the stochastic difference equation:

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n{I}_t^j-\delta {K}_t}+{\vartheta}_t,\kern0.24em {K}_1={K}^0, $$
(5.1)

for \( t\in \left\{1,2,\cdots, T\right\} \),

where δ is the depreciation rate and \( {\vartheta}_t \) is a sequence of statistically independent random variables.
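A single simulated path of the accumulation dynamics (5.1) can be sketched as follows, with an assumed constant investment rule and i.i.d. zero-mean uniform shocks; all numbers are illustrative assumptions.

```python
# One simulated path of the stock accumulation dynamics (5.1); the constant
# investments and the shock distribution are illustrative assumptions.
import random

random.seed(7)                           # reproducible path
n, T, delta, K0 = 3, 10, 0.1, 5.0
I = [0.4] * n                            # assumed constant investments I_t^j
K = [K0]                                 # K[0] corresponds to K_1 = K^0
for t in range(T):
    vartheta = random.uniform(-0.2, 0.2)  # i.i.d. zero-mean shock vartheta_t
    K.append(K[-1] + sum(I) - delta * K[-1] + vartheta)
```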

The payoff of agent i at stage t is affected by a random variable \( {\theta}_t \). In particular, the payoff to agent i at stage t is

$$ {R}^i\left({K}_t,{\theta}_t\right)-{C}^i\left({I}_t^i,{\theta}_t\right),\kern0.24em i\in \left\{1,2,\cdots, n\right\}=N, $$
(5.2)

where \( {R}^i\left({K}_t,{\theta}_t\right) \) is the revenue/payoff to agent i, \( {C}^i\left({I}_t^i,{\theta}_t\right) \) is the cost of investing \( {I}_t^i\in {X}^i \), and \( {\theta}_t \) for \( t\in \left\{1,2,\cdots, T\right\} \) are independent discrete random variables with range \( \left\{{\theta}_t^1,{\theta}_t^2,\cdots, {\theta}_t^{\eta_t}\right\} \) and corresponding probabilities \( \left\{{\lambda}_t^1,{\lambda}_t^2,\cdots, {\lambda}_t^{\eta_t}\right\} \), where \( {\eta}_t \) is a positive integer for \( t\in \left\{1,2,\cdots, T\right\} \). In stage 1, it is known that \( {\theta}_1 \) equals \( {\theta}_1^1 \) with probability \( {\lambda}_1^1=1 \).
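The expectation over the discrete shock in the stage payoff (5.2) can be illustrated with a small sketch; the revenue and cost specifications below, \( {R}^i\left(K,\theta \right)=\theta K \) and \( {C}^i\left(I,\theta \right)=c{I}^2 \), as well as the range and probabilities of \( {\theta}_t \), are hypothetical.

```python
# Expected stage payoff (5.2) for an assumed specification: hypothetical
# revenue R^i(K, theta) = theta*K and cost C^i(I, theta) = c*I^2, with
# theta_t discrete over an assumed range with assumed probabilities.
theta_vals = [0.8, 1.0, 1.3]         # assumed range of theta_t
theta_prob = [0.3, 0.5, 0.2]         # corresponding probabilities lambda_t
K, I, c = 10.0, 1.0, 2.0             # illustrative stock, investment, cost
expected_payoff = sum(p * (th * K - c * I ** 2)
                      for th, p in zip(theta_vals, theta_prob))
```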

Marginal revenue product of the productive stock is positive, that is \( \partial {R}^i\left({K}_t,{\theta}_t\right)/\partial {K}_t>0 \), before a saturation level \( \overline{K} \) has been reached; and marginal cost of investment is positive and non-decreasing, that is \( \partial {C}^i\left({I}_t^i,{\theta}_t\right)/\partial {I}_t^i>0 \) and \( {\partial}^2{C}^i\left({I}_t^i,{\theta}_t\right)/\partial {\left({I}_t^i\right)}^2\ge 0 \).

The objective of agent \( i\in N \) is to maximize its expected net revenue over the planning horizon, that is

$$ \begin{array}{l}{E}_{\theta_1,{\theta}_2,\cdots, {\theta}_T;{\vartheta}_1,{\vartheta}_2,\cdots, {\vartheta}_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{s=1}^T}\left[{R}^i\left({K}_s,{\theta}_s\right)-{C}^i\left({I}_s^i,{\theta}_s\right)\right]{\left(1+r\right)}^{-\left(s-1\right)}\\ {}+{q}^i\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\}\end{array} $$
(5.3)

subject to the stock accumulation dynamics (5.1),

where \( {E}_{\theta_1,{\theta}_2,\cdots, {\theta}_T;{\vartheta}_1,{\vartheta}_2,\cdots, {\vartheta}_T} \) is the expectation operation with respect to the random variables \( {\theta}_1,{\theta}_2,\cdots, {\theta}_T \) and \( {\vartheta}_1,{\vartheta}_2,\cdots, {\vartheta}_T \); r is the discount rate, and \( {q}^i\left({K}_{T+1}\right)\ge 0 \) is an amount conditional on the productive stock that agent i would receive at stage \( T+1 \). Since there is no uncertainty in stage \( T+1 \), we use \( {\theta}_{T+1}^1 \) to denote the condition in stage \( T+1 \) with probability \( {\lambda}_{T+1}^1=1 \).

To solve the game, we follow the analysis in Chap. 9 and begin with the subgame starting at the last operating stage, that is stage T. If \( {\theta}_T^{\sigma_T}\in \left\{{\theta}_T^1,{\theta}_T^2,\cdots, {\theta}_T^{\eta_T}\right\} \) has occurred at stage T and the public capital stock is \( {K}_T=K \), the subgame becomes:

$$ \begin{array}{l}\underset{I_T^i}{ \max }{E}_{\vartheta_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left({K}_T,{\theta}_T^{\sigma_T}\right)-{C}^i\left({I}_T^i,{\theta}_T^{\sigma_T}\right)\right]{\left(1+r\right)}^{-\left(T-1\right)}\\ {}+{q}^i\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\;i\in N\end{array} $$
(5.4)

subject to

$$ {K}_{T+1}={K}_T+{\displaystyle \sum_{j=1}^n{I}_T^j-\delta {K}_T}+{\vartheta}_T,{K}_T=K. $$
(5.5)

The subgame (5.4 and 5.5) is a stochastic dynamic game. Invoking the standard techniques for solving stochastic dynamic games, a characterization of the feedback Nash equilibrium is provided in the following lemma.

Lemma 5.1

A set of strategies \( {\phi}_T^{\left({\sigma}_T\right)*}(K)=\left\{{\phi}_T^{\left({\sigma}_T\right)1*}(K),{\phi}_T^{\left({\sigma}_T\right)2*}(K),\cdots \cdots, {\phi}_T^{\left({\sigma}_T\right)n*}(K)\right\} \) provides a Nash equilibrium solution to the subgame (5.4 and 5.5) if there exist functions \( {V}^{\left({\sigma}_T\right)i}\left(t,K\right) \), for \( i\in N \) and \( t\in \left\{T,T+1\right\} \), such that the following conditions are satisfied:

$$ \begin{array}{l}{V}^{\left({\sigma}_T\right)i}\left(T,K\right)=\underset{I_T^i}{ \max }{E}_{\vartheta_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left({K}_T,{\theta}_T^{\sigma_T}\right)-{C}^i\left({I}_T^i,{\theta}_T^{\sigma_T}\right)\right]{\left(1+r\right)}^{-\left(T-1\right)}\\ {}+{V}^{\left({\sigma}_{T+1}\right)i}\left[T+1,K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_T^{\left({\sigma}_T\right)j*}(K)+{I}_T^i-\delta K}+{\vartheta}_T\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\\ {}{V}^{\left({\sigma}_{T+1}\right)i}\left(T+1,K\right)={q}^i(K){\left(1+r\right)}^{-T};\kern0.36em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N\end{array} $$
(5.6)

Proof

The system of equations in (5.6) satisfies the standard stochastic dynamic programming property and the Nash property for each agent \( i\in N \). Hence a Nash equilibrium of the subgame (5.4 and 5.5) is characterized. Details of the proof of the results can be found in Theorem 4.1 in Chap. 7. ■

Using Lemma 5.1, one can characterize the value functions \( {V}^{\left({\sigma}_T\right)i}\left(T,K\right) \) for all \( {\sigma}_T\in \left\{1,2,\cdots, {\eta}_T\right\} \) if they exist. In particular, \( {V}^{\left({\sigma}_T\right)i}\left(T,K\right) \) yields agent i’s expected game equilibrium payoff in the subgame starting at stage T given that \( {\theta}_T^{\sigma_T} \) occurs and \( {K}_T=K \).

Then we proceed to the subgame starting at stage \( T-1 \) when \( {\theta}_{T-1}^{\sigma_{T-1}}\in \left\{{\theta}_{T-1}^1,{\theta}_{T-1}^2,\cdots, {\theta}_{T-1}^{\eta_{T-1}}\right\} \) occurs and \( {K}_{T-1}=K \). In this subgame agent \( i\in N \) seeks to maximize his expected payoff

$$ \begin{array}{l}{E}_{\theta_T;{\vartheta}_{T-1},{\vartheta}_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{s=T-1}^T}\left[{R}^i\left({K}_s,{\theta}_s\right)-{C}^i\left({I}_s^i,{\theta}_s\right)\right]{\left(1+r\right)}^{-\left(s-1\right)}\\ {}+{q}^i\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\}\\ {}={E}_{\vartheta_{T-1}}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left({K}_{T-1},{\theta}_{T-1}^{\sigma_{T-1}}\right)-{C}^i\left({I}_{T-1}^i,{\theta}_{T-1}^{\sigma_{T-1}}\right)\right]{\left(1+r\right)}^{-\left(T-2\right)}\\ {}+{\displaystyle \sum_{\sigma_T=1}^{\eta_T}{\lambda}_T^{\sigma_T}}\left[{R}^i\left({K}_T,{\theta}_T^{\sigma_T}\right)-{C}^i\left({I}_T^i,{\theta}_T^{\sigma_T}\right)\right]{\left(1+r\right)}^{-\left(T-2\right)}+{q}^i\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(5.7)

subject to the capital accumulation dynamics

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n{I}_t^j-\delta {K}_t}+{\vartheta}_t,{K}_{T-1}=K,\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em t\in \left\{T-1,T\right\}. $$
(5.8)

If the functions \( {V}^{\left({\sigma}_T\right)i}\left(T,K\right) \) for all \( {\sigma}_T\in \left\{1,2,\cdots, {\eta}_T\right\} \) characterized in Lemma 5.1 exist, the subgame (5.7 and 5.8) can be expressed as a game in which agent i seeks to maximize the expected payoff

$$ \begin{array}{l}{E}_{\vartheta_{T-1}}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left({K}_{T-1},{\theta}_{T-1}\right)-{C}^i\left({I}_{T-1}^i,{\theta}_{T-1}\right)\right]{\left(1+r\right)}^{-\left(T-2\right)}\\ {}+{\displaystyle \sum_{\sigma_T=1}^{\eta_T}{\lambda}_T^{\sigma_T}}{V}^{\left({\sigma}_T\right)i}\left[T,{K}_{T-1}+{\displaystyle \sum_{j=1}^n{I}_{T-1}^j-\delta {K}_{T-1}}+{\vartheta}_{T-1}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\kern0.48em \mathrm{f}\mathrm{o}\mathrm{r}\;i\in N,\end{array} $$
(5.9)

using his control \( {I}_{T-1}^i \).

A Nash equilibrium of the subgame (5.9) can be characterized by the following lemma.

Lemma 5.2

A set of strategies

\( {\phi}_{T-1}^{\left({\sigma}_{T-1}\right)*}(K)=\left\{{\phi}_{T-1}^{\left({\sigma}_{T-1}\right)1*}(K),{\phi}_{T-1}^{\left({\sigma}_{T-1}\right)2*}(K),\cdots, {\phi}_{T-1}^{\left({\sigma}_{T-1}\right)n*}(K)\right\} \) provides a Nash equilibrium solution to the subgame (5.9) if there exist functions \( {V}^{\left({\sigma}_T\right)i}\left(T,{K}_T\right) \) for \( i\in N \) and \( {\sigma}_T\in \left\{1,2,\cdots, {\eta}_T\right\} \) characterized in Lemma 5.1, and functions \( {V}^{\left({\sigma}_{T-1}\right)i}\left(T-1,K\right) \), for \( i\in N \), such that the following conditions are satisfied:

$$ \begin{array}{l}{V}^{\left({\sigma}_{T-1}\right)i}\left(T-1,K\right)=\underset{I_{T-1}^i}{ \max }{E}_{\vartheta_{T-1}}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left({K}_{T-1},{\theta}_{T-1}^{\sigma_{T-1}}\right)-{C}^i\left({I}_{T-1}^i,{\theta}_{T-1}^{\sigma_{T-1}}\right)\right]{\left(1+r\right)}^{-\left(T-2\right)}\\ {}+{\displaystyle \sum_{\sigma_T=1}^{\eta_T}{\lambda}_T^{\sigma_T}}{V}^{\left({\sigma}_T\right)i}\left[T,K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_{T-1}^{\left({\sigma}_{T-1}\right)j*}(K)+{I}_{T-1}^i-\delta K}+{\vartheta}_{T-1}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N.\end{array} $$
(5.10)

Proof

The conditions in Lemma 5.1 and the system of equations in (5.10) satisfy the standard discrete-time stochastic dynamic programming property and the Nash property for each agent \( i\in N \). Hence a Nash equilibrium of the subgame (5.9) is characterized. ■

Using Lemma 5.2, one can characterize the functions \( {V}^{\left({\sigma}_{T-1}\right)i}\left(T-1,K\right) \) for all \( {\theta}_{T-1}^{\sigma_{T-1}}\in \left\{{\theta}_{T-1}^1,{\theta}_{T-1}^2,\cdots, {\theta}_{T-1}^{\eta_{T-1}}\right\} \), if they exist. In particular, \( {V}^{\left({\sigma}_{T-1}\right)i}\left(T-1,K\right) \) yields agent i’s expected game equilibrium payoff in the subgame starting at stage \( T-1 \) given that \( {\theta}_{T-1}^{\sigma_{T-1}} \) occurs and \( {K}_{T-1}=K \).

Consider the subgame starting at stage \( t\in \left\{T-2,T-3,\cdots, 1\right\} \) when \( {\theta}_t^{\sigma_t}\in \left\{{\theta}_t^1,{\theta}_t^2,\cdots, {\theta}_t^{\eta_t}\right\} \) occurs and \( {K}_t=K \), in which agent \( i\in N \) maximizes his expected payoff

$$ \begin{array}{l}{E}_{\vartheta_t}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left(K,{\theta}_t^{\sigma_t}\right)-{C}^i\left({I}_t^i,{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}{V}^{\left({\sigma}_{t+1}\right)i}\left[t+1,K+{\displaystyle \sum_{j=1}^n{I}_t^j-\delta K}+{\vartheta}_t\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\kern0.24em \mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em i\in N,\end{array} $$
(5.11)

subject to the public capital accumulation dynamics

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n{I}_t^j-\delta {K}_t}+{\vartheta}_t,{K}_t=K. $$
(5.12)

A Nash equilibrium solution for the game (5.1, 5.2 and 5.3) can be characterized by the following theorem.

Theorem 5.1

A set of strategies \( {\phi}_t^{\left({\sigma}_t\right)*}(K)=\left\{{\phi}_t^{\left({\sigma}_t\right)1*}(K),{\phi}_t^{\left({\sigma}_t\right)2*}(K),\cdots, {\phi}_t^{\left({\sigma}_t\right)n*}(K)\right\} \), for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \), constitutes a Nash equilibrium solution to the game (5.1, 5.2 and 5.3) if there exist functions \( {V}^{\left({\sigma}_t\right)i}\left(t,K\right) \), for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\},\;t\in \left\{1,2,\cdots, T\right\} \) and \( i\in N \), such that the following recursive relations are satisfied:

$$ \begin{array}{l}{V}^{\left({\sigma}_{T+1}\right)i}\left(T+1,K\right)={q}^i(K){\left(1+r\right)}^{-T},\\ {}{V}^{\left({\sigma}_t\right)i}\left(t,K\right)=\underset{I_t^i}{ \max }{E}_{\vartheta_t}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.\left[{R}^i\left(K,{\theta}_t^{\sigma_t}\right)-{C}^i\left({I}_t^i,{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}{V}^{\left({\sigma}_{t+1}\right)i}\left[t+1,K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_t^{\left({\sigma}_t\right)j*}(K)+{I}_t^i-\delta K}+{\vartheta}_t\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\\ {}\mathrm{f}\mathrm{o}\mathrm{r}\kern0.24em {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\},\;t\in \left\{1,2,\cdots, T\right\}\;\mathrm{and}\kern0.24em i\in N.\end{array} $$
(5.13)

Proof

The results in (5.13) characterizing the game equilibrium in stage T and stage \( T-1 \) are proved in Lemma 5.1 and Lemma 5.2. Invoking the subgames starting at stages \( t\in \left\{1,2,\cdots, T-1\right\} \) as expressed in (5.11 and 5.12), the results in (5.13) satisfy the optimality conditions in stochastic dynamic programming and the Nash equilibrium property for each agent in each of these subgames. Therefore, a feedback Nash equilibrium of the game (5.1, 5.2 and 5.3) is characterized. ■

Hence, the noncooperative outcome of the public capital provision game (5.1, 5.2 and 5.3) can be obtained.
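The backward recursion of Theorem 5.1 can be traced numerically. The sketch below works in a deliberately simple parametric family, with linear returns \( {R}^i\left(K,\theta \right)=\theta {a}_iK \), quadratic costs \( {C}^i\left(I,\theta \right)={c}_i{I}^2 \), linear salvage \( {q}^i(K)={g}_iK \) and a zero-mean shock, so that each value function is linear in K and the recursion reduces to updating slopes and intercepts. All parameter values, and the linear-quadratic specification itself, are hypothetical illustrations rather than the chapter's model.

```python
# Backward induction for the feedback Nash equilibrium of Theorem 5.1 in a
# hypothetical linear-return / quadratic-cost family, where every value
# function V^{(sigma_t)i}(t, K) = slope * K + intercept.
T, n, r, delta = 3, 2, 0.05, 0.1
a = [1.0, 0.8]; c = [0.5, 0.6]; g = [2.0, 1.5]
theta = {t: [0.9, 1.1] for t in range(1, T + 1)}   # eta_t = 2 events per stage
lam = {t: [0.5, 0.5] for t in range(1, T + 1)}     # branch probabilities
E_noise = 0.0                                      # vartheta_t assumed zero-mean

# terminal condition: V^{(sigma_{T+1})i}(T+1, K) = g_i * K * (1+r)^{-T}
slope = {T + 1: [[g[i] * (1 + r) ** (-T) for i in range(n)]]}
icpt = {T + 1: [[0.0] * n]}

for t in range(T, 0, -1):
    slope[t], icpt[t] = [], []
    # expected continuation slope/intercept over next-stage events
    if t == T:
        s_next, b_next = slope[T + 1][0], icpt[T + 1][0]
    else:
        s_next = [sum(lam[t + 1][z] * slope[t + 1][z][i] for z in range(2)) for i in range(n)]
        b_next = [sum(lam[t + 1][z] * icpt[t + 1][z][i] for z in range(2)) for i in range(n)]
    disc = (1 + r) ** (-(t - 1))
    # first-order condition: -2 c_i I_i disc + s_next_i = 0 (a dominant strategy here)
    I = [s_next[i] / (2 * c[i] * disc) for i in range(n)]
    for th in theta[t]:
        s_t = [th * a[i] * disc + s_next[i] * (1 - delta) for i in range(n)]
        b_t = [-c[i] * I[i] ** 2 * disc + s_next[i] * (sum(I) + E_noise) + b_next[i]
               for i in range(n)]
        slope[t].append(s_t); icpt[t].append(b_t)

print("stage-1 equilibrium value of agent 1 at K = 10, event 1:",
      slope[1][0][0] * 10 + icpt[1][0][0])
```

In this linear family each agent's first-order condition does not involve the other agents' investments, so the stage best responses are dominant strategies; in general the fixed point of the best-response conditions implicit in (5.13) has to be solved jointly at every stage and event.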

5.2 Optimal Cooperative Scheme

Now consider the case when the agents agree to cooperate and enhance their gains from cooperation. In particular, they act cooperatively to maximize their expected joint payoff and distribute the joint payoff among themselves according to an agreed-upon optimality principle. If any agent deviates from the cooperation scheme, all agents will revert to the noncooperative framework to counteract the free-rider problem in public goods provision. As stated before, group optimality, individual rationality and subgame consistency are three crucial properties that a sustainable cooperative scheme has to satisfy.

5.2.1 Pareto Optimal Provision

To fulfil group optimality the agents would seek to maximize their expected joint payoff. In particular, they have to solve the discrete-time stochastic dynamic programming problem of maximizing

$$ \begin{array}{l}{E}_{\theta_1,{\theta}_2,\cdots, {\theta}_T;{\vartheta}_1,{\vartheta}_2,\cdots, {\vartheta}_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{s=1}^T}\left[{R}^j\left({K}_s,{\theta}_s\right)-{C}^j\left({I}_s^j,{\theta}_s\right)\right]{\left(1+r\right)}^{-\left(s-1\right)}\\ {}+{\displaystyle \sum_{j=1}^n}{q}^j\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\}\end{array} $$
(5.14)

subject to dynamics (5.1).

To solve the dynamic programming problem (5.1) and (5.14), we first consider the problem starting at stage T. If \( {\theta}_T^{\sigma_T}\in \left\{{\theta}_T^1,{\theta}_T^2,\cdots, {\theta}_T^{\eta_T}\right\} \) has occurred at stage T and the state \( {K}_T=K \), the problem becomes:

$$ \begin{array}{l}\underset{I_T^1,{I}_T^2,\cdots, {I}_T^n}{ \max }{E}_{\vartheta_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\left[{R}^j\left(K,{\theta}_T^{\sigma_T}\right)-{C}^j\left({I}_T^j,{\theta}_T^{\sigma_T}\right)\right]{\left(1+r\right)}^{-\left(T-1\right)}\\ {}+{\displaystyle \sum_{j=1}^n}{q}^j\left({K}_{T+1}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(5.15)
$$ \mathrm{subject}\ \mathrm{t}\mathrm{o}\ {K}_{T+1}={K}_T+{\displaystyle \sum_{j=1}^n{I}_T^j-\delta {K}_T}+{\vartheta}_T,{K}_T=K. $$
(5.16)

A characterization of an optimal solution to the stochastic control problem (5.15 and 5.16) is provided in the following lemma.

Lemma 5.3

A set of controls \( {I}_T^{\left({\sigma}_T\right)*}={\psi}_T^{\left({\sigma}_T\right)*}(K)=\left\{{\psi}_T^{\left({\sigma}_T\right)1*}(K),{\psi}_T^{\left({\sigma}_T\right)2*}(K),\cdots, {\psi}_T^{\left({\sigma}_T\right)n*}(K)\right\} \) provides an optimal solution to the stochastic control problem (5.15 and 5.16) if there exist functions \( {W}^{\left({\sigma}_T\right)}\left(T,K\right) \) such that the following conditions are satisfied:

$$ \begin{array}{l}{W}^{\left({\sigma}_T\right)}\left(T,K\right)\\ {}=\underset{I_T^{\left({\sigma}_T\right)1},{I}_T^{\left({\sigma}_T\right)2},\cdots, {I}_T^{\left({\sigma}_T\right)n}}{ \max }{E}_{\vartheta_T}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\left[{R}^j\left(K,{\theta}_T^{\sigma_T}\right)-{C}^j\left({I}_T^j,{\theta}_T^{\sigma_T}\right)\right]{\left(1+r\right)}^{-\left(T-1\right)}\\ {}+{\displaystyle \sum_{j=1}^n}{q}^j\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.K+{\displaystyle \sum_{h=1}^n{I}_T^h-\delta K+{\vartheta}_T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\;{\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\\ {}{W}^{\left({\sigma}_{T+1}\right)}\left(T+1,K\right)={\displaystyle \sum_{j=1}^n}{q}^j(K){\left(1+r\right)}^{-T}.\end{array} $$
(5.17)

Proof

The system of equations in (5.17) satisfies the standard discrete-time stochastic dynamic programming property. See Theorem A.6 in the Technical Appendices for details of the proof of the results. ■

Using Lemma 5.3, one can characterize the functions \( {W}^{\left({\sigma}_T\right)}\left(T,K\right) \) for all \( {\theta}_T^{\sigma_T}\in \left\{{\theta}_T^1,{\theta}_T^2,\cdots, {\theta}_T^{\eta_T}\right\} \), if they exist. In particular, \( {W}^{\left({\sigma}_T\right)}\left(T,K\right) \) yields the expected cooperative payoff starting at stage T given that \( {\theta}_T^{\sigma_T} \) occurs and \( {K}_T=K \).

Following the analysis in Sect. 12.5.1, the control problem starting at stage t when \( {\theta}_t^{\sigma_t}\in \left\{{\theta}_t^1,{\theta}_t^2,\cdots, {\theta}_t^{\eta_t}\right\} \) occurs and \( {K}_t=K \) can be expressed as:

$$ \begin{array}{l}\underset{I_t^{\left({\sigma}_t\right)1},{I}_t^{\left({\sigma}_t\right)2},\cdots, {I}_t^{\left({\sigma}_t\right)n}}{ \max }{E}_{\vartheta_t}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\left[{R}^j\left(K,{\theta}_t^{\sigma_t}\right)-{C}^j\left({I}_t^j,{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}{W}^{\left({\sigma}_{t+1}\right)}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.t+1,K+{\displaystyle \sum_{h=1}^n{I}_t^h-\delta K+{\vartheta}_t}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\kern0.0em \left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(5.18)

where \( {W}^{\left({\sigma}_{t+1}\right)}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.t+1,K+{\displaystyle \sum_{h=1}^n{I}_t^h-\delta K+{\vartheta}_t}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right) \) is the expected optimal cooperative payoff in the control problem starting at stage \( t+1 \) when \( {\theta}_{t+1}^{\sigma_{t+1}}\in \left\{{\theta}_{t+1}^1,{\theta}_{t+1}^2,\cdots, {\theta}_{t+1}^{\eta_{t+1}}\right\} \) occurs.

An optimal solution for the stochastic control problem (5.1) and (5.14) can be characterized by the following theorem.

Theorem 5.2

A set of controls \( {\psi}_t^{\left({\sigma}_t\right)*}(K)=\left\{{\psi}_t^{\left({\sigma}_t\right)1*}(K),{\psi}_t^{\left({\sigma}_t\right)2*}(K),\cdots \cdots, {\psi}_t^{\left({\sigma}_t\right)n*}(K)\right\} \), for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \) provides an optimal solution to the stochastic control problem (5.1) and (5.14) if there exist functions \( {W}^{\left({\sigma}_t\right)}\left(t,K\right) \), for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \), such that the following recursive relations are satisfied:

$$ \begin{array}{l}{W}^{\left({\sigma}_{T+1}\right)}\left(T+1,K\right)={\displaystyle \sum_{j=1}^n}{q}^j(K){\left(1+r\right)}^{-T},\\ {}{W}^{\left({\sigma}_t\right)}\left(t,K\right)=\\ {}\underset{I_t^{\left({\sigma}_t\right)1},{I}_t^{\left({\sigma}_t\right)2},\cdots, {I}_t^{\left({\sigma}_t\right)n}}{ \max }{E}_{\vartheta_t}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{j=1}^n}\left[{R}^j\left(K,{\theta}_t^{\sigma_t}\right)-{C}^j\left({I}_t^j,{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}{W}^{\left({\sigma}_{t+1}\right)}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.t+1,K+{\displaystyle \sum_{h=1}^n{I}_t^h-\delta K+{\vartheta}_t}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\kern0.0em \left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(5.19)

for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \).

Proof

Invoking Lemma 5.3 and the specification of the control problem starting in stage \( t\in \left\{1,2,\cdots, T-1\right\} \) as expressed in (5.18), the results in (5.19) satisfy the optimality conditions in discrete-time stochastic dynamic programming . Therefore, an optimal solution of the stochastic control problem is characterized in Theorem 5.2. ■
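The joint-maximization recursion (5.19) can likewise be sketched numerically. The snippet below uses a hypothetical linear-return / quadratic-cost family, \( {R}^j\left(K,\theta \right)=\theta {a}_jK \), \( {C}^j\left(I,\theta \right)={c}_j{I}^2 \) and \( {q}^j(K)={g}_jK \), with a zero-mean shock, so each \( {W}^{\left({\sigma}_t\right)}\left(t,K\right) \) is linear in K. The point the code illustrates is that the jointly optimal control internalizes every agent's continuation value: the first-order condition uses the sum of the continuation slopes, which is what drives cooperative investment above the noncooperative level. All parameter values are invented for illustration.

```python
# Cooperative backward recursion of Theorem 5.2 in a hypothetical
# linear-return / quadratic-cost family: W^{(sigma_t)}(t, K) = s*K + b.
T, n, r, delta = 3, 2, 0.05, 0.1
a = [1.0, 0.8]; c = [0.5, 0.6]; g = [2.0, 1.5]
theta = [[0.9, 1.1]] * T          # theta[t-1]: events at stage t
lam = [[0.5, 0.5]] * T            # lam[t-1]: probabilities lambda_t

# terminal condition: W^{(sigma_{T+1})}(T+1, K) = sum_j g_j K (1+r)^{-T}
s_next, b_next = sum(g) * (1 + r) ** (-T), 0.0
W = {}
for t in range(T, 0, -1):
    W[t] = []
    disc = (1 + r) ** (-(t - 1))
    # joint first-order condition: -2 c_i I_i disc + s_next = 0,
    # where s_next aggregates ALL agents' continuation values
    I = [s_next / (2 * c[i] * disc) for i in range(n)]
    for th in theta[t - 1]:
        s_t = th * sum(a) * disc + s_next * (1 - delta)
        b_t = sum(-c[i] * I[i] ** 2 for i in range(n)) * disc + s_next * sum(I) + b_next
        W[t].append((s_t, b_t))
    # expected continuation, as seen from stage t-1, weighted by lambda_t
    s_next = sum(lam[t - 1][z] * W[t][z][0] for z in range(2))
    b_next = sum(lam[t - 1][z] * W[t][z][1] for z in range(2))

print("W at stage 1, event 1, K = 10:", W[1][0][0] * 10 + W[1][0][1])
```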

Substituting the optimal controls \( {\psi}_t^{\left({\sigma}_t\right)i*}(K) \), for \( t\in \left\{1,2,\cdots, T\right\} \) and \( i\in N \), into (5.1), one can obtain the dynamics of the cooperative trajectory of public capital accumulation as:

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n{\psi}_t^{\left({\sigma}_t\right)j*}\left({K}_t\right)-\delta {K}_t}+{\vartheta}_t,\;{K}_1=K\ \mathrm{if}\ {\theta}_t^{\sigma_t}\ \mathrm{occurs}\ \mathrm{at}\ \mathrm{stage}\;t, $$
(5.20)

for \( t\in \left\{1,2,\cdots, T\right\} \) and \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \).

We use \( {X}_t^{*} \) to denote the set of realizable values of \( {K}_t \) at stage t generated by (5.20). The term \( {K}_t^{*}\in {X}_t^{*} \) is used to denote an element in \( {X}_t^{*} \).

The term \( {W}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right) \) gives the expected total cooperative payoff over the stages from t to T if \( {\theta}_t^{\sigma_t} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \) is realized at stage t.
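The stochastic trajectory (5.20) and the realizable sets \( {X}_t^{*} \) can be visualized by Monte-Carlo simulation. The investment rule below is a hypothetical stand-in for the optimal cooperative controls \( {\psi}_t^{\left({\sigma}_t\right)j*} \) (which are problem-specific); the sketch only illustrates how the shocks \( {\vartheta}_t \) spread the cooperative state into a set of realizable values at each stage.

```python
import random

# Monte-Carlo simulation of the cooperative capital trajectory (5.20),
# with a hypothetical stationary rule psi_i(K) = max(0, b_i - m_i * K)
# standing in for the optimal cooperative controls.
random.seed(0)
T, n, delta, K1 = 5, 2, 0.1, 10.0
b, m = [3.0, 2.0], [0.05, 0.04]

def psi(i, K):
    """Hypothetical cooperative investment rule for agent i."""
    return max(0.0, b[i] - m[i] * K)

def one_path():
    K, path = K1, [K1]
    for t in range(1, T + 1):
        shock = random.choice([-0.5, 0.5])   # vartheta_t, zero mean
        K = K + sum(psi(i, K) for i in range(n)) - delta * K + shock
        path.append(K)
    return path

paths = [one_path() for _ in range(2000)]
# the spread of simulated K_{T+1} values approximates X_{T+1}^*
K_final = [p[-1] for p in paths]
print("range of simulated K_{T+1}^*:", min(K_final), max(K_final))
```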

5.2.2 Individually Rational Condition

The agents then have to agree to an optimality principle in distributing the total cooperative payoff among themselves. For individual rationality to be upheld the expected payoffs an agent receives under cooperation have to be no less than his expected noncooperative payoff along the cooperative state trajectory \( {\left\{{K}_t^{*}\;\right\}}_{\kern0.5em t=1}^{\kern0.5em T+1} \). Let \( {\xi}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right)=\left[{\xi}^{\left({\sigma}_t\right)1}\left(t,{K}_t^{*}\right),{\xi}^{\left({\sigma}_t\right)2}\left(t,{K}_t^{*}\right),\cdots, {\xi}^{\left({\sigma}_t\right)n}\left(t,{K}_t^{*}\right)\right] \) denote the imputation vector guiding the distribution of the total expected cooperative payoff under the agreed-upon optimality principle along the cooperative trajectory given that \( {\theta}_t^{\sigma_t} \) has occurred in stage t, for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \).

If, for example, the optimality principle specifies that the agents share the expected total cooperative payoff in proportion to their noncooperative payoffs, then the imputation to agent i becomes:

$$ {\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)=\frac{V^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)}{{\displaystyle \sum_{j=1}^n{V}^{\left({\sigma}_t\right)j}\left(t,{K}_t^{*}\right)}}{W}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right), $$
(5.21)

for \( i\in N,\kern0.24em {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \).
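The proportional rule (5.21) can be checked in a few lines. The numbers below are hypothetical stand-ins for \( {V}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right) \) and \( {W}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right) \) at one stage and event; the assertions verify the group optimality condition (5.23) and, since cooperation yields \( W\ge {\displaystyle \sum_j{V}^j} \), the individual rationality condition (5.22).

```python
# Proportional imputation (5.21): agent i receives the share of the
# cooperative payoff W given by his noncooperative payoff's share of sum(V).
# V and W are hypothetical stage values for n = 3 agents.
V = [12.0, 8.0, 5.0]     # noncooperative payoffs V^{(sigma_t)i}(t, K_t^*)
W = 30.0                 # expected total cooperative payoff, W >= sum(V)

xi = [W * V[i] / sum(V) for i in range(len(V))]
assert abs(sum(xi) - W) < 1e-9                 # group optimality (5.23)
assert all(x >= v for x, v in zip(xi, V))      # individual rationality (5.22)
print(xi)
```

Because the rule scales every \( {V}^{\left({\sigma}_t\right)i} \) by the common factor \( W/{\displaystyle \sum_j{V}^{\left({\sigma}_t\right)j}}\ge 1 \), individual rationality holds automatically whenever the cooperative payoff weakly exceeds the sum of noncooperative payoffs.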

For individual rationality to be guaranteed in every stage \( t\in \left\{1,2,\cdots, T\right\} \), it is required that the imputation satisfies:

$$ {\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)\ge {V}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right), $$
(5.22)

for \( i\in N,\;{\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \).

To ensure group optimality , the imputation vector has to satisfy

$$ {W}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right)={\displaystyle \sum_{j=1}^n}{\xi}^{\left({\sigma}_t\right)j}\left(t,{K}_t^{*}\right), $$
(5.23)

for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \).

Hence, a valid imputation scheme \( {\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right) \), for \( i\in N \) and \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \), has to satisfy conditions (5.22) and (5.23).

5.3 Subgame Consistent Payment Mechanism

To guarantee dynamical stability in a stochastic dynamic cooperation scheme, the solution has to satisfy the property of subgame consistency in addition to group optimality and individual rationality . For subgame consistency to be satisfied, the imputation according to the original optimality principle has to be maintained in all the T stages along the cooperative trajectory \( {\left\{{K}_t^{*}\;\right\}}_{t=1}^T \). In other words, the imputation

$$ {\xi}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right)=\left[{\xi}^{\left({\sigma}_t\right)1}\left(t,{K}_t^{*}\right),{\xi}^{\left({\sigma}_t\right)2}\left(t,{K}_t^{*}\right),\cdots, {\xi}^{\left({\sigma}_t\right)n}\left(t,{K}_t^{*}\right)\right] $$
(5.24)

has to be upheld for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \) and \( {K}_t^{*}\in {X}_t^{*} \).

5.3.1 Payoff Distribution Procedure

We first formulate a Payoff Distribution Procedure (PDP) so that the agreed-upon imputation (5.24) can be realized. Let \( {B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right) \) denote the payment that agent i will receive at stage t under the cooperative agreement if \( {\theta}_t^{\sigma_t}\in \left\{{\theta}_t^1,{\theta}_t^2,\cdots, {\theta}_t^{\eta_t}\right\} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \) is realized at stage \( t\in \left\{1,2,\cdots, T\right\} \). The payment scheme \( {B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right) \), for \( i\in N \), contingent upon the event \( {\theta}_t^{\sigma_t} \) and the state \( {K}_t^{*} \), for \( t\in \left\{1,2,\cdots, T\right\} \), constitutes a PDP in the sense that the imputation to agent i over the stages 1 to T can be expressed as:

$$ \begin{array}{l}{\xi}^{\left({\sigma}_1\right)i}\left(1,{K}^0\right)={B}_1^{\left({\sigma}_1\right)i}\left({K}^0\right)\\ {}+{E}_{\theta_2,\cdots, {\theta}_T;{\vartheta}_1,{\vartheta}_2,\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =2}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}+{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right),\end{array} $$
(5.25)

for \( i\in N \).

Moreover, according to the agreed-upon optimality principle in (5.24), if \( {\theta}_t^{\sigma_t} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \) is realized at stage t the imputation to agent i is \( {\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right) \). Therefore the payment scheme \( {B}_t^{\left({\sigma}_t\right)}\left({K}_t^{*}\right) \) has to satisfy the conditions

$$ \begin{array}{l}{\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)={B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right){\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{E}_{\theta_{t+1},{\theta}_{t+2},\cdots, {\theta}_T;{\vartheta}_t,{\vartheta}_{t+1},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+1}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}+{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\end{array} $$
(5.26)

for \( i\in N \) and all \( t\in \left\{1,2,\cdots, T\right\} \).

For notational convenience the term \( {\xi}^{\left({\sigma}_{T+1}\right)i}\left(T+1,{K}_{T+1}^{*}\right) \) is used to denote \( {q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T} \). Crucial to the formulation of a subgame consistent solution is the derivation of a payment scheme \( {B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right) \), for \( i\in N,\;{\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\},\;{K}_t^{*}\in {X}_t^{*} \) and \( t\in \left\{1,2,\cdots, T\right\} \), so that the imputation in (5.26) can be realized.

A theorem for the derivation of a subgame consistent payment scheme can be established as follows.

Theorem 5.3

A payment equaling

$$ \begin{array}{l}{B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right)={\left(1+r\right)}^{\left(t-1\right)}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)\\ {}-{E}_{\vartheta_t}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_{t+1}\right)i}\left[t+1,{K}_t^{*}+{\displaystyle \sum_{h=1}^n{\psi}_t^{\left({\sigma}_t\right)h*}\left({K}_t^{*}\right)-\delta {K}_t^{*}+{\vartheta}_t}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\kern0.0em \left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\},\end{array} $$
(5.27)

given to agent \( i\in N \) at stage \( t\in \left\{1,2,\cdots, T\right\} \), if \( {\theta}_t^{\sigma_t} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \), leads to the realization of the imputation in (5.26).

Proof

To construct the proof of Theorem 5.3, we first express the term

$$ \begin{array}{ll}\hfill & {E}_{\theta_{t+1},{\theta}_{t+2},\cdots, {\theta}_T;{\vartheta}_t,{\vartheta}_{t+1},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+1}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}\\ {}& +{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)={E}_{\vartheta_t}\left\{\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{B}_{t+1}^{\left({\sigma}_{t+1}\right)i}\left({K}_{t+1}^{*}\right){\left(1+r\right)}^{-t}\hfill \\ {}& +{E}_{\theta_{t+2},{\theta}_{t+3},\cdots, {\theta}_T;{\vartheta}_{t+1},{\vartheta}_{t+2},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+2}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}\hfill \\ {}& +{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right\}\hfill \end{array} $$
(5.28)

Then, using (5.26) we can express the term \( {\xi}^{\left({\sigma}_{t+1}\right)i}\left(t+1,{K}_{t+1}^{*}\right) \) as

$$ \begin{array}{l}{\xi}^{\left({\sigma}_{t+1}\right)i}\left(t+1,{K}_{t+1}^{*}\right)={B}_{t+1}^{\left({\sigma}_{t+1}\right)i}\left({K}_{t+1}^{*}\right){\left(1+r\right)}^{-t}\\ {}+{E}_{\theta_{t+2},{\theta}_{t+3},\cdots, {\theta}_T;{\vartheta}_{t+1},{\vartheta}_{t+2},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+2}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}+{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right).\end{array} $$
(5.29)

The expression on the right-hand side of equation (5.29) is the same as the expression inside the square brackets of (5.28). Invoking equation (5.29), we can replace the expression inside the square brackets of (5.28) by \( {\xi}^{\left({\sigma}_{t+1}\right)i}\left(t+1,{K}_{t+1}^{*}\right) \) and obtain:

\( \begin{array}{l}{E}_{\theta_{t+1},{\theta}_{t+2},\cdots, {\theta}_T;{\vartheta}_t,{\vartheta}_{t+1},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+1}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}+{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\\ {}={E}_{\vartheta_t}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_{t+1}\right)i}\left(t+1,{K}_{t+1}^{*}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\;\\ {}={E}_{\vartheta_t}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_{t+1}\right)i}\left[t+1,{K}_t^{*}+{\displaystyle \sum_{h=1}^n{\psi}_t^{\left({\sigma}_t\right)h*}\left({K}_t^{*}\right)-\delta {K}_t^{*}+{\vartheta}_t}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\;\end{array} \).

Substituting the term

$$ {E}_{\theta_{t+1},{\theta}_{t+2},\cdots, {\theta}_T;{\vartheta}_t,{\vartheta}_{t+1},\cdots, {\vartheta}_T}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\zeta =t+1}^T}{B}_{\zeta}^{\left({\sigma}_{\zeta}\right)i}\left({K}_{\zeta}^{*}\right){\left(1+r\right)}^{-\left(\zeta -1\right)}+{q}^i\left({K}_{T+1}^{*}\right){\left(1+r\right)}^{-T}\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right) $$

by \( {E}_{\vartheta_t}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_{t+1}\right)i}\left[t+1,{K}_t^{*}+{\displaystyle \sum_{h=1}^n{\psi}_t^{\left({\sigma}_t\right)h*}\left({K}_t^{*}\right)-\delta {K}_t^{*}+{\vartheta}_t}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right] \) in (5.26) we can express (5.26) as:

$$ \begin{array}{l}{\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)={B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right){\left(1+r\right)}^{-\left(t-1\right)}\\ {}+{E}_{\vartheta_t}\left[\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\displaystyle \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}{\lambda}_{t+1}^{\sigma_{t+1}}}\left(\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right.{\xi}^{\left({\sigma}_{t+1}\right)i}\left[t+1,{K}_t^{*}+{\displaystyle \sum_{h=1}^n{\psi}_t^{\left({\sigma}_t\right)h*}\left({K}_t^{*}\right)-\delta {K}_t^{*}+{\vartheta}_t}\right]\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right)\left.\begin{array}{c}\hfill \hfill \\ {}\hfill \hfill \end{array}\right]\;.\end{array} $$
(5.30)

For condition (5.30), an alternative form of (5.26), to hold, it is required that:

$$ \begin{aligned} B_t^{(\sigma_t)i}\big(K_t^{*}\big) &= {(1+r)}^{(t-1)}\Bigg\{ \xi^{(\sigma_t)i}\big(t,K_t^{*}\big)\\ &\quad - E_{\vartheta_t}\Bigg[ \sum_{\sigma_{t+1}=1}^{\eta_{t+1}}\lambda_{t+1}^{\sigma_{t+1}}\, \xi^{(\sigma_{t+1})i}\Big[t+1,\,K_t^{*}+\sum_{h=1}^n \psi_t^{(\sigma_t)h*}\big(K_t^{*}\big)-\delta K_t^{*}+\vartheta_t\Big] \Bigg] \Bigg\}, \end{aligned} $$
(5.31)

for \( i\in N \) and \( t\in \left\{1,2,\cdots, T\right\} \).

Therefore, paying \( {B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right) \) to agent \( i\in N \) at stage \( t\in \left\{1,2,\cdots, T\right\} \) when \( {\theta}_t^{\sigma_t} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \) is realized leads to the realization of the imputation in (5.26). Hence Theorem 5.3 follows. ■

For a given imputation vector

$$ {\xi}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right)=\left[{\xi}^{\left({\sigma}_t\right)1}\left(t,{K}_t^{*}\right),{\xi}^{\left({\sigma}_t\right)2}\left(t,{K}_t^{*}\right),\cdots, {\xi}^{\left({\sigma}_t\right)n}\left(t,{K}_t^{*}\right)\right], $$

for \( {\sigma}_t\in \left\{1,2,\cdots, {\eta}_t\right\} \) and \( t\in \left\{1,2,\cdots, T\right\} \), Theorem 5.3 can be used to derive the PDP that leads to the realization of this vector.
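Computationally, Theorem 5.3 turns the PDP into a one-line update at each stage: the stage-\( t \) payment is the current imputation minus the expected next-stage imputation, scaled back from stage-1 present value. A minimal Python sketch, with a function name and numbers of our own choosing (not from the text):

```python
def pdp_payment(t, xi_now, expected_xi_next, r):
    """Sketch of the PDP formula (5.31): agent i's stage-t payment equals the
    current imputation minus the expected next-stage imputation, converted
    from stage-1 present value to a stage-t value by the factor (1+r)^(t-1)."""
    return (1.0 + r) ** (t - 1) * (xi_now - expected_xi_next)

# Hypothetical numbers: imputation worth 10 now, 8 expected next stage, r = 5%.
payment = pdp_payment(3, 10.0, 8.0, 0.05)  # (1.05)^2 * (10 - 8)
```

Note that when the current imputation equals the expected continuation imputation, the stage payment is zero: all of agent i's claim is carried forward.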

5.3.2 Transfer Payments

When all agents are using the cooperative strategies, given that \( {K}_t^{*}\in {X}_t^{*} \) and \( {\theta}_t^{\sigma_t} \) occurs, the payoff that agent i will directly receive at stage t becomes

$$ \left[{R}^i\left({K}_t^{*},{\theta}_t^{\sigma_t}\right)-{C}^i\left({\psi}_t^{\left({\sigma}_t\right)i*}\left({K}_t^{*}\right),{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)} $$
(5.32)

However, according to the agreed-upon imputation, agent i is supposed to receive \( {B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right) \) at stage t as given in Theorem 5.3. Therefore a transfer payment (which can be positive or negative)

$$ {\varpi}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right)={B}_t^{\left({\sigma}_t\right)i}\left({K}_t^{*}\right)-\left[{R}^i\left({K}_t^{*},{\theta}_t^{\sigma_t}\right)-{C}^i\left({\psi}_t^{\left({\sigma}_t\right)i*}\left({K}_t^{*}\right),{\theta}_t^{\sigma_t}\right)\right]{\left(1+r\right)}^{-\left(t-1\right)}, $$
(5.33)

for \( t\in \left\{1,2,\cdots, T\right\} \) and \( i\in N \),

will be assigned to agent i to yield the cooperative imputation \( {\xi}^{\left({\sigma}_t\right)}\left(t,{K}_t^{*}\right) \).

6 An Illustration

In this section, we provide an illustration of the derivation of a subgame consistent solution of public goods provision under accumulation and payoff uncertainties in a multiple asymmetric agents situation. The basic game structure is a discrete-time analog of an example in Yeung and Petrosyan (2013b) but with the crucial addition of uncertain future payoff structures to reflect probable changes in preferences, technologies, demographic structures and institutional arrangements.

6.1 Public Capital Build-up Amid Uncertainties

We consider an economic region with n asymmetric agents in which the agents receive benefits from an existing public capital stock \( K_t \) at each stage \( t\in \left\{1,2,3\right\} \). The accumulation dynamics of the public capital stock is governed by the stochastic difference equation:

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n{I}_t^j-\delta {K}_t}+{\vartheta}_t, \kern0.5em {K}_1={K}^0,\kern0.5em \mathrm{for}\;t\in \left\{1,2,3\right\}, $$
(6.1)

where \( {\vartheta}_t \) is a discrete random variable with non-negative range \( \left\{{\vartheta}_t^1,{\vartheta}_t^2,{\vartheta}_t^3\right\} \) and corresponding probabilities \( \left\{{\gamma}_t^1,{\gamma}_t^2,{\gamma}_t^3\right\} \), with \( {\displaystyle \sum_{j=1}^3{\gamma}_t^j}{\vartheta}_t^j={\varpi}_t>0 \).
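The accumulation dynamics (6.1) can be simulated directly by drawing the shock \( \vartheta_t \) from its three-point distribution at each stage. A minimal Python sketch; the investment rule, shock values and probabilities below are our own illustrative assumptions, not from the text:

```python
import random

def simulate_capital(K0, invest, delta, shocks, probs, T=3, seed=0):
    """Simulate K_{t+1} = K_t + sum_j I_t^j - delta*K_t + theta_t  (eq. 6.1).

    invest: function (t, K) -> list of the agents' investments I_t^j at stage t.
    shocks[t-1], probs[t-1]: the range and probabilities of the stage-t shock.
    """
    rng = random.Random(seed)
    K = K0
    path = [K]
    for t in range(1, T + 1):
        shock = rng.choices(shocks[t - 1], weights=probs[t - 1])[0]
        K = K + sum(invest(t, K)) - delta * K + shock
        path.append(K)
    return path

# Illustrative parameters: 2 agents each investing a flat 1.0 per stage.
path = simulate_capital(
    K0=10.0,
    invest=lambda t, K: [1.0, 1.0],
    delta=0.1,
    shocks=[[0.0, 0.5, 1.0]] * 3,
    probs=[[0.5, 0.3, 0.2]] * 3,
)
```

Each element of `path` is the realized capital stock at the start of a stage; rerunning with different seeds traces out the distribution of capital trajectories.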

At stage 1, it is known that \( {\theta}_1^{\sigma_1}={\theta}_1^1 \) has occurred with probability \( {\lambda}_1^1=1 \), and the payoff of agent i is

$$ {\alpha}_1^{\left({\sigma}_1\right)i}{K}_1-{c}_1^{\left({\sigma}_1\right)i}{\left({I}_1^i\right)}^2; $$

At stage \( t\in \left\{2,3\right\} \), the payoff of agent i is

$$ {\alpha}_t^{\left({\sigma}_t\right)i}{K}_t-{c}_t^{\left({\sigma}_t\right)i}{\left({I}_t^i\right)}^2, $$

if \( {\theta}_t^{\sigma_t}\in \left\{{\theta}_t^1,{\theta}_t^2,{\theta}_t^3,{\theta}_t^4\right\} \) occurs.

In particular, \( {\alpha}_t^{\left({\sigma}_t\right)i}{K}_t \) gives the gain that agent i derives from the public capital at stage \( t\in \left\{1,2,3\right\} \), and \( {c}_t^{\left({\sigma}_t\right)i}{\left({I}_t^i\right)}^2 \) is the cost of investing \( {I}_t^i \) in the public capital.

The probability that \( {\theta}_t^{\sigma_t}\in \left\{{\theta}_t^1,{\theta}_t^2,{\theta}_t^3,{\theta}_t^4\right\} \) will occur at stage \( t\in \left\{2,3\right\} \) is \( {\lambda}_t^{\sigma_t}\in \left\{{\lambda}_t^1,{\lambda}_t^2,{\lambda}_t^3,{\lambda}_t^4\right\} \). At stage 4, a terminal payment contingent upon the size of the capital stock, equaling \( \left({q}^i{K}_4+{m}^i\right){\left(1+r\right)}^{-3} \), will be paid to agent i. Since there is no uncertainty in stage 4, we use \( {\theta}_4^1 \) to denote the condition in stage 4, with probability \( {\lambda}_4^1=1 \).

The objective of agent \( i\in N \) is to maximize the expected payoff:

$$ \begin{aligned} &E_{\theta_1,{\theta}_2,{\theta}_3;{\vartheta}_1,{\vartheta}_2,{\vartheta}_3}\Bigg\{ \sum_{\tau =1}^3 \Big[\alpha_{\tau}^{(\sigma_{\tau})i}K_{\tau }-c_{\tau}^{(\sigma_{\tau})i}\big(I_{\tau}^i\big)^2\Big]{(1+r)}^{-(\tau -1)}\\ &\quad +\big(q^iK_4+m^i\big){(1+r)}^{-3} \Bigg\}, \end{aligned} $$
(6.2)

subject to the public capital accumulation dynamics (6.1).

The noncooperative outcome will be examined in the next subsection.

6.2 Noncooperative Outcome

Invoking Theorem 5.1, one can characterize the noncooperative Nash equilibrium strategies for the game (6.1 and 6.2) as follows. In particular, a set of strategies \( \Big\{{I}_t^{\left({\sigma}_t\right)i*}={\phi}_t^{\left({\sigma}_t\right)i*}(K) \), for \( {\sigma}_1\in \left\{1\right\},\;{\sigma}_2,{\sigma}_3\in \left\{1,2,3,4\right\},\;t\in \left\{1,2,3\right\} \) and \( i\in N\Big\} \) provides a Nash equilibrium solution to the game (6.1 and 6.2) if there exist functions \( {V}^{\left({\sigma}_t\right)i}\left(t,K\right) \), for \( i\in N \) and \( t\in \left\{1,2,3\right\} \), such that the following recursive relations are satisfied:

$$ \begin{aligned} V^{(\sigma_t)i}(t,K) &= \max_{I_t^i}\, E_{\vartheta_t}\Bigg\{ \Big[\alpha_t^{(\sigma_t)i}K-c_t^{(\sigma_t)i}\big(I_t^i\big)^2\Big]{(1+r)}^{-(t-1)}\\ &\quad +\sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, V^{(\sigma_{t+1})i}\Big[t+1,\,K+\sum_{j=1,\,j\ne i}^n \phi_t^{(\sigma_t)j*}(K)+I_t^i-\delta K+\vartheta_t\Big] \Bigg\}\\ &= \max_{I_t^i}\Bigg\{ \Big[\alpha_t^{(\sigma_t)i}K-c_t^{(\sigma_t)i}\big(I_t^i\big)^2\Big]{(1+r)}^{-(t-1)}\\ &\quad +\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, V^{(\sigma_{t+1})i}\Big[t+1,\,K+\sum_{j=1,\,j\ne i}^n \phi_t^{(\sigma_t)j*}(K)+I_t^i-\delta K+\vartheta_t^y\Big] \Bigg\},\\ &\qquad \text{for } t\in \{1,2,3\}; \end{aligned} $$
(6.3)
$$ {V}^{\left({\sigma}_4\right)i}\left(4,K\right)=\left({q}^iK+{m}^i\right){\left(1+r\right)}^{-3}. $$
(6.4)

Performing the indicated maximization in (6.3) yields:

$$ \begin{aligned} I_t^i &= \phi_t^{(\sigma_t)i*}(K)\\ &= \frac{{(1+r)}^{t-1}}{2c_t^{(\sigma_t)i}} \sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, V_{K_{t+1}}^{(\sigma_{t+1})i}\Big[t+1,\,K+\sum_{j=1}^n \phi_t^{(\sigma_t)j*}(K)-\delta K+\vartheta_t^y\Big], \end{aligned} $$
(6.5)

for \( i\in N,\;t\in \left\{1,2,3\right\},\;{\sigma}_1=1 \), and \( {\sigma}_{\tau}\in \left\{1,2,3,4\right\} \) for \( \tau \in \left\{2,3\right\} \).

The game equilibrium payoffs of the agents can be obtained as:

Proposition 6.1

The value function which represents the expected payoff of agent i is:

$$ {V}^{\left({\sigma}_t\right)i}\left(t,K\right)=\left[{A}_t^{\left({\sigma}_t\right)i}K+{C}_t^{\left({\sigma}_t\right)i}\right]{\left(1+r\right)}^{-\left(t-1\right)}, $$
(6.6)

for \( i\in N,t\in \left\{1,2,3\right\},{\sigma}_1=1, \) and \( {\sigma}_{\tau}\in \left\{1,2,3,4\right\} \) for \( \tau \in \left\{2,3\right\} \);

where

\( {A}_3^{\left({\sigma}_3\right)i}={\alpha}_3^{\left({\sigma}_3\right)i}+{q}^i\left(1-\delta \right){\left(1+r\right)}^{-1} \), and

$$ \begin{aligned} C_3^{(\sigma_3)i} &= -\frac{\big(q^i\big)^2{(1+r)}^{-2}}{4c_3^{(\sigma_3)i}}+\Bigg[ q^i\sum_{j=1}^n \frac{q^j{(1+r)}^{-1}}{2c_3^{(\sigma_3)j}}+q^i\varpi_3+m^i \Bigg]{(1+r)}^{-1};\\ A_2^{(\sigma_2)i} &= \alpha_2^{(\sigma_2)i}+\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}A_3^{(\sigma_3)i}(1-\delta){(1+r)}^{-1}, \quad\text{and}\\ C_2^{(\sigma_2)i} &= -\frac{1}{4c_2^{(\sigma_2)i}}\Bigg(\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}A_3^{(\sigma_3)i}{(1+r)}^{-1}\Bigg)^{\!2}\\ &\quad +\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}\Bigg[ A_3^{(\sigma_3)i}\Bigg(\sum_{j=1}^n \sum_{\rho_3=1}^4 \lambda_3^{\rho_3}\frac{A_3^{(\rho_3)j}{(1+r)}^{-1}}{2c_2^{(\sigma_2)j}}+\varpi_2\Bigg)+C_3^{(\sigma_3)i} \Bigg]{(1+r)}^{-1};\\ A_1^{(\sigma_1)i} &= \alpha_1^{(\sigma_1)i}+\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}A_2^{(\sigma_2)i}(1-\delta){(1+r)}^{-1}, \quad\text{and}\\ C_1^{(\sigma_1)i} &= -\frac{1}{4c_1^{(\sigma_1)i}}\Bigg(\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}A_2^{(\sigma_2)i}{(1+r)}^{-1}\Bigg)^{\!2}\\ &\quad +\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}\Bigg[ A_2^{(\sigma_2)i}\Bigg(\sum_{j=1}^n \sum_{\rho_2=1}^4 \lambda_2^{\rho_2}\frac{A_2^{(\rho_2)j}{(1+r)}^{-1}}{2c_1^{(\sigma_1)j}}+\varpi_1\Bigg)+C_2^{(\sigma_2)i} \Bigg]{(1+r)}^{-1}; \end{aligned} $$

for \( i\in N \).

Proof

See Appendix D. ■

Substituting the relevant derivatives of the value functions \( {V}^{\left({\sigma}_t\right)i}\left(t,K\right) \) in Proposition 6.1 into the game equilibrium strategies (6.5) yields a noncooperative Nash equilibrium solution of the game (6.1 and 6.2).
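The coefficients \( A_t^{(\sigma_t)i} \) in Proposition 6.1 follow a simple backward recursion: the stage-3 coefficient is the current marginal benefit plus the discounted, depreciated terminal valuation, and earlier stages replace the terminal valuation by the expected continuation coefficient. A numerical sketch in Python, using our own illustrative values with two payoff regimes per stage instead of the four used in the text:

```python
def nash_A_coeffs(alpha, lam, q_i, delta, r):
    """Backward recursion for A_t^{(sigma_t)i} in Proposition 6.1.

    alpha[t-1][s]: alpha_t^{(sigma_t)i} for stage t = 1..3, regime s.
    lam[t-1][s]:   lambda_t^{sigma_t}, the regime probabilities at stage t.
    """
    disc = (1.0 - delta) / (1.0 + r)
    # Stage 3: A_3 = alpha_3 + q^i (1 - delta)(1 + r)^{-1}
    A3 = [a + q_i * disc for a in alpha[2]]
    EA3 = sum(l * a for l, a in zip(lam[2], A3))  # expected continuation coefficient
    # Stage 2: A_2 = alpha_2 + E[A_3] (1 - delta)(1 + r)^{-1}
    A2 = [a + EA3 * disc for a in alpha[1]]
    EA2 = sum(l * a for l, a in zip(lam[1], A2))
    # Stage 1 (single known regime): A_1 = alpha_1 + E[A_2] (1 - delta)(1 + r)^{-1}
    A1 = [alpha[0][0] + EA2 * disc]
    return A1, A2, A3

# Illustrative values: q^i = 1, delta = 0.1, r = 0.05, two equiprobable regimes.
A1, A2, A3 = nash_A_coeffs(
    alpha=[[1.0], [1.0, 2.0], [1.0, 2.0]],
    lam=[[1.0], [0.5, 0.5], [0.5, 0.5]],
    q_i=1.0, delta=0.1, r=0.05,
)
```

The intercept coefficients \( C_t^{(\sigma_t)i} \) can be recovered by the same backward pass once the \( A \)'s are known.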

6.3 Cooperative Provision of Public Capital

Now we consider the case when the agents agree to cooperate and seek to enhance their gains. They agree to maximize their expected joint gain and distribute the cooperative gain proportional to their expected non-cooperative gains. The agents would first maximize their expected joint payoff

$$ \begin{aligned} &E_{\theta_1,{\theta}_2,{\theta}_3;{\vartheta}_1,{\vartheta}_2,{\vartheta}_3}\Bigg\{ \sum_{j=1}^n \sum_{\tau =1}^3 \Big[\alpha_{\tau}^{(\sigma_{\tau})j}K_{\tau }-c_{\tau}^{(\sigma_{\tau})j}\big(I_{\tau}^j\big)^2\Big]{(1+r)}^{-(\tau -1)}\\ &\quad +\sum_{j=1}^n \big(q^jK_4+m^j\big){(1+r)}^{-3} \Bigg\}, \end{aligned} $$
(6.7)

subject to the stochastic dynamics (6.1).

Invoking Theorem 5.2, one can characterize the solution of the stochastic dynamic programming problem (6.1) and (6.7) as follows. In particular, a set of control strategies \( \Big\{{u}_t^{\left({\sigma}_t\right)i*}={\psi}_t^{\left({\sigma}_t\right)i*}(K) \), for \( t\in \left\{1,2,3\right\} \) and \( i\in N,{\sigma}_1=1,\;{\sigma}_{\tau}\in \left\{1,2,3,4\right\} \) for \( \tau \in \left\{2,3\right\}\Big\} \), provides an optimal solution to the problem (6.1) and (6.7) if there exist functions \( {W}^{\left({\sigma}_t\right)}\left(t,K\right) \), for \( t\in \left\{1,2,3\right\} \), such that the following recursive relations are satisfied:

$$ \begin{aligned} W^{(\sigma_t)}(t,K) &= \max_{I_t^1,{I}_t^2,\cdots, {I}_t^n}\, E_{\vartheta_t}\Bigg\{ \sum_{j=1}^n \Big[\alpha_t^{(\sigma_t)j}K-c_t^{(\sigma_t)j}\big(I_t^j\big)^2\Big]{(1+r)}^{-(t-1)}\\ &\quad +\sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, W^{(\sigma_{t+1})}\Big[t+1,\,K+\sum_{j=1}^n I_t^j-\delta K+\vartheta_t\Big] \Bigg\}\\ &= \max_{I_t^1,{I}_t^2,\cdots, {I}_t^n}\Bigg\{ \sum_{j=1}^n \Big[\alpha_t^{(\sigma_t)j}K-c_t^{(\sigma_t)j}\big(I_t^j\big)^2\Big]{(1+r)}^{-(t-1)}\\ &\quad +\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, W^{(\sigma_{t+1})}\Big[t+1,\,K+\sum_{j=1}^n I_t^j-\delta K+\vartheta_t^y\Big] \Bigg\},\\ &\qquad \text{for } t\in \{1,2,3\}; \end{aligned} $$
(6.8)
$$ {W}^{\left({\sigma}_4\right)}\left(4,K\right)={\displaystyle \sum_{j=1}^n}\left({q}^jK+{m}^j\right){\left(1+r\right)}^{-3}. $$
(6.9)

Performing the indicated maximization in (6.8) yields:

$$ \begin{aligned} I_t^i &= \psi_t^{(\sigma_t)i*}(K)\\ &= \frac{{(1+r)}^{t-1}}{2c_t^{(\sigma_t)i}} \sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^4 \lambda_{t+1}^{\sigma_{t+1}}\, W_{K_{t+1}}^{(\sigma_{t+1})}\Big[t+1,\,K+\sum_{j=1}^n \psi_t^{(\sigma_t)j*}(K)-\delta K+\vartheta_t^y\Big], \end{aligned} $$
(6.10)

for \( i\in N,\;t\in \left\{1,2,3\right\},\;{\sigma}_1=1 \), and \( {\sigma}_{\tau}\in \left\{1,2,3,4\right\} \) for \( \tau \in \left\{2,3\right\} \).

The expected joint payoff under cooperation can be obtained as:

Proposition 6.2

The value function which represents the expected joint payoff is

$$ {W}^{\left({\sigma}_t\right)}\left(t,K\right)=\left[{A}_t^{\left({\sigma}_t\right)}K+{C}_t^{\left({\sigma}_t\right)}\right]{\left(1+r\right)}^{-\left(t-1\right)}, $$
(6.11)

for \( t\in \left\{1,2,3\right\},\;{\sigma}_1=1 \), and \( {\sigma}_{\tau}\in \left\{1,2,3,4\right\} \) for \( \tau \in \left\{2,3\right\} \);

where

$$ \begin{aligned} A_3^{(\sigma_3)} &= \sum_{j=1}^n \alpha_3^{(\sigma_3)j}+\sum_{j=1}^n q^j(1-\delta){(1+r)}^{-1}, \quad\text{and}\\ C_3^{(\sigma_3)} &= -\sum_{j=1}^n \frac{\big({\textstyle\sum_{h=1}^n} q^h{(1+r)}^{-1}\big)^2}{4c_3^{(\sigma_3)j}}+\sum_{j=1}^n \Bigg[ q^j\Bigg(\sum_{\ell =1}^n \frac{{\textstyle\sum_{h=1}^n} q^h{(1+r)}^{-1}}{2c_3^{(\sigma_3)\ell }}+\varpi_3\Bigg)+m^j \Bigg]{(1+r)}^{-1};\\ A_2^{(\sigma_2)} &= \sum_{j=1}^n \alpha_2^{(\sigma_2)j}+\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}A_3^{(\sigma_3)}(1-\delta){(1+r)}^{-1}, \quad\text{and}\\ C_2^{(\sigma_2)} &= -\sum_{j=1}^n \frac{1}{4c_2^{(\sigma_2)j}}\Bigg(\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}A_3^{(\sigma_3)}{(1+r)}^{-1}\Bigg)^{\!2}\\ &\quad +\sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}\Bigg[ A_3^{(\sigma_3)}\Bigg(\sum_{j=1}^n \sum_{\rho_3=1}^4 \lambda_3^{\rho_3}\frac{A_3^{(\rho_3)}{(1+r)}^{-1}}{2c_2^{(\sigma_2)j}}+\varpi_2\Bigg)+C_3^{(\sigma_3)} \Bigg]{(1+r)}^{-1};\\ A_1^{(\sigma_1)} &= \sum_{j=1}^n \alpha_1^{(\sigma_1)j}+\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}A_2^{(\sigma_2)}(1-\delta){(1+r)}^{-1}, \quad\text{and}\\ C_1^{(\sigma_1)} &= -\sum_{j=1}^n \frac{1}{4c_1^{(\sigma_1)j}}\Bigg(\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}A_2^{(\sigma_2)}{(1+r)}^{-1}\Bigg)^{\!2}\\ &\quad +\sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}\Bigg[ A_2^{(\sigma_2)}\Bigg(\sum_{j=1}^n \sum_{\rho_2=1}^4 \lambda_2^{\rho_2}\frac{A_2^{(\rho_2)}{(1+r)}^{-1}}{2c_1^{(\sigma_1)j}}+\varpi_1\Bigg)+C_2^{(\sigma_2)} \Bigg]{(1+r)}^{-1}. \end{aligned} $$

Proof

Follow the proof of Proposition 6.1. ■

Using (6.10) and Proposition 6.2, the optimal cooperative strategies of the agents can be obtained as:

$$ \begin{aligned} \psi_3^{(\sigma_3)i*}(K) &= \frac{{\textstyle\sum_{h=1}^n} q^h{(1+r)}^{-1}}{2c_3^{(\sigma_3)i}},\\ \psi_2^{(\sigma_2)i*}(K) &= \sum_{\sigma_3=1}^4 \lambda_3^{\sigma_3}\frac{A_3^{(\sigma_3)}{(1+r)}^{-1}}{2c_2^{(\sigma_2)i}},\\ \psi_1^{(\sigma_1)i*}(K) &= \sum_{\sigma_2=1}^4 \lambda_2^{\sigma_2}\frac{A_2^{(\sigma_2)}{(1+r)}^{-1}}{2c_1^{(\sigma_1)i}}, \quad \text{for } i\in N. \end{aligned} $$
(6.12)

Substituting \( {\psi}_t^{\left({\sigma}_t\right)i*}(K) \) from (6.12) into (6.1) yields the optimal cooperative accumulation dynamics:

$$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\sigma_{t+1}=1}^4{\lambda}_{t+1}^{\sigma_{t+1}}}\frac{A_{t+1}^{\left({\sigma}_{t+1}\right)}{\left(1+r\right)}^{-1}}{2{c}_t^{\left({\sigma}_t\right)j}}-\delta {K}_t+{\vartheta}_t,\kern0.5em {K}_1={K}^0, $$
(6.13)

if \( {\theta}_t^{\sigma_t} \) occurs at stage t, for \( t\in \left\{1,2,3\right\} \).
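The cooperative investment rules (6.12) are simple closed forms: at stage 3 each agent invests the discounted sum of all terminal valuations scaled by its own cost coefficient, and at earlier stages the sum of terminal valuations is replaced by the expected joint continuation coefficient. A Python sketch with assumed parameter values of our own (two agents; the \( q \), cost and interest figures are illustrative):

```python
def coop_invest_stage3(q, c3, r):
    """psi_3^{(sigma_3)i*} from (6.12): each agent i invests the discounted sum
    of all terminal valuations q^h, scaled by its own cost coefficient c_3^i."""
    total = sum(q) / (1.0 + r)
    return [total / (2.0 * ci) for ci in c3]

def coop_invest(expected_A_next, c_t, r):
    """psi_t^{(sigma_t)i*} for t = 1, 2: same form, with sum(q) replaced by the
    expected joint continuation coefficient E[A_{t+1}^{(sigma_{t+1})}]."""
    return [expected_A_next / (1.0 + r) / (2.0 * ci) for ci in c_t]

# Two agents, q = (1, 2), stage-3 cost coefficients (0.5, 1.0), r = 0.05.
I3 = coop_invest_stage3([1.0, 2.0], [0.5, 1.0], 0.05)
```

Note that the rules are independent of the current capital stock \( K \): with linear benefits and quadratic costs, the marginal value of capital does not depend on its level.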

6.4 Subgame Consistent Cooperative Solution

Given that the agents agree to share the cooperative gain proportional to their expected non-cooperative payoffs, an imputation

$$ \begin{aligned} \xi^{(\sigma_t)i}\big(t,K_t^{*}\big) &= \frac{V^{(\sigma_t)i}\big(t,K_t^{*}\big)}{{\displaystyle \sum_{j=1}^n} V^{(\sigma_t)j}\big(t,K_t^{*}\big)}\, W^{(\sigma_t)}\big(t,K_t^{*}\big)\\ &= \frac{\big[A_t^{(\sigma_t)i}K_t^{*}+C_t^{(\sigma_t)i}\big]}{{\displaystyle \sum_{j=1}^n}\big[A_t^{(\sigma_t)j}K_t^{*}+C_t^{(\sigma_t)j}\big]}\,\big[A_t^{(\sigma_t)}K_t^{*}+C_t^{(\sigma_t)}\big]{(1+r)}^{-(t-1)},\quad \text{for } i\in N, \end{aligned} $$
(6.14)

if \( {\theta}_t^{\sigma_t} \) occurs at stage \( t\in \left\{1,2,3\right\} \), has to be maintained.
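The proportional sharing rule (6.14) simply scales the joint payoff by each agent's share of the total noncooperative payoff. A minimal sketch, with hypothetical payoff values:

```python
def imputation_shares(V, W_joint):
    """Split the joint cooperative payoff W in proportion to the agents'
    noncooperative payoffs V^i, as in (6.14)."""
    total = sum(V)
    return [v / total * W_joint for v in V]

# Hypothetical noncooperative payoffs (2, 3, 5) and joint payoff 20.
shares = imputation_shares([2.0, 3.0, 5.0], 20.0)
```

By construction the shares exhaust the joint payoff, so the imputation is group rational.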

Invoking Theorem 5.3, if \( {\theta}_t^{\sigma_t} \) occurs and \( {K}_t^{*}\in {X}_t^{*} \) is realized at stage t, a payment equaling

$$ \begin{aligned} B_t^{(\sigma_t)i}\big(K_t^{*}\big) &= {(1+r)}^{(t-1)}\Bigg\{ \xi^{(\sigma_t)i}\big(t,K_t^{*}\big)\\ &\quad -\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}}\, \xi^{(\sigma_{t+1})i}\Big[t+1,\,K_t^{*}+\sum_{h=1}^n \psi_t^{(\sigma_t)h*}\big(K_t^{*}\big)-\delta K_t^{*}+\vartheta_t^y\Big] \Bigg\}\\ &= \frac{\big[A_t^{(\sigma_t)i}K_t^{*}+C_t^{(\sigma_t)i}\big]}{{\displaystyle \sum_{j=1}^n}\big[A_t^{(\sigma_t)j}K_t^{*}+C_t^{(\sigma_t)j}\big]}\,\big[A_t^{(\sigma_t)}K_t^{*}+C_t^{(\sigma_t)}\big]\\ &\quad -\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1}=1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}}\, \frac{\big[A_{t+1}^{(\sigma_{t+1})i}K_{t+1}\big(\sigma_{t+1},\vartheta_t^y\big)+C_{t+1}^{(\sigma_{t+1})i}\big]}{{\displaystyle \sum_{j=1}^n}\big[A_{t+1}^{(\sigma_{t+1})j}K_{t+1}\big(\sigma_{t+1},\vartheta_t^y\big)+C_{t+1}^{(\sigma_{t+1})j}\big]}\\ &\qquad \times \Big[A_{t+1}^{(\sigma_{t+1})}K_{t+1}\big(\sigma_{t+1},\vartheta_t^y\big)+C_{t+1}^{(\sigma_{t+1})}\Big]{(1+r)}^{-1}, \end{aligned} $$
(6.15)

where \( {K}_{t+1}\left({\sigma}_{t+1},{\vartheta}_t^y\right)={K}_t^{*}+{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\sigma_{t+1}=1}^4{\lambda}_{t+1}^{\sigma_{t+1}}}\frac{A_{t+1}^{\left({\sigma}_{t+1}\right)}{\left(1+r\right)}^{-1}}{2{c}_t^{\left({\sigma}_t\right)j}}-\delta {K}_t^{*}+{\vartheta}_t^y, \)

given to agent i at stage \( t\in \left\{1,2,3\right\} \) if \( {\theta}_t^{\sigma_t} \) occurs would lead to the realization of the imputation (6.14).

A subgame consistent solution and the corresponding payment schemes can be obtained using Propositions 6.1 and 6.2 and conditions (6.12, 6.13, 6.14 and 6.15).

Finally, since all agents are adopting the cooperative strategies, the payoff that agent i will directly receive at stage t is

$$ {\alpha}_t^{\left({\sigma}_t\right)i}{K}_t^{*}-\frac{1}{4{c}_t^{\left({\sigma}_t\right)i}}{\left({\displaystyle \sum_{\sigma_{t+1}=1}^4{\lambda}_{t+1}^{\sigma_{t+1}}{A}_{t+1}^{\left({\sigma}_{t+1}\right)}{\left(1+r\right)}^{-1}}\right)}^2, $$
(6.16)

if \( {\theta}_t^{\sigma_t} \) occurs at stage t.

However, according to the agreed-upon imputation, agent i is supposed to receive \( {\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right) \) in (6.14). Therefore a transfer payment (which can be positive or negative) equaling

$$ {\pi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)={\xi}^{\left({\sigma}_t\right)i}\left(t,{K}_t^{*}\right)-{\alpha}_t^{\left({\sigma}_t\right)i}{K}_t^{*}+\frac{1}{4{c}_t^{\left({\sigma}_t\right)i}}{\left({\displaystyle \sum_{\sigma_{t+1}=1}^4{\lambda}_{t+1}^{\sigma_{t+1}}{A}_{t+1}^{\left({\sigma}_{t+1}\right)}{\left(1+r\right)}^{-1}}\right)}^2 $$
(6.17)

will be given to agent \( i\in N \) at stage t.
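The transfer payment (6.17) is the gap between what agent i is owed under the imputation (6.14) and what it earns directly under cooperation per (6.16). A Python sketch with hypothetical values (the function name and numbers are our own):

```python
def transfer_payment(xi_i, alpha_i, K, c_i, expected_A_next, r):
    """pi^{(sigma_t)i} from (6.17): agent i's imputation xi_i minus its direct
    cooperative payoff alpha_i*K - (E[A_{t+1}](1+r)^{-1})^2 / (4 c_i).
    The result may be positive (agent i receives) or negative (agent i pays)."""
    direct = alpha_i * K - (expected_A_next / (1.0 + r)) ** 2 / (4.0 * c_i)
    return xi_i - direct

# Hypothetical figures: imputation 5, alpha = 1, K = 10, c = 0.5, E[A] = 2.1, r = 5%.
tp = transfer_payment(5.0, 1.0, 10.0, 0.5, 2.1, 0.05)
```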

7 Appendices

Appendix A. Proof of Theorem 1.1

Invoking (1.11), one can obtain

$$ \begin{aligned} \xi^i\big(\tau,K_{\tau}^{*}\big) &= E\Bigg\{ \int_{\tau}^{T} B_i\big(s,K^{*}(s)\big)e^{-rs}\,ds+q_i\big[K^{*}(T)\big]e^{-rT} \;\Bigg|\; K^{*}(\tau)=K_{\tau}^{*} \Bigg\}\\ &= E\Bigg\{ \int_{\tau}^{\tau +\Delta t} B_i\big(s,K^{*}(s)\big)e^{-rs}\,ds\\ &\qquad +\xi^{(\tau +\Delta t)i}\big(\tau +\Delta t,\,K_{\tau}^{*}+\Delta K_{\tau}^{*}\big) \;\Bigg|\; K^{*}(\tau)=K_{\tau}^{*} \Bigg\}, \end{aligned} $$
(7.1)

for \( i\in N \) and \( \tau \in \left[0,T\right] \),

where

$$ \Delta {K}_{\tau}^{*}=\left[{\displaystyle \sum_{j=1}^n{\psi}_j^{*}\left(\tau, {K}_{\tau}^{*}\right)-\delta {K}_{\tau}^{*}}\right]\Delta t+\sigma {K}_{\tau}^{*}\Delta {z}_{\tau }+o\left(\Delta t\right),\kern0.5em \mathrm{and} $$

\( \Delta {z}_{\tau }=z\left(\tau +\Delta t\right)-z\left(\tau \right) \), and \( {E}_{\tau}\left[o\left(\Delta t\right)\right]/\Delta t\to 0 \) as \( \Delta t\to 0 \).

Using (7.1), one obtains

$$ \begin{aligned} &E\Bigg\{ \int_{\tau}^{\tau +\Delta t} B_i\big(s,K^{*}(s)\big)e^{-rs}\,ds \;\Bigg|\; K^{*}(\tau)=K_{\tau}^{*} \Bigg\}\\ &\quad = E\Big\{ \xi^i\big(\tau,K_{\tau}^{*}\big)-\xi^{(\tau +\Delta t)i}\big(\tau +\Delta t,\,K_{\tau}^{*}+\Delta K_{\tau}^{*}\big) \;\Big|\; K^{*}(\tau)=K_{\tau}^{*} \Big\},\\ &\qquad \text{for all } \tau \in [0,T] \text{ and } i\in N. \end{aligned} $$
(7.2)

If the imputations \( {\xi}^i\left(\tau, {K}_{\tau}^{*}\right) \) are continuous and differentiable, then as \( \Delta t\to 0 \) condition (7.2) can be expressed as:

$$ \begin{aligned} &E\Big\{ B_i\big(\tau,K_{\tau}^{*}\big)e^{-r\tau}\Delta t+o(\Delta t) \Big\} = E\Bigg\{ -\xi_{\tau}^i\big(\tau,K_{\tau}^{*}\big)\Delta t\\ &\quad -\xi_{K_{\tau}}^i\big(\tau,K_{\tau}^{*}\big)\Bigg[\sum_{j=1}^n \psi_j^{*}\big(\tau,K_{\tau}^{*}\big)-\delta K_{\tau}^{*}\Bigg]\Delta t\\ &\quad -\xi_{K_{\tau}}^i\big(\tau,K_{\tau}^{*}\big)\sigma K_{\tau}^{*}\Delta z_{\tau }-\frac{1}{2}\xi_{K_{\tau }{K}_{\tau}}^i\big(\tau,K_{\tau}^{*}\big)\sigma^2\big(K_{\tau}^{*}\big)^2\Delta t+o(\Delta t) \Bigg\},\\ &\qquad \text{for } i\in N. \end{aligned} $$
(7.3)

Dividing (7.3) throughout by Δt, letting \( \Delta t\to 0 \), and taking expectations yields (1.12). Thus the payoff distribution procedure \( {B}_i\left(s,{K}_s^{*}\right) \) in (1.12) would lead to the realization of \( \xi \left(s,{K}_s^{*}\right) \) in (1.10). ■

Appendix B. Proof of Proposition 2.1

Using the value functions in Proposition 2.1 and the optimal strategies in (2.5), the Hamilton-Jacobi-Bellman equations (2.4) reduce to:

$$ \begin{aligned} &r\big[A_i(t)K+C_i(t)\big]-\big[\dot{A}_i(t)K+\dot{C}_i(t)\big]={\alpha}_iK-\frac{\big[A_i(t)\big]^2}{4c_i}+A_i(t)\Bigg[\sum_{j=1}^n \frac{A_j(t)}{2c_j}-\delta K\Bigg],\\ &A_i(T)K+C_i(T)={q}_1^iK+{q}_2^i,\quad \text{for } i\in N; \end{aligned} $$
(7.4)

For (7.4) to hold it is required that

$$ {\dot{A}}_i(t)=\left(r+\delta \right){A}_i(t)-{\alpha}_i,\quad {A}_i(T)={q}_1^i;\quad \mathrm{and} $$
(7.5)
$$ {\dot{C}}_i(t)=r{C}_i(t)+\frac{{\left[{A}_i(t)\right]}^2}{4{c}_i}-{\displaystyle \sum_{j=1}^n}\frac{A_i(t){A}_j(t)}{2{c}_j},\quad {C}_i(T)={q}_2^i,\quad \mathrm{for}\;i\in N. $$
(7.6)

The differential equation system (7.5 and 7.6) is block-recursive, with \( {A}_i(t) \) in (7.5) being independent of \( {A}_j(t) \) for \( j\ne i \) and of all \( {C}_j(t) \) for \( j\in N \).

Solving each of the n independent constant-coefficient linear differential equations in (7.5) yields:

$$ {A}_i(t)=\left({q}_1^i-\frac{\alpha_i}{r+\delta}\right){e}^{-\left(r+\delta \right)\left(T-t\right)}+\frac{\alpha_i}{r+\delta },\quad \mathrm{for}\;i\in N. $$
(7.7)
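As a numerical sanity check on the closed form (7.7), one can integrate (7.5) backward from the terminal condition and compare. The parameter values below are purely illustrative and are not taken from the chapter:

```python
# Numerical check of the closed-form solution (7.7) of the linear ODE (7.5):
#   A'(t) = (r + delta) A(t) - alpha,  A(T) = q1.
# All parameter values are illustrative, not taken from the chapter.
import math

r, delta, alpha, q1, T = 0.05, 0.1, 2.0, 1.0, 10.0

def A_closed(t):
    """Closed form (7.7): exponential approach to the level alpha/(r+delta)."""
    ss = alpha / (r + delta)
    return (q1 - ss) * math.exp(-(r + delta) * (T - t)) + ss

# Integrate (7.5) backward from t = T with small explicit Euler steps.
n = 100000
dt = T / n
A = q1
for _ in range(n):
    A -= dt * ((r + delta) * A - alpha)     # step from t to t - dt

print(A, A_closed(0.0))                     # the two values of A(0) agree
```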

Substituting the explicit solution of \( {A}_i(t) \) from (7.7) into (7.6) yields:

$$ \begin{array}{l}{\dot{C}}_i(t)=r{C}_i(t)+\frac{1}{4{c}_i}{\left[\left({q}_1^i-\frac{\alpha_i}{r+\delta}\right){e}^{-\left(r+\delta \right)\left(T-t\right)}+\frac{\alpha_i}{r+\delta }\right]}^2\\ {}-{\displaystyle \sum_{j=1}^n}\frac{1}{2{c}_j}\left[\left({q}_1^i-\frac{\alpha_i}{r+\delta}\right){e}^{-\left(r+\delta \right)\left(T-t\right)}+\frac{\alpha_i}{r+\delta}\right]\left[\left({q}_1^j-\frac{\alpha_j}{r+\delta}\right){e}^{-\left(r+\delta \right)\left(T-t\right)}+\frac{\alpha_j}{r+\delta}\right],\\ {}{C}_i(T)={q}_2^i,\quad \mathrm{for}\;i\in N,\end{array} $$
(7.8)

which is a system of n independent linear differential equations in \( {C}_i(t) \). Note that the coefficients are integrable functions; hence the solution of \( {C}_i(t) \) can be readily obtained by the variation-of-constants formula. Q.E.D.
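For completeness, write the right-hand side of (7.8) as \( r{C}_i(t)+{f}_i(t) \), where \( {f}_i(t) \) collects all the terms that do not involve \( {C}_i(t) \). The variation-of-constants formula then gives

$$ {C}_i(t)={q}_2^i{e}^{-r\left(T-t\right)}-{\displaystyle {\int}_t^T}{e}^{-r\left(s-t\right)}{f}_i(s)\,ds,\quad \mathrm{for}\;i\in N, $$

which satisfies \( {\dot{C}}_i(t)=r{C}_i(t)+{f}_i(t) \) and the terminal condition \( {C}_i(T)={q}_2^i \).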

Appendix C. Proof of Proposition 3.1

Invoking the fact that firms of the same type are identical, we have \( {\phi}_i^{(1)}\left(t,K\right)={\phi}_h^{(1)}\left(t,K\right) \) and \( {V}^{(1)i}\left(t,K\right)={V}^{(1)h}\left(t,K\right) \) for \( i,h\in {N}_1 \); and similarly \( {\phi}_j^{(2)}\left(t,K\right)={\phi}_{\ell}^{(2)}\left(t,K\right) \) and \( {V}^{(2)j}\left(t,K\right)={V}^{(2)\ell}\left(t,K\right) \) for \( j,\ell \in {N}_2 \). Using the value functions in Proposition 3.1 and the optimal strategies in (3.6 and 3.7), one can express Hamilton-Jacobi-Bellman equations (3.4 and 3.5) as:

$$ \begin{array}{l}r\left[{A}_1(t){K}^2+{B}_1(t)K+{C}_1(t)\right]-\left[{\dot{A}}_1(t){K}^2+{\dot{B}}_1(t)K+{\dot{C}}_1(t)\right]-{A}_1(t){\sigma}^2{K}^2\\ {}=\Big\{{\alpha}_1K-{b}_1{K}^2-{\rho}_1\left[2{A}_1(t)K+{B}_1(t)-{\rho}_1\right]-\left({c}_1/2\right){\left[2{A}_1(t)K+{B}_1(t)-{\rho}_1\right]}^2\Big\}\\ {}+\left[2{A}_1(t)K+{B}_1(t)\right]\Big[{n}_1\left[2{A}_1(t)K+{B}_1(t)-{\rho}_1\right]+{n}_2\left[2{A}_2(t)K+{B}_2(t)-{\rho}_2\right]-\delta K\Big],\\ {}{A}_1(T){K}^2+{B}_1(T)K+{C}_1(T)={q}_1{K}^2+{q}_2K+{q}_3;\\ {}r\left[{A}_2(t){K}^2+{B}_2(t)K+{C}_2(t)\right]-\left[{\dot{A}}_2(t){K}^2+{\dot{B}}_2(t)K+{\dot{C}}_2(t)\right]-{A}_2(t){\sigma}^2{K}^2\\ {}=\Big\{{\alpha}_2K-{b}_2{K}^2-{\rho}_2\left[2{A}_2(t)K+{B}_2(t)-{\rho}_2\right]-\left({c}_2/2\right){\left[2{A}_2(t)K+{B}_2(t)-{\rho}_2\right]}^2\Big\}\\ {}+\left[2{A}_2(t)K+{B}_2(t)\right]\Big[{n}_1\left[2{A}_1(t)K+{B}_1(t)-{\rho}_1\right]+{n}_2\left[2{A}_2(t)K+{B}_2(t)-{\rho}_2\right]-\delta K\Big],\\ {}{A}_2(T){K}^2+{B}_2(T)K+{C}_2(T)={q}_1{K}^2+{q}_2K+{q}_3.\end{array} $$
(7.9)

For system (7.9) to hold it is required that (i) the coefficients of \( {K}^2 \) and K on both sides agree, and (ii) the remaining terms on both sides are equal. These required conditions are given in (3.9, 3.10 and 3.11).

Hence Proposition 3.1 follows. Q.E.D.

Appendix D. Proof of Proposition 6.1

Consider first the last stage, that is stage 3, when \( {\theta}_3^{\sigma_3} \) occurs. Invoking that \( {V}^{\left({\sigma}_3\right)i}\left(3,K\right)=\left[{A}_3^{\left({\sigma}_3\right)i}K+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-2} \) and \( {V}^{\left({\sigma}_4\right)i}\left(4,{K}_4\right)=\left({q}^i{K}_4+{m}^i\right){\left(1+r\right)}^{-3} \) from Proposition 6.1, the condition governing \( t=3 \) in equation (6.3) becomes

$$ \begin{array}{l}\left[{A}_3^{\left({\sigma}_3\right)i}K+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-2}=\underset{I_3^i}{ \max}\left\{\left[{\alpha}_3^{\left({\sigma}_3\right)i}K-{c}_3^{\left({\sigma}_3\right)i}{\left({I}_3^i\right)}^2\right]{\left(1+r\right)}^{-2}\right.\\ {}\left.+{\displaystyle \sum_{y=1}^3{\gamma}_3^y}{\displaystyle \sum_{\sigma_4=1}^1{\lambda}_4^{\sigma_4}}\left[{q}^i\left(K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_3^{\left({\sigma}_3\right)j*}(K)}+{I}_3^i-\delta K+{\vartheta}_3^y\right)+{m}^i\right]{\left(1+r\right)}^{-3}\right\},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.10)

Performing the indicated maximization in (7.10) yields the game equilibrium strategies in stage 3 as:

$$ {\phi}_3^{\left({\sigma}_3\right)i*}(K)=\frac{q^i{\left(1+r\right)}^{-1}}{2{c}_3^{\left({\sigma}_3\right)i}},\quad \mathrm{for}\;i\in N. $$
(7.11)
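To see this, note that since \( {\displaystyle \sum_{y=1}^3{\gamma}_3^y}=1 \) and \( {\displaystyle \sum_{\sigma_4=1}^1{\lambda}_4^{\sigma_4}}=1 \), the maximand in (7.10) is a concave quadratic in \( {I}_3^i \) with first-order condition

$$ -2{c}_3^{\left({\sigma}_3\right)i}{I}_3^i{\left(1+r\right)}^{-2}+{q}^i{\left(1+r\right)}^{-3}=0, $$

which yields (7.11). The same first-order argument delivers the stage-2 and stage-1 strategies (7.15) and (7.19).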

Substituting (7.11) into (7.10) yields:

$$ \begin{array}{l}{A}_3^{\left({\sigma}_3\right)i}K+{C}_3^{\left({\sigma}_3\right)i}={\alpha}_3^{\left({\sigma}_3\right)i}K-\frac{{\left({q}^i\right)}^2{\left(1+r\right)}^{-2}}{4{c}_3^{\left({\sigma}_3\right)i}}\\ {}+{\displaystyle \sum_{y=1}^3{\gamma}_3^y}\left[{q}^i\left(K+{\displaystyle \sum_{j=1}^n}\frac{q^j{\left(1+r\right)}^{-1}}{2{c}_3^{\left({\sigma}_3\right)j}}-\delta K+{\vartheta}_3^y\right)+{m}^i\right]{\left(1+r\right)}^{-1},\end{array} $$
(7.12)

for \( i\in N \).

Note that both sides of equation (7.12) are linear in K. For (7.12) to hold it is required that:

$$ \begin{array}{l}{A}_3^{\left({\sigma}_3\right)i}={\alpha}_3^{\left({\sigma}_3\right)i}+{q}^i\left(1-\delta \right){\left(1+r\right)}^{-1},\quad \mathrm{and}\\ {}{C}_3^{\left({\sigma}_3\right)i}=-\frac{{\left({q}^i\right)}^2{\left(1+r\right)}^{-2}}{4{c}_3^{\left({\sigma}_3\right)i}}+\left[{q}^i{\displaystyle \sum_{j=1}^n}\frac{q^j{\left(1+r\right)}^{-1}}{2{c}_3^{\left({\sigma}_3\right)j}}+{q}^i{\varpi}_3+{m}^i\right]{\left(1+r\right)}^{-1},\end{array} $$
(7.13)

for \( i\in N \).

Now we proceed to stage 2. Using \( {V}^{\left({\sigma}_3\right)i}\left(3,K\right)=\left[{A}_3^{\left({\sigma}_3\right)i}K+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-2} \) with \( {A}_3^{\left({\sigma}_3\right)i} \) and \( {C}_3^{\left({\sigma}_3\right)i} \) given in (7.13), the condition governing \( t=2 \) in equation (6.3) becomes

$$ \begin{array}{l}\left[{A}_2^{\left({\sigma}_2\right)i}K+{C}_2^{\left({\sigma}_2\right)i}\right]{\left(1+r\right)}^{-1}=\underset{I_2^i}{ \max}\left\{\left[{\alpha}_2^{\left({\sigma}_2\right)i}K-{c}_2^{\left({\sigma}_2\right)i}{\left({I}_2^i\right)}^2\right]{\left(1+r\right)}^{-1}\right.\\ {}\left.+{\displaystyle \sum_{y=1}^3{\gamma}_2^y}{\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}}\left[{A}_3^{\left({\sigma}_3\right)i}\left(K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_2^{\left({\sigma}_2\right)j*}(K)}+{I}_2^i-\delta K+{\vartheta}_2^y\right)+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-2}\right\},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.14)

Performing the indicated maximization in (7.14) yields the game equilibrium strategies in stage 2 as:

$$ {\phi}_2^{\left({\sigma}_2\right)i*}(K)={\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}}\frac{A_3^{\left({\sigma}_3\right)i}{\left(1+r\right)}^{-1}}{2{c}_2^{\left({\sigma}_2\right)i}},\quad \mathrm{for}\;i\in N. $$
(7.15)

Substituting (7.15) into (7.14) yields:

$$ \begin{array}{l}{A}_2^{\left({\sigma}_2\right)i}K+{C}_2^{\left({\sigma}_2\right)i}={\alpha}_2^{\left({\sigma}_2\right)i}K-\frac{1}{4{c}_2^{\left({\sigma}_2\right)i}}{\left({\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}{A}_3^{\left({\sigma}_3\right)i}{\left(1+r\right)}^{-1}}\right)}^2\\ {}+{\displaystyle \sum_{y=1}^3{\gamma}_2^y}{\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}}\left[{A}_3^{\left({\sigma}_3\right)i}\left(K+{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\rho_3=1}^4{\lambda}_3^{\rho_3}}\frac{A_3^{\left({\rho}_3\right)j}{\left(1+r\right)}^{-1}}{2{c}_2^{\left({\sigma}_2\right)j}}-\delta K+{\vartheta}_2^y\right)+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-1},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.16)

Both sides of equation (7.16) are linear in K. For (7.16) to hold it is required that:

$$ \begin{array}{l}{A}_2^{\left({\sigma}_2\right)i}={\alpha}_2^{\left({\sigma}_2\right)i}+{\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}}{A}_3^{\left({\sigma}_3\right)i}\left(1-\delta \right){\left(1+r\right)}^{-1},\quad \mathrm{and}\\ {}{C}_2^{\left({\sigma}_2\right)i}=-\frac{1}{4{c}_2^{\left({\sigma}_2\right)i}}{\left({\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}{A}_3^{\left({\sigma}_3\right)i}{\left(1+r\right)}^{-1}}\right)}^2\\ {}+{\displaystyle \sum_{\sigma_3=1}^4{\lambda}_3^{\sigma_3}}\left[{A}_3^{\left({\sigma}_3\right)i}\left({\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\rho_3=1}^4{\lambda}_3^{\rho_3}}\frac{A_3^{\left({\rho}_3\right)j}{\left(1+r\right)}^{-1}}{2{c}_2^{\left({\sigma}_2\right)j}}+{\varpi}_2\right)+{C}_3^{\left({\sigma}_3\right)i}\right]{\left(1+r\right)}^{-1},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.17)

Now we proceed to stage 1. Using \( {V}^{\left({\sigma}_2\right)i}\left(2,K\right)=\left[{A}_2^{\left({\sigma}_2\right)i}K+{C}_2^{\left({\sigma}_2\right)i}\right]{\left(1+r\right)}^{-1} \) with \( {A}_2^{\left({\sigma}_2\right)i} \) and \( {C}_2^{\left({\sigma}_2\right)i} \) given in (7.17), the condition governing \( t=1 \) in equation (6.3) becomes

$$ \begin{array}{l}{A}_1^{\left({\sigma}_1\right)i}K+{C}_1^{\left({\sigma}_1\right)i}=\underset{I_1^i}{ \max}\left\{\left[{\alpha}_1^{\left({\sigma}_1\right)i}K-{c}_1^{\left({\sigma}_1\right)i}{\left({I}_1^i\right)}^2\right]\right.\\ {}\left.+{\displaystyle \sum_{y=1}^3{\gamma}_1^y}{\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}}\left[{A}_2^{\left({\sigma}_2\right)i}\left(K+{\displaystyle \sum_{\begin{array}{l}j=1\\ {}j\ne i\end{array}}^n{\phi}_1^{\left({\sigma}_1\right)j*}(K)}+{I}_1^i-\delta K+{\vartheta}_1^y\right)+{C}_2^{\left({\sigma}_2\right)i}\right]{\left(1+r\right)}^{-1}\right\},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.18)

Performing the indicated maximization in (7.18) yields the game equilibrium strategies in stage 1 as:

$$ {\phi}_1^{\left({\sigma}_1\right)i*}(K)={\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}}\frac{A_2^{\left({\sigma}_2\right)i}{\left(1+r\right)}^{-1}}{2{c}_1^{\left({\sigma}_1\right)i}},\quad \mathrm{for}\;i\in N. $$
(7.19)

Substituting (7.19) into (7.18) yields:

$$ \begin{array}{l}{A}_1^{\left({\sigma}_1\right)i}K+{C}_1^{\left({\sigma}_1\right)i}={\alpha}_1^{\left({\sigma}_1\right)i}K-\frac{1}{4{c}_1^{\left({\sigma}_1\right)i}}{\left({\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}{A}_2^{\left({\sigma}_2\right)i}{\left(1+r\right)}^{-1}}\right)}^2\\ {}+{\displaystyle \sum_{y=1}^3{\gamma}_1^y}{\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}}\left[{A}_2^{\left({\sigma}_2\right)i}\left(K+{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\rho_2=1}^4{\lambda}_2^{\rho_2}}\frac{A_2^{\left({\rho}_2\right)j}{\left(1+r\right)}^{-1}}{2{c}_1^{\left({\sigma}_1\right)j}}-\delta K+{\vartheta}_1^y\right)+{C}_2^{\left({\sigma}_2\right)i}\right]{\left(1+r\right)}^{-1},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.20)

Both sides of equation (7.20) are linear in K. For (7.20) to hold it is required that:

$$ \begin{array}{l}{A}_1^{\left({\sigma}_1\right)i}={\alpha}_1^{\left({\sigma}_1\right)i}+{\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}}{A}_2^{\left({\sigma}_2\right)i}\left(1-\delta \right){\left(1+r\right)}^{-1},\quad \mathrm{and}\\ {}{C}_1^{\left({\sigma}_1\right)i}=-\frac{1}{4{c}_1^{\left({\sigma}_1\right)i}}{\left({\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}{A}_2^{\left({\sigma}_2\right)i}{\left(1+r\right)}^{-1}}\right)}^2\\ {}+{\displaystyle \sum_{\sigma_2=1}^4{\lambda}_2^{\sigma_2}}\left[{A}_2^{\left({\sigma}_2\right)i}\left({\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{\rho_2=1}^4{\lambda}_2^{\rho_2}}\frac{A_2^{\left({\rho}_2\right)j}{\left(1+r\right)}^{-1}}{2{c}_1^{\left({\sigma}_1\right)j}}+{\varpi}_1\right)+{C}_2^{\left({\sigma}_2\right)i}\right]{\left(1+r\right)}^{-1},\\ {}\mathrm{for}\;i\in N.\end{array} $$
(7.21)

Hence Proposition 6.1 follows. Q.E.D.
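The backward recursion for the linear coefficients in (7.13), (7.17) and (7.21) can be sketched numerically. The sketch below computes \( {A}_t^{\left({\sigma}_t\right)i} \) for a single agent; all benefit coefficients, branch probabilities and other numerical values are hypothetical, not taken from the chapter:

```python
# Illustrative backward recursion for the linear coefficients of
# Proposition 6.1, following (7.13), (7.17) and (7.21):
#   A_t^{(sigma)} = alpha_t^{(sigma)} + E[A_{t+1}] (1 - delta) (1 + r)^{-1}.
# All numerical values below are hypothetical.

r, delta = 0.05, 0.1
qi = 1.0                                    # terminal valuation slope q^i

# alpha[t][sigma]: benefit coefficient of agent i in stage t under state sigma
alpha = {3: [1.0, 1.2, 0.8, 1.1],
         2: [0.9, 1.0, 1.3, 1.1],
         1: [1.0]}
# lam[t]: probabilities of the stage-t states (each list sums to one)
lam = {4: [1.0],
       3: [0.3, 0.3, 0.2, 0.2],
       2: [0.25, 0.25, 0.25, 0.25]}

def backward_A():
    """Stage-4 'slope' is q^i; fold expected slopes back through stages 3..1."""
    A = {4: [qi]}
    for t in (3, 2, 1):
        EA = sum(p * a for p, a in zip(lam[t + 1], A[t + 1]))
        A[t] = [a + EA * (1 - delta) / (1 + r) for a in alpha[t]]
    return A

A = backward_A()
print(A[1][0])                              # stage-1 slope for the single state
```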

8 Chapter Notes

Though cooperative provision of public goods is the key to a socially optimal solution, dynamic cooperation can hardly offer a credible long-term solution unless the agreed-upon optimality principle can be maintained from the beginning to the end. The notion of public goods, which are non-rival and non-excludable, was first introduced by Samuelson (1954). Problems concerning private provision of public goods are studied in Bergstrom et al. (1986). Static analyses of the provision of public goods are found in Chamberlin (1974), McGuire (1974) and Gradstein and Nitzan (1989). In many contexts, the provision and use of public goods are carried out in an intertemporal framework. Fershtman and Nitzan (1991) and Wirl (1996) considered differential games of public goods provision with symmetric agents. Wang and Ewald (2010) introduced stochastic elements into these games. Dockner et al. (2000) presented a game model with two asymmetric agents in which knowledge is a public good. These dynamic game studies focus on the noncooperative equilibria and the collusive solution that maximizes the joint payoffs of all agents.

This Chapter provides applications of cooperative provision of public goods with a subgame consistent cooperative scheme. The analysis can be readily extended into a multiple public capital goods paradigm. In addition, more complicated stochastic disturbances in the public goods dynamics, like \( \sigma \left[{I}_1(s),{I}_2(s),\cdots, {I}_n(s),K(s)\right] \), can be adopted.

9 Problems

  1.

    Consider a 4-stage economic game with 3 asymmetric agents in which the agents receive benefits from an existing public capital stock \( {K}_t \). The accumulation dynamics of the public capital stock is governed by the stochastic difference equation:

    $$ {K}_{t+1}={K}_t+{\displaystyle \sum_{j=1}^3{I}_t^j}-0.1{K}_t+{\vartheta}_t,\quad {K}_1=20,\quad \mathrm{for}\;t\in \left\{1,2,3,4\right\}, $$

    where ϑ t is a discrete random variable with range {1, 2, 3} and corresponding probabilities {0.7, 0.2, 0.1}.

    At stage 1, it is known that \( {\theta}_1^1 \) has happened, and the payoffs of agents 1, 2 and 3 are respectively:

    $$ 5{K}_1-2{\left({I}_1^1\right)}^2,\quad 3{K}_1-{\left({I}_1^2\right)}^2\quad \mathrm{and}\quad 6{K}_1-3{\left({I}_1^3\right)}^2. $$

    At stage \( t\in \left\{2,3,4\right\} \), the payoffs of agents 1, 2 and 3 are respectively

    $$ 5{K}_t-2{\left({I}_t^1\right)}^2,\quad 3{K}_t-{\left({I}_t^2\right)}^2\quad \mathrm{and}\quad 6{K}_t-3{\left({I}_t^3\right)}^2 $$

    if \( {\theta}_t^1 \) occurs; and the payoffs of agents 1, 2 and 3 are respectively

    $$ 6{K}_t-2{\left({I}_t^1\right)}^2,\quad 3{K}_t-2{\left({I}_t^2\right)}^2\quad \mathrm{and}\quad 4{K}_t-2{\left({I}_t^3\right)}^2 $$

    if \( {\theta}_t^2 \) occurs.

    The probability that \( {\theta}_t^1 \) would occur is 0.6 and the probability that \( {\theta}_t^2 \) would occur is 0.4.

    In stage 5, the terminal valuations of agents 1, 2 and 3 are respectively:

    $$ \left(2{K}_5+10\right){\left(1+r\right)}^{-4},\quad \left({K}_5+15\right){\left(1+r\right)}^{-4}\quad \mathrm{and}\quad \left(3{K}_5+5\right){\left(1+r\right)}^{-4}. $$

    Characterize the feedback Nash equilibrium.

  2.

    Obtain a group optimal solution that maximizes the joint expected profit.

  3.

    Consider the case when the agents agree to share the cooperative gain proportional to their expected non-cooperative payoffs in providing the public good jointly. Derive a subgame consistent solution.