
1 Introduction

The provision of public goods constitutes a classic case of market failure. Examples of public goods include a clean environment, national security, scientific knowledge, openly accessible public capital, technical know-how and public information. The non-exclusiveness and positive externalities of public goods constitute major factors for markets to malfunction in their efficient provision. Problems concerning the private provision of public goods are studied in Bergstrom et al. (1986). Static analyses of the provision of public goods can be found in Chamberlin (1974), McGuire (1974) and Gradstein and Nitzan (1989). Fershtman and Nitzan (1991) and Wirl (1996) studied differential games of voluntary public goods provision by symmetric agents. Wang and Ewald (2010) introduced stochastic elements into the dynamics of public goods accumulation in these games. Dockner et al. (2000) presented a game model with two asymmetric agents in which knowledge is a public good. These studies on dynamic game analysis focus on the noncooperative equilibria and the collusive solution that maximizes the joint payoffs of all agents.

Cooperation suggests the possibility of socially optimal solutions to the public goods provision problem. However, one may find it hard to be convinced that dynamic cooperation can offer a long-term solution unless there is a guarantee that participants will always be better off throughout the entire cooperation duration and that the agreed-upon optimality principle is maintained from beginning to end. To enable a cooperation scheme to be sustainable throughout the agreement period, a stringent condition is needed: subgame consistency. This condition requires that the optimality principle agreed upon at the outset must remain effective in any subgame starting at a later time with a state brought about by prior optimal behavior. Hence the players do not have incentives to deviate from the cooperative scheme throughout the cooperative duration. Moreover, a subgame consistent solution must also satisfy individual rationality and group optimality. Individual rationality ensures that the payoff allocated to an agent under cooperation will be no less than his noncooperative payoff. Group optimality ensures that all potential gains from cooperation are exhausted. The notion of subgame consistency in cooperative stochastic differential games originated in Yeung and Petrosyan (2004).

Yeung and Petrosyan (2013a) analyzed subgame consistent cooperative provision of public goods with transferable payoffs in a stochastic differential game framework in which the accumulation dynamics of the public capital is stochastic. Another, often more common, uncertainty facing decision makers is uncertain changes in the payoff structures. This kind of uncertainty arises because changes in preferences, technologies, demographic structures, institutional arrangements and political and legal frameworks are not known with certainty. Yeung (2001 and 2003) introduced the class of randomly furcating stochastic differential games, which allows the future payoff structures of the game to furcate (branch out) randomly in addition to the game’s stochastic dynamics. Yeung and Petrosyan (2013b) examined cooperative stochastic dynamic games with randomly furcating payoffs and presented a theorem characterizing their subgame consistent solutions. A continuous-time analog can be found in Petrosyan and Yeung (2007). The presence of random elements in future payoff structures and stock dynamics reflects an important element of reality in cooperative provision of public goods.

This chapter considers subgame consistent cooperative solutions for public goods provision by asymmetric agents in a discrete-time stochastic dynamic game framework with randomly furcating future payoff structures. In addition, agents’ payoffs are transferable. The noncooperative game outcome is characterized and dynamic cooperation is considered. Group optimal strategies are derived and subgame consistent solutions are characterized. A “payoff distribution procedure” leading to subgame-consistent solutions is derived. An illustration is presented to demonstrate the explicit derivation of a subgame consistent solution for a public goods provision game. This is the first time that subgame consistent solutions for cooperative provision of public goods with stochastic dynamics and uncertain future payoffs have been studied.

The chapter is organized as follows. The analytical framework and the non-cooperative outcome of public goods provision are provided in Sect. 2. Details of a Pareto optimal cooperative scheme are presented in Sect. 3. A payment mechanism ensuring subgame consistency is derived in Sect. 4 and an illustration is given in Sect. 5. Section 6 concludes the chapter.

2 Analytical Framework and Non-cooperative Outcome

Consider the case of the provision of a public good in which a group of n agents carry out a project by making contributions to the building up of the stock of a productive public good. The game involves T stages of operation and a terminal stage in which each agent receives a terminal payment. We use K_t to denote the level of the productive stock and \(I^{i}_{t}\) the public capital investment by agent i at stage t∈{1,2,…,T}. The stock accumulation dynamics is governed by the stochastic difference equation:

$$ K_{t+1} = K_t + \sum _{j=1}^n I_t^j - \delta K_t + \vartheta_t,\quad K_1 = K^0, $$
(1)

for t∈{1,2,…,T}, where δ is the depreciation rate and ϑ t is a sequence of statistically independent random variables.
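
The transition (1) is straightforward to simulate, which is useful for checking later results numerically. Below is a minimal Python sketch; the horizon, investment profiles and shock distribution are purely illustrative assumptions, not values taken from the model:

```python
import numpy as np

# Minimal simulation of the accumulation dynamics (1):
# K_{t+1} = K_t + sum_j I_t^j - delta*K_t + vartheta_t, K_1 = K^0.
rng = np.random.default_rng(0)

def simulate_stock(K0, investment_profiles, delta, shocks):
    """Roll (1) forward given each stage's investment profile and shock."""
    K = [K0]
    for I_t, shock_t in zip(investment_profiles, shocks):
        K.append(K[-1] + sum(I_t) - delta * K[-1] + shock_t)
    return K

T, n, delta = 4, 3, 0.1
profiles = [[0.5] * n for _ in range(T)]       # I_t^j = 0.5 for every agent
shocks = rng.choice([0.0, 0.2, 0.4], size=T)   # i.i.d. vartheta_t draws
print(simulate_stock(1.0, profiles, delta, shocks))
```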

The payoff of agent i at stage t is affected by a random variable θ t . In particular, the payoff to agent i at stage t is

$$ R^i (K_t, \theta_t) - C^i \bigl(I^i_t, \theta_t \bigr),\quad i \in N = \{1,2,\ldots ,n\}, $$
(2)

where R i(K t ,θ t ) is the revenue/payoff to agent i, \(C^{i} (I_{t}^{i},\theta_{t}) \) is the cost of investing \(I^{i}_{t} \in X^{i}\), and θ t for t∈{1,2,…,T} are independent discrete random variables with range \(\{ \theta_{t}^{1}, \theta_{t}^{2}, \ldots, \theta_{t}^{\eta_{t}} \}\) and corresponding probabilities \(\{ \lambda_{t}^{1}, \lambda_{t}^{2}, \ldots, \lambda_{t}^{\eta_{t}} \}\), where η t is a positive integer for t∈{1,2,…,T}. In stage 1, it is known that θ 1 equals \(\theta_{1}^{1}\) with probability \(\lambda_{1}^{1}=1\).

Marginal revenue product of the productive stock is positive, that is ∂R i(K t ,θ)/∂K t >0, before a saturation level \(\bar{K}\) has been reached; and marginal cost of investment is positive and non-decreasing, that is \(\partial C^{i} (I_{t}^{i}, \theta_{t}) / \partial I_{t}^{i} > 0\) and \(\partial^{2} C^{i} (I_{t}^{i}, \theta_{t}) / {\partial I_{t}^{i}}^{2} > 0\).

The objective of agent i∈N is to maximize its expected net revenue over the planning horizon, that is

$$\begin{aligned} &E_{\theta_1, \theta_2, \ldots, \theta_T ; \vartheta_1, \vartheta_2, \ldots, \vartheta_T} \Biggl\{ \sum_{s=1}^T \bigl[ R^i( K_s, \theta_s) - C^i \bigl(I_s^i, \theta_s \bigr) \bigr] (1+r)^{-(s-1)} \\ &\quad{} + q^i (K_{T+1}) (1+r)^{-T} \Biggr\} \end{aligned}$$
(3)

subject to the stock accumulation dynamics (1), where \(E_{\theta_{1}, \theta_{2}, \ldots, \theta_{T} ; \vartheta_{1}, \vartheta_{2}, \ldots, \vartheta_{T}} \) is the expectation operation with respect to the random variables θ 1,θ 2,…,θ T and ϑ 1,ϑ 2,…,ϑ T ; r is the discount rate, and q i(K T+1)≥0 is an amount conditional on the productive stock that agent i would receive at stage T+1. Since there is no uncertainty in stage T+1, we use \(\theta_{T+1}^{1}\) to denote the condition in stage T+1 with probability \(\lambda^{1}_{T+1} = 1\).

Acting in their individual interests, the agents are involved in a stochastic dynamic game with randomly furcating payoffs (see Yeung and Petrosyan 2013b). Let \(I_{t}^{ (\sigma_{t} ) i} \) denote the strategy of agent i at stage t given that the realized random variable affecting the payoff function is \(\theta_{t}^{\sigma_{t}} \). In a stochastic dynamic game framework, a strategy space with a state-dependent property has to be considered. In particular, a pre-specified class Γ i of mappings \(\phi_{t}^{ ( \sigma_{t} ) i} (\cdot): K \rightarrow I_{t}^{ ( \sigma_{t} ) i}\) with the property \(I_{t}^{ (\sigma_{t} ) i} = \phi_{t}^{ ( \sigma_{t} ) i } (K) \in\varGamma^{i} \) is the strategy space of agent i and each of its elements is a permissible strategy.

To solve the game, we follow Yeung and Petrosyan (2013b) and begin with the subgame starting at the last operating stage, that is stage T. If \(\theta_{T}^{ \sigma_{T} } \in\{ \theta_{T}^{1}, \theta_{T}^{2}, \ldots , \theta_{T}^{\eta_{T}} \} \) has occurred at stage T and the public capital stock is K T =K, the subgame becomes:

$$\begin{aligned} &\max_{I_T^i} E_{\vartheta_T} \bigl\{ \bigl[ R^i \bigl( K_T, \theta_T^{\sigma_T}\bigr) - C^i \bigl(I_T^i, \theta_T^{\sigma_T} \bigr) \bigr] (1+r)^{-(T-1)} \\ & \quad{} + q^i (K_{T+1}) (1+r)^{-T} \bigr\} \quad\mbox{for } i \in N \end{aligned}$$
(4)
$$\begin{aligned} & \mbox{subject to}\quad K_{T+1} = K_T + \sum _{j=1 }^n I_T^j - \delta K_T + \vartheta_T,\quad K_T = K. \end{aligned}$$
(5)

The subgame (4)–(5) is a stochastic dynamic game. Invoking the standard techniques for solving stochastic dynamic games, a feedback Nash equilibrium solution can be characterized as follows:

Lemma 1

A set of strategies

$$\phi_T^{ (\sigma_T )^*} (K) = \bigl\{ \phi_T^{ ( \sigma_T ) 1^*} (K), \phi_T^{ ( \sigma_T ) 2^*} (K), \ldots,\phi_T^{ ( \sigma_T ) n^*} (K) \bigr\} $$

provides a Nash equilibrium solution to the subgame (4)–(5), if there exist functions \(V^{ ( \sigma_{T} ) i} (T,K)\) and \(V^{ ( \sigma_{T+1} ) i} (T+1,K)\), for i∈N, such that the following conditions are satisfied:

$$\begin{aligned} & V^{ ( \sigma_T ) i} (T, K) = \max_{I_T^i} E_{\vartheta_T} \Biggl\{ \bigl[ R^i \bigl( K_T, \theta_T^{\sigma_T}\bigr) - C^i \bigl(I_T^i, \theta_T^{\sigma _T}\bigr) \bigr] (1+r)^{-(T-1)} \\ & \hphantom{V^{ ( \sigma_T ) i} (T, K) =} {} + V^{ (\sigma_{T+1} ) i} \Biggl[ T+1, K + \sum_{\substack{j = 1 \\ j \ne i}}^n \phi_T^{ (\sigma_T ) j^*} (K) + I_T^i - \delta K + \vartheta_T \Biggr] \Biggr\} , \\ & V^{ ( \sigma_{T+1} ) i} (T+1, K) = q^i (K) (1+r)^{-T} \quad\textit{for } i \in N. \end{aligned}$$
(6)

Proof

The system of equations in (6) satisfies the standard stochastic dynamic programming property and the Nash property for each agent iN. Hence a Nash equilibrium of the subgame (4)–(5) is characterized. Details of the proof of the results can be found in Theorem 6.10 in Başar and Olsder (1995). □

We sidestep the issue of multiple equilibria and focus on games in which there is a unique noncooperative Nash equilibrium in each subgame. Using Lemma 1, one can characterize the value functions \(V^{ ( \sigma_{T} ) i} (T,K) \) for all σ T ∈{1,2,…,η T } if they exist. In particular, \(V^{ ( \sigma_{T} ) i} (T,K) \) yields agent i’s expected game equilibrium payoff in the subgame starting at stage T given that \(\theta_{T}^{\sigma_{T}} \) occurs and K T =K.

Then we proceed to the subgame starting at stage T−1 when \(\theta_{T-1}^{\sigma_{T-1}} \in\{ \theta_{T-1}^{1} , \theta_{T-1}^{2}, \ldots, \theta_{T-1}^{\eta_{T-1}} \} \) occurs and K T−1=K. In this subgame, agent i∈N seeks to maximize his expected payoff

$$\begin{aligned} & E_{\theta_T; \vartheta_{T-1}, \vartheta_T} \Biggl\{ \sum_{s=T-1}^T \bigl[ R^i (K_s,\theta_s) - C^i \bigl(I_s^i, \theta _s \bigr) \bigr] (1+r)^{-(s-1)} \\ &\quad\quad{}+ q^i (K_{T+1}) (1+r)^{-T} \Biggr\} \\ & \quad = E_{\vartheta_{T-1}} \Biggl\{ \bigl[ R^i \bigl(K_{T-1}, \theta_{T-1}^{\sigma_{T-1}}\bigr) - C^i \bigl(I_{T-1}^i, \theta_{T-1}^{\sigma_{T-1}} \bigr) \bigr] (1+r)^{-(T-2)} \\ &\quad\quad{} + \sum_{\sigma_T = 1}^{\eta_T} \lambda_T^{\sigma_T} \bigl[ R^i \bigl(K_T, \theta_T^{\sigma_T}\bigr) - C^i \bigl(I_T^i, \theta_T^{\sigma _T}\bigr) \bigr] (1+r)^{-(T-1)} \\ &\quad\quad{} + q^i (K_{T+1}) (1+r)^{-T} \Biggr\} , \end{aligned}$$
(7)

subject to the capital accumulation dynamics

$$ K_{t+1} = K_t + \sum _{j=1}^n I_t^j - \delta K_t + \vartheta_t,\quad K_{T-1} = K \mbox{ for } t \in\{T-1, T\}. $$
(8)

If the functions \(V^{ ( \sigma_{T} ) i} (T, K) \) for all σ T ∈{1,2,…,η T } characterized in Lemma 1 exist, the subgame (7)–(8) can be expressed as a game in which agent i seeks to maximize the expected payoff

$$\begin{aligned} &E_{\vartheta_{T-1}} \Biggl\{ \bigl[ R^i (K_{T-1}, \theta_{T-1}) - C^i \bigl(I_{T-1}^i, \theta_{T-1}\bigr) \bigr] (1+r)^{-(T-2)} \\ &\quad {} + \sum_{\sigma_T = 1}^{\eta_T} \lambda_T^{\sigma_T} V^{ ( \sigma_T ) i} \Biggl[ T, K_{T-1} + \sum_{j=1}^n I_{T-1}^j - \delta K_{T-1} + \vartheta_{T-1} \Biggr] \Biggr\} , \\ & \quad\mbox{for } i \in N, \end{aligned}$$
(9)

using his control \(I_{T-1}^{i}\).

A Nash equilibrium of the subgame (9) can be characterized by the following lemma.

Lemma 2

A set of strategies

$$\phi_{T-1}^{ ( \sigma_{T-1} )^*} (K) = \bigl\{ \phi_{T-1}^{ ( \sigma_{T-1} ) 1^*} (K), \phi_{T-1}^{ ( \sigma_{T-1} ) 2^*} (K), \ldots , \phi_{T-1}^{ ( \sigma_{T-1} ) n^*} (K) \bigr\} $$

provides a Nash equilibrium solution to the subgame (9) if there exist functions \(V^{ ( \sigma_{T} ) i} (T, K_{T}) \) for i∈N and σ T ∈{1,2,…,η T } characterized in Lemma 1, and functions \(V^{ ( \sigma_{T-1} ) i} (T-1, K)\), for i∈N, such that the following conditions are satisfied:

$$\begin{aligned} & V^{ ( \sigma_{T-1} ) i } (T-1, K) \\ & \quad = \max_{I_{T-1}^i} E_{\vartheta_{T-1}} \Biggl\{ \bigl[ R^i \bigl(K_{T-1}, \theta_{T-1}^{\sigma_{T-1}} \bigr) - C^i \bigl( I_{T-1}^i, \theta_{T-1}^{\sigma _{T-1}} \bigr) \bigr] (1+r)^{-(T-2)} \\ &\quad\quad {} + \sum_{\sigma_T = 1}^{\eta_T} \lambda_T^{\sigma_T} V^{ ( \sigma_T ) i} \biggl[T, K + \sum _{\substack{j = 1 \\ j \ne i}}^n \phi_{T-1}^{ ( \sigma_{T-1} ) j^*} (K) + I_{T-1}^i - \delta K + \vartheta_{T-1} \biggr] \Biggr\} \\ &\qquad \textit{for } i \in N. \end{aligned}$$
(10)

Proof

The conditions in Lemma 1 and the system of equations in (10) satisfy the standard discrete-time stochastic dynamic programming property and the Nash property for each agent i∈N. Hence a Nash equilibrium of the subgame (9) is characterized. □

Using Lemma 2 one can characterize the functions \(V^{ ( \sigma_{T-1} ) i} (T-1, K) \) for all \(\theta_{T-1}^{\sigma_{T-1}} \in\{\theta_{T-1}^{1}, \theta_{T-1}^{2}, \ldots, \theta_{T-1}^{\eta _{T-1}}\}\), if they exist. In particular, \(V^{ ( \sigma_{T-1} ) i} (T-1, K) \) yields agent i’s expected game equilibrium payoff in the subgame starting at stage T−1 given that \(\theta_{T-1}^{\sigma_{T-1}} \) occurs and K T−1=K.

Consider the subgame starting at stage t∈{T−2,T−3,…,1} when \(\theta_{t}^{\sigma_{t}} \in\{ \theta_{t}^{1}, \theta_{t}^{2}, \ldots, \theta _{t}^{\eta_{t}} \} \) occurs and K t =K, in which agent iN maximizes his expected payoff

$$\begin{aligned} &E_{\vartheta_t} \Biggl\{ \bigl[ R^i \bigl(K, \theta_t^{\sigma_t}\bigr) - C^i \bigl(I_t^i, \theta_t^{\sigma_t}\bigr) \bigr] (1+r)^{-(t-1)} \\ &\quad {} + \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} V^{ ( \sigma_{t+1} ) i} \Biggl[t+1, K+ \sum _{j=1}^n I_t^j - \delta K + \vartheta_t \Biggr] \Biggr\} ,\quad \mbox{for } i \in N, \end{aligned}$$
(11)

subject to the public capital accumulation dynamics

$$ K_{t+1} = K_t + \sum _{j = 1}^n I_t^j - \delta K_t + \vartheta_t,\quad K_t = K. $$
(12)

A Nash equilibrium solution for the game (1)–(3) can be characterized as follows:

Theorem 1

A set of strategies

$$\phi_t^{ ( \sigma_t )^*} (K) = \bigl\{ \phi_t^{ ( \sigma_t ) 1^*} (K), \phi_t^{ ( \sigma_t ) 2^*} (K), \ldots, \phi_t^{ ( \sigma_t ) n^*} (K) \bigr\} , $$

for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}, constitutes a Nash equilibrium solution to the game (1)–(3), if there exist functions \(V^{ ( \sigma_{t} ) i} (t,K)\), for σ t ∈{1,2,…,η t }, t∈{1,2,…,T}, and i∈N, such that the following recursive relations are satisfied:

$$ \begin{aligned} & V^{ ( \sigma_{T+1} ) i} (T+1, K) = q^i (K) (1+r)^{-T}, \\ & V^{ ( \sigma_t ) i} (t, K) \\ &\quad = \max_{I_t^i} E_{\vartheta_t} \Biggl\{ \bigl[R^i \bigl(K, \theta_t^{\sigma_t} \bigr) - C^i \bigl(I_t^i, \theta_t^{\sigma_t}\bigr) \bigr] (1+r)^{-(t-1)} \\ &\quad\quad {} + \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} V^{ ( \sigma_{t+1} ) i} \Biggl[ t+1, K+ \mathop{\sum _{j=1 }}_{ j \ne i}^n \phi_t^{ ( \sigma_t ) j^*} (K) + I_t^i - \delta K + \vartheta_t \Biggr] \Biggr\} , \\ & \quad\quad\textit{ for } \sigma_t \in\{1,2, \ldots, \eta_t\}, t \in\{1,2, \ldots, T \}, \textit{ and } i \in N. \end{aligned} $$
(13)

Proof

The results in (13) characterizing the game equilibrium in stage T and stage T−1 are proved in Lemma 1 and Lemma 2. Invoking the subgame in stage t∈{1,2,…,T−1} as expressed in (11)–(12), the results in (13) satisfy the optimality conditions in stochastic dynamic programming and the Nash equilibrium property for each agent in each of these subgames. Therefore, a feedback Nash equilibrium of the game (1)–(3) is characterized. □

Hence, the noncooperative outcome of the public capital provision game (1)–(3) can be obtained.
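
To make the recursive relations (13) concrete, the following Python sketch approximates the feedback Nash equilibrium by backward induction on a discretized state space, with a best-response loop at each state. For brevity it suppresses the payoff branching (taking η t = 1) and assumes simple illustrative forms, linear revenue α_i K, quadratic cost c_i I², and linear terminal payment q_i K; none of these specifics are prescribed by Theorem 1 itself:

```python
import numpy as np

# Backward induction for the recursive relations (13), taking eta_t = 1 (no
# payoff branching) for brevity. Functional forms and all numbers are
# illustrative assumptions: R^i = alpha_i*K, C^i = c_i*I^2, q^i(K) = q_i*K.
n, T, r, delta = 2, 3, 0.05, 0.1
K_grid = np.linspace(0.0, 10.0, 41)        # discretized state space
I_grid = np.linspace(0.0, 2.0, 21)         # discretized investment choices
shocks = np.array([0.0, 0.5])              # support of vartheta_t
probs = np.array([0.5, 0.5])               # probabilities of vartheta_t
alpha, c, q = [1.0, 0.8], [0.5, 0.7], [0.2, 0.2]

# Terminal condition: V^i(T+1, K) = q^i(K) (1+r)^{-T}.
V = [q[i] * K_grid * (1 + r) ** (-T) for i in range(n)]

for t in range(T, 0, -1):
    V_next = V
    V = [np.empty_like(K_grid) for _ in range(n)]
    for k, K in enumerate(K_grid):
        I = np.full(n, I_grid[0])
        for _ in range(50):                # best-response iteration
            I_old = I.copy()
            for i in range(n):
                def payoff(Ii):
                    K_next = K + (I.sum() - I[i]) + Ii - delta * K + shocks
                    stage = (alpha[i] * K - c[i] * Ii ** 2) * (1 + r) ** (-(t - 1))
                    return stage + probs @ np.interp(K_next, K_grid, V_next[i])
                I[i] = max(I_grid, key=payoff)
            if np.allclose(I, I_old):
                break
        for i in range(n):
            K_next = K + I.sum() - delta * K + shocks
            stage = (alpha[i] * K - c[i] * I[i] ** 2) * (1 + r) ** (-(t - 1))
            V[i][k] = stage + probs @ np.interp(K_next, K_grid, V_next[i])

print("stage-1 value of agent 0 at K = 5:", np.interp(5.0, K_grid, V[0]))
```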

3 Pareto Optimal Cooperative Scheme

It is well-known that non-cooperative provision of public goods would, in general, lead to inefficiency. Cooperation suggests the possibility of socially optimal and group efficient solutions. Now consider the case when the agents agree to cooperate and enhance their gains from cooperation. In particular, they act cooperatively to maximize their expected joint payoff and distribute the joint payoff among themselves according to an agreed-upon optimality principle. If any agent deviates from the cooperation scheme, all agents will revert to the noncooperative framework to counteract the free-rider problem in public goods provision. Moreover, group optimality, individual rationality and subgame consistency are three crucial properties that a sustainable cooperative scheme has to satisfy.

3.1 Pareto Optimal Provision

To fulfill group optimality the agents would seek to maximize their expected joint payoff. In particular, they have to solve the discrete-time stochastic dynamic programming problem of maximizing

$$\begin{aligned} & E_{\theta_1, \theta_2, \ldots, \theta_T; \vartheta_1, \vartheta_2, \ldots, \vartheta_T} \Biggl\{ \sum_{j=1}^n \sum_{s=1}^T \bigl[ R^j (K_s, \theta_s) - C^j \bigl(I_s^j, \theta_s\bigr) \bigr] (1+r)^{-(s-1)} \\ &\quad {} + \sum_{j=1}^n q^j (K_{T+1}) (1+r)^{-T} \Biggr\} \end{aligned}$$
(14)

subject to dynamics (1).

To solve the dynamic programming problem (1) and (14), we first consider the problem starting at stage T. If \(\theta_{T}^{\sigma_{T}} \in\{ \theta_{T}^{1}, \theta_{T}^{2}, \ldots, \theta_{T}^{\eta_{T}} \}\) has occurred at stage T and the state K T =K, the problem becomes:

$$\begin{aligned} & \max_{I_T^1, I_T^2, \ldots, I_T^n} E_{\vartheta_T} \Biggl\{ \sum _{j=1}^n \bigl[ R^j \bigl(K, \theta_T^{\sigma_T}\bigr) - C^j \bigl(I_T^j, \theta_T^{\sigma_T}\bigr) \bigr] (1+r)^{-(T-1)} \\ &\quad {} + \sum_{j=1}^n q^j (K_{T+1}) (1+r)^{-T} \Biggr\} \end{aligned}$$
(15)
$$\begin{aligned} & \mbox{subject to}\quad K_{T+1} = K_T + \sum _{j=1}^n I_T^j - \delta K_T + \vartheta_T,\quad K_T = K. \end{aligned}$$
(16)

An optimal solution to the stochastic control problem (15)–(16) can be characterized by the following lemma.

Lemma 3

A set of controls

$$I_T^{ ( \sigma_T )^*} = \psi_T^{ ( \sigma_T )^*} (K) = \bigl\{ \psi_T^{ ( \sigma_T ) 1^*} (K), \psi_T^{ ( \sigma_T ) 2^*} (K), \ldots , \psi_T^{ ( \sigma_T ) n^*} (K) \bigr\} $$

provides an optimal solution to the stochastic control problem (15)–(16), if there exist functions \(W^{ ( \sigma_{T} ) }(T, K) \) and \(W^{ ( \sigma_{T+1} ) }(T+1, K) \) such that the following conditions are satisfied:

$$ \begin{aligned}& W^{ ( \sigma_T )} (T, K) \\ &\quad= \max_{I_T^{ ( \sigma_T ) 1}, I_T^{ ( \sigma_T ) 2}, \ldots, I_T^{ ( \sigma_T ) n}} E_{\vartheta_T} \Biggl\{ \sum _{j = 1}^n \bigl[ R^j \bigl(K, \theta_T^{\sigma_T}\bigr) - C^j \bigl(I_T^j, \theta_T^{\sigma_T}\bigr) \bigr] (1+r)^{-(T-1)} \\ &\quad\quad {} + \sum_{j=1}^n q^j \Biggl( K + \sum_{h=1}^n I_T^h - \delta K + \vartheta_T \Biggr) (1+r)^{-T} \Biggr\} , \\ & W^{ ( \sigma_{T+1} )} (T+1, K) = \sum_{j = 1}^n q^j (K) (1+r)^{-T}. \end{aligned} $$
(17)

Proof

The system of equations in (17) satisfies the standard discrete-time stochastic dynamic programming property. Details of the proof of the results can be found in Başar and Olsder (1995). □

Using Lemma 3, one can characterize the functions \(W^{ ( \sigma_{T} )} (T, K) \) for all \(\theta_{T}^{ \sigma_{T} } \in\{\theta_{T}^{1}, \theta_{T}^{2}, \ldots, \theta_{T}^{\eta_{T}} \}\), if they exist. In particular, \(W^{ ( \sigma_{T} )} (T, K) \) yields the expected cooperative payoff starting at stage T given that \(\theta _{T}^{\sigma_{T}} \) occurs and K T =K.

Following the analysis in Sect. 2, the control problem starting at stage t when \(\theta_{t}^{ \sigma_{t} } \in\{\theta_{t}^{1}, \theta _{t}^{2}, \ldots, \theta_{t}^{\eta_{t}} \} \) occurs and K t =K can be expressed as:

$$\begin{aligned} & \max_{I_t^{ ( \sigma_t ) 1},I_t^{ ( \sigma_t ) 2}, \ldots, I_t^{ ( \sigma_t ) n}} E_{\vartheta_t} \Biggl\{ \sum _{j=1}^n \bigl[ R^j \bigl(K, \theta_t^{\sigma_t}\bigr) - C^j \bigl(I_t^j, \theta _t^{\sigma_t} \bigr) \bigr] (1+r)^{-(t-1)} \\ &\quad {} + \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} W^{ ( \sigma_{t+1} )} \Biggl[ t+1, K+\sum _{h=1}^n I_t^h - \delta K + \vartheta_t \Biggr] \Biggr\} , \end{aligned}$$
(18)

where \(W^{ ( \sigma_{t+1} )} [ t+1, K+ \sum_{h=1}^{n} I_{t}^{h} - \delta K + \vartheta_{t} ] \) is the expected optimal cooperative payoff in the control problem starting at stage t+1 when \(\theta_{t+1}^{ \sigma_{t+1} } \in\{\theta_{t+1}^{1}, \theta_{t+1}^{2}, \ldots, \theta_{t+1}^{\eta_{t+1}} \} \) occurs.

An optimal solution for the stochastic control problem (1) and (14) can be characterized as follows.

Theorem 2

A set of controls

$$\psi_t^{ ( \sigma_t )^*} (K) = \bigl\{ \psi_t^{ ( \sigma_t ) 1^*} (K), \psi_t^{ ( \sigma_t ) 2^*} (K) , \ldots, \psi_t^{ ( \sigma_t ) n^*} (K) \bigr\} , $$

for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}, provides an optimal solution to the stochastic control problem (1) and (14), if there exist functions \(W^{ ( \sigma _{t} )} (t, K)\), for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}, such that the following recursive relations are satisfied:

$$ \begin{aligned} & W^{ ( \sigma_{T+1} )} (T+1, K) = \sum_{j=1}^n q^j (K) (1+r)^{-T}, \\ & W^{ ( \sigma_t )} (t, K) \\ &\quad = \max_{I_t^{ ( \sigma_t ) 1}, I_t^{ ( \sigma_t ) 2}, \ldots, I_t^{ ( \sigma_t ) n}} E_{\vartheta_t} \Biggl\{ \sum_{j=1}^n \bigl[ R^j \bigl(K, \theta_t^{\sigma_t}\bigr) - C^j \bigl(I_t^j, \theta _t^{\sigma_t}\bigr) \bigr] (1+r)^{-(t-1)} \\ & \quad\quad {} + \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} W^{ ( \sigma_{t+1} )} \Biggl[ t+1, K+\sum _{h=1}^n I_t^h - \delta K + \vartheta_t \Biggr] \Biggr\} , \end{aligned} $$
(19)

for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}.

Proof

Invoking Lemma 3 and the specification of the control problem starting in stage t∈{1,2,…,T−1} as expressed in (18), the results in (19) satisfy the optimality conditions in discrete-time stochastic dynamic programming. Therefore, an optimal solution of the stochastic control problem is characterized in Theorem 2. □

Substituting the optimal control \(\{ \psi_{t}^{ ( \sigma_{t} ) i^{*}} \mbox{, for } t \in\{1,2,\ldots,T\} \mbox{ and } i \in N \} \) into (1), one can obtain the dynamics of the cooperative trajectory of public capital accumulation as:

$$ K_{t+1} = K_t + \sum _{j = 1}^n \psi_t^{ ( \sigma_t ) j^*} (K_t) - \delta K_t + \vartheta_t,\quad K_1 = K^0, \mbox{ if } \theta_t^{\sigma_t} \mbox{ occurs at stage } t, $$
(20)

for t∈{1,2,…,T}, σ t ∈{1,2,…,η t }.

We use \(X_{t}^{*}\) to denote the set of realizable values of K t at stage t generated by (20). The term \(K_{t}^{*} \in X_{t}^{*} \) is used to denote an element in \(X_{t}^{*}\). The term \(W^{ ( \sigma_{t} )} (t, K_{t}^{*})\) gives the expected total cooperative payoff over the stages from t to T if \(\theta_{t}^{\sigma_{t}}\) occurs and \(K_{t}^{*} \in X_{t}^{*}\) is realized at stage t.
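
The joint-payoff recursion (19) admits the same numerical treatment. The sketch below continues the illustrative noncooperative sketch at the end of Sect. 2 (reusing n, T, r, delta, K_grid, I_grid, shocks, probs, alpha, c and q defined there), replacing the best-response loop with a search over joint investment profiles; the maximizers play the role of the cooperative controls ψ and the induced states trace out the trajectory (20):

```python
from itertools import product

# Joint maximization for the recursion (19), reusing the illustrative setup
# (n, T, r, delta, K_grid, I_grid, shocks, probs, alpha, c, q) defined in the
# noncooperative sketch; again eta_t = 1 for brevity.
W = sum(q) * K_grid * (1 + r) ** (-T)      # W(T+1, K) = sum_j q^j K (1+r)^{-T}
policies = []                              # joint profiles, stage T down to 1
for t in range(T, 0, -1):
    W_next, W = W, np.empty_like(K_grid)
    stage_policy = []
    for k, K in enumerate(K_grid):
        best_val, best_I = -np.inf, None
        for I in product(I_grid, repeat=n):    # joint choice of all agents
            K_next = K + sum(I) - delta * K + shocks
            stage = sum(alpha[j] * K - c[j] * I[j] ** 2 for j in range(n))
            val = (stage * (1 + r) ** (-(t - 1))
                   + probs @ np.interp(K_next, K_grid, W_next))
            if val > best_val:
                best_val, best_I = val, I
        W[k] = best_val
        stage_policy.append(best_I)            # psi_t^* at this grid point
    policies.append(stage_policy)
```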

3.2 Individually Rational Condition

The agents then have to agree to an optimality principle in distributing the total cooperative payoff among them. For individual rationality to be upheld the expected payoffs an agent receives under cooperation have to be no less than his expected noncooperative payoff along the cooperative state trajectory \(\{ K_{t}^{*} \}_{t=1}^{T+1}\). For instance, the agents may (i) share the total expected cooperative payoff proportional to their expected noncooperative payoffs, or (ii) share the excess of the total expected cooperative payoff over the expected sum of individual noncooperative payoffs equally.

Let \(\xi^{ ( \sigma_{t} ) } (t, K_{t}^{*}) = [ \xi^{ ( \sigma _{t} ) 1} (t, K_{t}^{*}), \xi^{ ( \sigma_{t} ) 2} (t, K_{t}^{*}), \ldots, \xi^{ ( \sigma _{t} ) n} (t, K_{t}^{*})]\) denote the imputation vector guiding the distribution of the total expected cooperative payoff under the agreed-upon optimality principle along the cooperative trajectory given that \(\theta_{t}^{\sigma_{t}}\) has occurred in stage t, for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}. In particular, the imputation \(\xi ^{ ( \sigma_{t} ) i} (t, K_{t}^{*}) \) gives the present value of expected cumulative payments that agent i will receive from stage t to stage T+1 under cooperation.

If, for example, the optimality principle specifies that the agents share the expected total cooperative payoff proportional to their expected noncooperative payoffs, then the imputation to agent i becomes:

$$\begin{aligned} \xi^{ ( \sigma_t ) i} \bigl(t, K_t^*\bigr) & = \frac{V^{ ( \sigma _t ) i} (t, K_t^*)}{ \sum_{j = 1}^n V^{ ( \sigma_t ) j} (t, K_t^*)} W^{ ( \sigma_t )} \bigl(t, K_t^*\bigr), \end{aligned}$$

for i∈N and t∈{1,2,…,T}.

For individual rationality to be guaranteed in every stage t∈{1,2,…,T}, it is required that the imputation satisfies:

$$\begin{aligned} \xi^{ ( \sigma_t ) i} \bigl(t, K_t^*\bigr) & \ge V^{ ( \sigma_t ) i} \bigl(t, K_t^*\bigr), \end{aligned}$$
(21)

for i∈N, σ t ∈{1,2,…,η t } and t∈{1,2,…,T}.

To ensure group optimality, the imputation vector has to satisfy

$$ W^{ ( \sigma_t )} \bigl(t, K_t^*\bigr) = \sum _{j = 1}^n \xi^{ ( \sigma_t ) j} \bigl(t, K_t^*\bigr), $$
(22)

for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}.

Hence, a valid imputation scheme \(\xi^{ ( \sigma_{t} ) i} (t, K_{t}^{*})\), for i∈N, σ t ∈{1,2,…,η t } and t∈{1,2,…,T}, has to satisfy conditions (21)–(22).
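
As a quick numerical sanity check of (21)–(22) under the proportional sharing rule above: group optimality (22) holds by construction, and individual rationality (21) follows whenever the cooperative payoff weakly exceeds the sum of the noncooperative payoffs. A minimal Python sketch with hypothetical stage values:

```python
# Proportional imputation xi^i = (V^i / sum_j V^j) * W with hypothetical
# stage values: (22) holds by construction, and (21) holds here because
# W >= sum_j V^j and every V^j is positive.
def proportional_imputation(V_vals, W_val):
    total = sum(V_vals)
    return [v / total * W_val for v in V_vals]

V_vals, W_val = [3.0, 2.0], 6.0               # illustrative V^i and W
xi = proportional_imputation(V_vals, W_val)
assert abs(sum(xi) - W_val) < 1e-12           # group optimality (22)
assert all(x >= v for x, v in zip(xi, V_vals))  # individual rationality (21)
```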

4 Subgame Consistent Payment Mechanism

As demonstrated in Yeung and Petrosyan (2004 and 2013b), to guarantee dynamical stability in a stochastic dynamic cooperation scheme, the solution has to satisfy the property of subgame consistency in addition to group optimality and individual rationality. In particular, an extension of a subgame-consistent cooperative solution policy to a subgame starting at a later time with a feasible state brought about by prior optimal behavior would remain effective. Thus subgame consistency ensures that, as the game proceeds, agents are guided by the same optimality principle at each stage of the game, and hence they do not possess incentives to deviate from the agreed-upon optimal behavior. For subgame consistency to be satisfied, the imputation according to the original optimality principle has to be maintained at all the T stages along the cooperative trajectory \(\{ K_{t}^{*} \}_{t=1}^{T}\). In other words, the imputation

$$ \xi^{ ( \sigma_t ) } \bigl(t, K_t^*\bigr) = \bigl[ \xi^{ ( \sigma _t ) 1} \bigl(t, K_t^*\bigr), \xi^{ ( \sigma_t ) 2} \bigl(t, K_t^*\bigr), \ldots, \xi^{ ( \sigma _t ) n} \bigl(t, K_t^*\bigr) \bigr] $$
(23)

has to be upheld for σ t ∈{1,2,…,η t }, t∈{1,2,…,T}, and \(K_{t}^{*} \in X_{t}^{*}\).

4.1 Payoff Distribution Procedure

Following the analysis of Yeung and Petrosyan (2013b), we formulate a Payoff Distribution Procedure (PDP) so that the agreed-upon imputation (23) can be realized. Let \(B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*}) \) denote the payment that agent i will receive at stage t under the cooperative agreement, if \(\theta_{t}^{\sigma_{t}} \in\{ \theta_{t}^{1}, \theta_{t}^{2}, \ldots, \theta _{t}^{\eta_{t}} \} \) occurs and \(K_{t}^{*} \in X_{t}^{*} \) is realized at stage t∈{1,2,…,T}. The payment scheme \(\{B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*}) \mbox{ for } {i \in N} \mbox{ contingent}\mbox{ upon} \mbox{ the event } \theta_{t}^{\sigma_{t}}\mbox{ and state }K_{t}^{*}\mbox{, for } t \in\{1,2, \ldots, T \} \}\) constitutes a PDP in the sense that the imputation to agent i over the stages 1 to T can be expressed as:

$$\begin{aligned} &\xi^{ ( \sigma_1 ) i} \bigl(1, K^0\bigr)\\ &\quad= B_1^{ ( \sigma_1 ) i} \bigl(K^0\bigr) \\ &\qquad{} + E_{\theta_{2}, \ldots, \theta_T; \vartheta_1, \ldots, \vartheta_T} \Biggl( \sum_{\zeta= 2}^T B_\zeta^{ ( \sigma _\zeta ) i} \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} + q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr) \quad\mbox{for } i \in N. \end{aligned}$$

Moreover, according to the agreed-upon optimality principle in (23), if \(\theta_{t}^{\sigma_{t}}\) occurs and \(K_{t}^{*} \in X_{t}^{*}\) is realized at stage t, the imputation to agent i is \(\xi^{ ( \sigma_{t} ) i} (t, K_{t}^{*})\). Therefore the payment scheme \(B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*})\) has to satisfy the conditions

$$\begin{aligned} &\xi^{ ( \sigma_t ) i} \bigl(t, K_t^*\bigr) \\ &\quad= B_t^{ ( \sigma_t ) i} \bigl(K_t^*\bigr) (1+r)^{-(t-1)} \\ & \qquad{}+ E_{\theta_{t+1}, \theta_{t+2}, \ldots, \theta_T; \vartheta_t, \vartheta_{t+1}, \ldots, \vartheta_T} \Biggl( \sum_{\zeta= t+1}^T B_\zeta^{ ( \sigma _\zeta ) i} \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} + q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr) \end{aligned}$$
(24)

for i∈N and all t∈{1,2,…,T}.

For notational convenience the term \(\xi^{ ( \sigma_{T+1} ) i} (T+1, K_{T+1}^{*})\) is used to denote \(q^{i} (K_{T+1}^{*}) (1+r)^{-T}\). Crucial to the formulation of a subgame consistent solution is the derivation of a payment scheme \(\{B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*}) \mbox{, for } i \in N, \sigma_{t} \in\{1,2,\ldots, \eta_{t} \}, K_{t}^{*} \in X_{t}^{*} \mbox{ and } t \in\{1,2, \ldots, T \} \}\) so that the imputation in (24) can be realized.

A theorem for the derivation of a subgame consistent payment scheme can be established as follows.

Theorem 3

A payment equaling

$$\begin{aligned} &B_t^{ ( \sigma_t ) i} \bigl(K_t^*\bigr)\\ & \quad= (1+r)^{(t-1)} \Biggl( \xi ^{ ( \sigma_t ) i} \bigl(t, K_t^* \bigr)\\ &\qquad {} - E_{\vartheta_t} \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1} } \lambda_{t+1}^{\sigma_{t+1}} \xi^{ ( \sigma_{t+1} ) i} \Biggl[ t+1, K_t^* + \sum_{h = 1}^n \psi_t^{ ( \sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t \Biggr] \Biggr\} \Biggr), \end{aligned}$$

given to agent i∈N at stage t∈{1,2,…,T}, if \(\theta_{t}^{\sigma_{t}}\) occurs and \(K_{t}^{*} \in X_{t}^{*}\), leads to the realization of the imputation in (24).

Proof

To construct the proof of Theorem 3, we first express the term

$$\begin{aligned} & E_{\theta_{t+1}, \theta_{t+2}, \ldots, \theta_T; \vartheta_{t}, \vartheta_{t+1}, \ldots, \vartheta_T} \Biggl\{ \sum_{\zeta= t +1}^T B_\zeta^{ ( \sigma_\zeta ) i } \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} \\ &\quad\quad{}+ q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr\} \\ &\quad = E_{\vartheta_{t}} \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \Biggl[ B_{t+1}^{ ( \sigma_{t+1} ) i} \bigl(K_{t+1}^*\bigr) (1+r)^{-t} \\ & \quad\quad{} + E_{\theta_{t+2}, \theta_{t+3}, \ldots, \theta_T; \vartheta_{t+2}, \vartheta_{t+3}, \ldots, \vartheta_T} \Biggl( \sum_{\zeta= t +2}^T B_\zeta^{ ( \sigma_\zeta ) i } \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} \\ &\quad\quad{}+ q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr) \Biggr] \Biggr\} . \end{aligned}$$
(25)

Then, using (24) we can express the term \(\xi^{ ( \sigma_{t+1} ) i} (t+1, K_{t+1}^{*}) \) as

$$\begin{aligned} & \xi^{ ( \sigma_{t+1} ) i} \bigl(t+1, K_{t+1}^*\bigr) \\ &\quad= B_{t+1}^{ ( \sigma_{t+1} ) i} \bigl(K_{t+1}^*\bigr) (1+r)^{-t} \\ &\quad\quad {} + E_{ \theta_{t+2}, \theta_{t+3}, \ldots, \theta_T ; \vartheta_{t+2}, \vartheta_{t+3}, \ldots, \vartheta_T} \Biggl\{ \sum_{\zeta= t +2}^T B_\zeta^{ ( \sigma_\zeta ) i } \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} + q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr\} . \end{aligned}$$
(26)

The expression on the right-hand-side of equation (26) is the same as the expression inside the square brackets of (25). Invoking equation (26) we can replace the expression inside the square brackets of (25) by \(\xi^{ ( \sigma_{t+1} ) i} (t+1, K_{t+1}^{*}) \) and obtain:

$$\begin{aligned} & E_{\theta_{t+1}, \theta_{t+2}, \ldots, \theta_T ; \vartheta_{t}, \vartheta_{t+1}, \ldots, \vartheta_T} \Biggl\{ \sum_{\zeta= t +1}^T B_\zeta^{ ( \sigma_\zeta ) i } \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} \\ &\quad\quad{}+ q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr\} \\ &\quad = E_{\vartheta_t } \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \xi^{ ( \sigma_{t+1} ) i} \bigl[t+1, K_{t+1}^* \bigr] \Biggr\} \\ &\quad = E_{\vartheta_t } \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \xi^{ ( \sigma_{t+1} ) i} \Biggl[ t+1, K_t^* + \sum_{h = 1}^n \psi_t^{ ( \sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t \Biggr] \Biggr\} . \end{aligned}$$

Substituting the term

$$\begin{aligned} &E_{\theta_{t+1}, \theta_{t+2}, \ldots, \theta_T; \vartheta_{t}, \vartheta_{t+1}, \ldots, \vartheta_T} \Biggl\{ \sum_{\zeta= t +1}^T B_\zeta^{ ( \sigma_\zeta ) i } \bigl(K_\zeta^*\bigr) (1+r)^{-(\zeta-1)} \\ &\quad{}+ q^i \bigl(K_{T+1}^*\bigr) (1+r)^{-T} \Biggr\} \end{aligned}$$

by

$$ E_{\vartheta_t } \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \xi^{ ( \sigma_{t+1} ) i} \Biggl[ t+1, K_t^* + \sum_{h = 1}^n \psi_t^{ ( \sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t \Biggr] \Biggr\} $$

in (24) we can express (24) as:

$$\begin{aligned} &\xi^{ ( \sigma_t ) i} \bigl(t, K_t^*\bigr) \\ &\quad = B_t^{ ( \sigma_t ) i} \bigl(K_t^*\bigr) (1+r)^{-(t-1) } \\ & \qquad{} + E_{\vartheta_t} \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1} } \lambda_{t+1}^{\sigma_{t+1}}\xi^{ ( \sigma_{t+1} ) i} \Biggl[ t+1, K_t^* + \sum_{h=1}^n \psi_t^{ ( \sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t \Biggr] \Biggr\} . \end{aligned}$$
(27)

For condition (27), which is an alternative form of (24), to hold it is required that:

$$\begin{aligned} &B_t^{ ( \sigma_t ) i} \bigl(K_t^*\bigr)\\ &\quad = (1+r)^{t-1} \Biggl( \xi ^{ ( \sigma_t ) i} \bigl(t, K_t^* \bigr) \\ &\qquad {} - E_{\vartheta_t} \Biggl\{ \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \xi^{ ( \sigma_{t+1} ) i } \Biggl[t+1, K_t^* + \sum_{h = 1}^n \psi_t^{ ( \sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t \Biggr] \Biggr\} \Biggr), \end{aligned}$$

for i∈N and t∈{1,2,…,T}.

Therefore, paying \(B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*}) \) to agent i∈N at stage t∈{1,2,…,T}, if \(\theta_{t}^{\sigma_{t}}\) occurs and \(K_{t}^{*} \in X_{t}^{*} \) is realized, leads to the realization of the imputation in (24). Hence Theorem 3 follows. □

For a given imputation vector

$$\xi^{ ( \sigma_t )} \bigl(t, K_t^*\bigr) = \bigl[ \xi^{ ( \sigma_t ) 1} \bigl(t, K_t^*\bigr), \xi^{ ( \sigma_t ) 2} \bigl(t, K_t^*\bigr), \ldots, \xi^{ ( \sigma_t ) n} \bigl(t, K_t^*\bigr) \bigr], $$

for σ t ∈{1,2,…,η t } and t∈{1,2,…,T}, Theorem 3 can be used to derive the PDP that leads to the realization of this vector.
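
Computationally, the payment rule of Theorem 3 is mechanical once the imputations are available. The following Python sketch evaluates the stage payment, treating agent i's imputation as a supplied function xi_i(t, sigma, K); this interface and every number in the usage example are hypothetical, not part of the model:

```python
# Stage payment from Theorem 3:
# B_t^i(K) = (1+r)^{t-1} * ( xi_i(t, sigma_t, K)
#            - E_vartheta[ sum_s lambda_{t+1}^s * xi_i(t+1, s, K_next) ] ).
def stage_payment(t, sigma, K, xi_i, invest_sum, delta, r,
                  shock_vals, shock_probs, branch_probs):
    expected_next = 0.0
    for shock, p in zip(shock_vals, shock_probs):
        K_next = K + invest_sum - delta * K + shock   # cooperative transition
        expected_next += p * sum(lam * xi_i(t + 1, s, K_next)
                                 for s, lam in enumerate(branch_probs, 1))
    return (1 + r) ** (t - 1) * (xi_i(t, sigma, K) - expected_next)

# Hypothetical usage with a linear imputation xi_i(t, s, K) = 0.5*K + 0.1:
B = stage_payment(t=1, sigma=1, K=1.0,
                  xi_i=lambda t, s, K: 0.5 * K + 0.1,
                  invest_sum=1.5, delta=0.1, r=0.05,
                  shock_vals=[0.0, 0.5], shock_probs=[0.5, 0.5],
                  branch_probs=[0.25, 0.25, 0.25, 0.25])
print(B)
```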

4.2 Transfer Payments

When all agents are using the cooperative strategies, given that \(K_{t}^{*} \in X_{t}^{*}\) and \(\theta_{t}^{\sigma_{t}} \) occurs, the payoff that agent i will directly receive at stage t becomes

$$ \bigl[ R^i \bigl(K_t^*, \theta_t^{\sigma_t}\bigr) - C^i \bigl( \psi_t^{ ( \sigma_t ) i^*} \bigl(K_t^*\bigr), \theta_t^{\sigma_t} \bigr) \bigr] (1+r)^{-(t-1)} $$

However, according to the agreed-upon imputation, agent i is supposed to receive \(B_{t}^{ ( \sigma_{t} ) i} (K_{t}^{*}) \) at stage t as given in Theorem 3. Therefore a transfer payment (which can be positive or negative)

$$ \varpi_t^{ ( \sigma_t ) i} \bigl(K_t^* \bigr) = B_t^{ ( \sigma_t ) i} \bigl(K_t^*\bigr) - \bigl[ R^i \bigl(K_t^*, \theta_t^{\sigma_t} \bigr) - C^i \bigl(\psi_t^{ ( \sigma _t ) i^*} \bigl(K_t^*\bigr), \theta_t^{\sigma_t}\bigr) \bigr] (1+r)^{-(t-1)}, $$

for t∈{1,2,…,T} and i∈N, will be assigned to agent i to yield the cooperative imputation \(\xi^{ ( \sigma_{t} )} (t, K_{t}^{*})\).
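
In code, the transfer payment is simply the gap between the PDP payment of Theorem 3 and the directly received payoff in present-value terms; a minimal sketch, with all arguments supplied by the caller:

```python
# Transfer payment of Sect. 4.2: the adjustment that tops up (or taxes)
# agent i's directly received payoff so that he ends up with B_t^i.
def transfer_payment(B_ti, revenue_i, cost_i, t, r):
    direct = (revenue_i - cost_i) * (1 + r) ** (-(t - 1))
    return B_ti - direct

print(transfer_payment(B_ti=1.2, revenue_i=2.0, cost_i=0.4, t=2, r=0.05))
```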

5 An Illustration

In this section, we provide an illustration of the derivation of a subgame consistent solution for public goods provision under accumulation and payoff uncertainties with multiple asymmetric agents. The basic game structure is a discrete-time analog of an example in Yeung and Petrosyan (2013a) but with the crucial addition of uncertain future payoff structures to reflect probable changes in preferences, technologies, demographic structures and institutional arrangements. This is the first time that an explicit dynamic game model of cooperative public goods provision under uncertain future payoffs is presented.

5.1 Multiple Asymmetric Agents Public Capital Build-up

We consider an economic region with n asymmetric agents in which the agents receive benefits from an existing public capital stock K t at each stage t∈{1,2,3}. The accumulation dynamics of the public capital stock is governed by the stochastic difference equation:

$$ K_{t+1} = K_t + \sum _{j = 1}^n I_t^j - \delta K_t + \vartheta_t, \quad K_1 = K^0, \mbox{ for } t \in\{1,2,3\}, $$
(28)

where ϑ t is a discrete random variable with non-negative range \(\{ \vartheta_{t}^{1}, \vartheta_{t}^{2}, \vartheta_{t}^{3} \} \) and corresponding probabilities \(\{\gamma_{t}^{1}, \gamma_{t}^{2}, \gamma_{t}^{3} \}\), and \(\sum_{j = 1}^{3} \gamma_{t}^{j} \vartheta_{t}^{j} = \varpi_t > 0\).

At stage 1, it is known that \(\theta_{1}^{\sigma_{1}} = \theta_{1}^{1} \) has happened with probability \(\lambda_{1}^{1} = 1\), and the payoff of agent i is

$$\alpha_1^{ ( \sigma_1 ) i} K_1 - c_1^{ ( \sigma_1 ) i} \bigl( I_1^i \bigr)^2. $$

At stage t∈{2,3}, the payoff of agent i is

$$\alpha_t^{ ( \sigma_t ) i} K_t - c_t^{ ( \sigma_t ) i} \bigl( I_t^i \bigr)^2, $$

if \(\theta_{t}^{\sigma_{t}} \in\{ \theta_{t}^{1}, \theta_{t}^{2}, \theta_{t}^{3}, \theta_{t}^{4} \} \) occurs.

In particular, \(\alpha_{t}^{ (\sigma_{t} ) i} K_{t} \) gives the gain that agent i derives from the public capital at stage t∈{1,2,3}, and \(c_{t}^{ (\sigma_{t} ) i} (I_{t}^{i} )^{2} \) is the cost of investing \(I_{t}^{i} \) in the public capital.

The probability that \(\theta_{t}^{\sigma_{t} } \in \{ \theta_{t}^{1} ,\theta_{t}^{2} ,\theta_{t}^{3} ,\theta_{t}^{4} \} \) will occur at stage t∈{2,3} is \(\lambda_{t}^{\sigma_{t} } \in \{ \lambda_{t}^{1} ,\lambda_{t}^{2} ,\lambda_{t}^{3} ,\lambda _{t}^{4} \} \). In stage 4, a terminal payment contingent upon the size of the capital stock equaling (q i K 4+m i)(1+r)−3 will be paid to agent i. Since there is no uncertainty in stage 4, we use \(\theta_{4}^{1} \) to denote the condition in stage 4 with probability \(\lambda_{4}^{1} =1\).

The objective of agent i∈N is to maximize the expected payoff:

$$\begin{aligned} &E_{ \theta_{1} ,\theta_{2} ,\theta_{3}; \vartheta_{1} ,\vartheta_{2} ,\vartheta_{3} } \Biggl\{ \sum_{\tau=1}^{3} \bigl[\alpha_{\tau}^{ (\sigma_{\tau} ) i} K_{\tau} -c_{\tau}^{ (\sigma_{\tau} ) i} \bigl(I_{\tau}^{i} \bigr)^{2} \bigr] (1+r)^{-(\tau-1)} \\ &\quad{} +\bigl(q ^{i} K_{4} +m ^{i} \bigr) (1+r)^{-3} \Biggr\} , \end{aligned}$$
(29)

subject to the public capital accumulation dynamics (28).

The noncooperative outcome will be examined in the next subsection.

5.2 Noncooperative Outcome

Invoking Theorem 1, one can characterize the noncooperative Nash equilibrium strategies for the game (28)–(29) as follows. In particular, a set of strategies \(\{ I_{t}^{ (\sigma_{t} ) i^{*}} = \phi_{t}^{ (\sigma_{t} ) i^{*}} (K) \mbox{, for } \sigma_{1} \in\{ 1\}, \sigma_{2} ,\sigma_{3} \in\{ 1,2,3,4\}, t \in \{ 1,2,3\} \mbox{ and } i\in N\} \) provides a Nash equilibrium solution to the game (28)–(29), if there exist functions \(V^{ (\sigma_{t} ) i} (t,K)\), for i∈N and t∈{1,2,3,4}, such that the following recursive relations are satisfied:

$$ \begin{aligned} & V^{ ( \sigma_4 ) i} (4, K) = \bigl(q^i K+m^i \bigr) (1+r)^{-3}; \\ &V^{ (\sigma_t ) i} (t, K) = \max_{I_t^i } E_{\vartheta_t } \Biggl\{ \bigl[\alpha_t^{ (\sigma_t ) i} K - c_t^{ (\sigma _t ) i} \bigl( I_t^i \bigr)^2 \bigr] (1+r)^{-(t-1)} +\sum_{\sigma_{t+1} =1}^4 \lambda_{t+1}^{\sigma_{t+1} } \\ &\hphantom{V^{ (\sigma_t ) i} (t, K) =} {}\times V^{ (\sigma_{t+1} ) i} \Biggl[t+1, K+\mathop{\sum _{ j = 1 }}_{ j \ne i}^n \phi_t^{ (\sigma_t ) j^*} (K) + I_t^i -\delta K + \vartheta_t \Biggr] \Biggr\} \\ & \hphantom{V^{ (\sigma_t ) i} (t, K) } =\max_{I_t^i} \Biggl\{ \bigl[\alpha_t^{ (\sigma_t ) i} K - c_t^{ (\sigma_t ) i} \bigl(I_{t}^{i} \bigr)^2\bigr] (1+r)^{-(t-1)} \\ & \hphantom{V^{ (\sigma_t ) i} (t, K) =}{} +\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} =1}^4 \lambda_{t+1}^{\sigma_{t+1} } \\ &\hphantom{V^{ (\sigma_t ) i} (t, K) =}{}\times V^{ (\sigma_{t+1} ) i} \Biggl[t+1, K+\mathop{\sum _{ j = 1 }}_{ j \ne i}^n \phi_t^{ (\sigma_t ) j^*} (K) + I_t^i - \delta K +\vartheta_t^y \Biggr] \Biggr\} , \\ & \hphantom{V^{ (\sigma_t ) i} (t, K) =} \mbox{ for } t \in\{ 1,2,3\}. \end{aligned} $$
(30)

Performing the indicated maximization in (30) yields:

$$\begin{aligned} I_t^i =&\phi_t^{ (\sigma_t ) i^*} (K) \\ =&\frac{(1+r)^{t-1} }{2c_t^{ (\sigma_t ) i} } \sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} =1}^4 \lambda_{t+1}^{\sigma_{t+1} } \\ &{}\times V_{K_{t+1} }^{ (\sigma_{t+1} ) i} \Biggl[t+1, K+\sum_{j = 1}^n \phi_t^{ (\sigma_t ) j^*} (K) - \delta K +\vartheta_t^y \Biggr], \end{aligned}$$
(31)

for i∈N, t∈{1,2,3}, σ 1=1, and σ τ ∈{1,2,3,4} for τ∈{2,3}.

Proposition 1

The value function which represents the expected payoff of agent i can be obtained as:

$$ V^{ (\sigma_t ) i} (t, K) = \bigl[A_t^{ (\sigma_t ) i} K + C_t^{ (\sigma_t ) i}\bigr] (1+r)^{-(t-1)} , $$

for i∈N, t∈{1,2,3}, σ 1=1, and σ τ ∈{1,2,3,4} for τ∈{2,3}, where

$$\begin{aligned} &A_3^{ (\sigma_3 ) i} = \alpha_3^{ ( \sigma_3 ) i} + q^i (1-\delta) (1+r)^{-1}, \quad\textit{and} \\ &C_3^{ (\sigma_3 ) i} = -\frac{ (q^i )^2 (1+r)^{-2} }{ 4c_3^{ (\sigma_3 ) i} } + \Biggl[q^i \sum_{j = 1}^n \frac{q^j (1+r)^{-1}}{ 2 c_3^{ (\sigma_3 ) j} } +q^i \varpi_3 +m^i \Biggr] (1+r)^{-1} ; \\ &A_2^{ (\sigma_2 ) i} = \alpha_2^{ (\sigma_2 ) i} + \sum_{\sigma_3 =1}^4 \lambda_3^{\sigma_3 } A_3^{ ( \sigma_3 ) i} (1-\delta) (1+r)^{-1}, \quad\textit{and} \\ &C_2^{ ( \sigma_2 ) i} = -\frac{1}{4c_2^{ (\sigma_2 ) i} } \Biggl( \sum _{\sigma_3 =1}^4 \lambda_3^{\sigma_3 } A_3^{ ( \sigma _3 ) i} (1+r)^{-1} \Biggr)^2 \\ &\hphantom{C_2^{ ( \sigma_2 ) i}=} {} +\sum_{\sigma_3 =1}^4 \lambda_3^{\sigma_3 } \Biggl[ A_3^{ (\sigma_3 ) i} \Biggl( \sum_{j = 1}^n \sum _{\hat{\sigma}_3 =1}^4 \lambda_3^{\hat{\sigma}_3} \frac{A_3^{ (\hat{\sigma}_3 ) j} (1+r)^{-1} }{ 2c_2^{ (\sigma_2 ) j} } + \varpi_2 \Biggr) + C_3^{ (\sigma_3 ) i} \Biggr]\\ &\hphantom{C_2^{ ( \sigma_1 ) i}=} {}\times (1+r)^{-1}; \\ &A_1^{ (\sigma_1 ) i} =\alpha_1^{ (\sigma_1 ) i} + \sum_{\sigma_2 =1}^4 \lambda_2^{\sigma_2 } A_2^{ (\sigma_2 ) i} (1-\delta) (1+r)^{-1}, \quad\textit{and} \\ &C_1^{ (\sigma_1 ) i} = -\frac{1}{4 c_1^{ ( \sigma_1 ) i} } \Biggl( \sum _{\sigma_2 = 1}^4 \lambda_2^{\sigma_2 } A_2^{ (\sigma_2 ) i} (1+r)^{-1} \Biggr)^2 \\ & \hphantom{C_1^{ (\sigma_1 ) i}=} {} +\sum_{\sigma_2 =1}^4 \lambda_2^{\sigma_2 } \Biggl[ A_2^{ (\sigma_2 ) i} \Biggl( \sum_{j=1}^n \sum _{\hat{\sigma}_2 =1}^4 \lambda_2^{\hat{\sigma}_2} \frac{A_2^{ (\hat{\sigma}_2 ) j} (1+r)^{-1} }{ 2 c_1^{ (\sigma_1 ) j} } +\varpi_1 \Biggr)+C_2^{ (\sigma_2 ) i} \Biggr] \\ &\hphantom{C_2^{ ( \sigma_1 ) i}=} {}\times(1+r)^{-1}; \end{aligned}$$

for i∈N.

Proof

See Appendix. □

Substituting the relevant derivatives of the value functions in Proposition 1 into the game equilibrium strategies (31) yields a noncooperative Nash equilibrium solution of the game (28)–(29).
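
The coefficients in Proposition 1 can be generated by a short backward recursion. The Python sketch below implements it for hypothetical parameter values; alpha[t] and c[t] hold one row per payoff branch σ t and one column per agent, and every number is an illustrative assumption:

```python
import numpy as np

# Backward recursion for the coefficients A_t^{(sigma_t) i}, C_t^{(sigma_t) i}
# of Proposition 1. All parameter values are illustrative assumptions.
n, r, delta = 2, 0.05, 0.1
disc = 1.0 / (1 + r)
q, m = np.array([0.3, 0.2]), np.array([1.0, 1.0])
varpi = {1: 0.25, 2: 0.25, 3: 0.25}                 # E[vartheta_t]
lam = {2: np.full(4, 0.25), 3: np.full(4, 0.25)}    # branch probabilities
alpha = {1: np.ones((1, n)),
         2: np.array([[1.0, 0.8], [1.1, 0.9], [0.9, 0.7], [1.2, 1.0]]),
         3: np.array([[1.0, 0.8], [1.1, 0.9], [0.9, 0.7], [1.2, 1.0]])}
c = {1: np.full((1, n), 0.5),
     2: np.full((4, n), 0.6),
     3: np.full((4, n), 0.7)}

# Stage 3: one row per branch sigma_3, one column per agent i.
A3 = alpha[3] + q * (1 - delta) * disc
C3 = (-(q ** 2) * disc ** 2 / (4 * c[3])
      + (q * np.sum(q * disc / (2 * c[3]), axis=1, keepdims=True)
         + q * varpi[3] + m) * disc)

def step_back(A_next, C_next, lam_next, alpha_t, c_t, varpi_t):
    """One step of the recursion: stage-t coefficients from stage t+1."""
    EA = lam_next @ A_next    # expected A_{t+1}^i over branches, shape (n,)
    EC = lam_next @ C_next    # expected C_{t+1}^i over branches, shape (n,)
    A_t = alpha_t + EA * (1 - delta) * disc
    cross = np.sum(EA * disc / (2 * c_t), axis=1, keepdims=True)
    C_t = -(EA * disc) ** 2 / (4 * c_t) + (EA * (cross + varpi_t) + EC) * disc
    return A_t, C_t

A2, C2 = step_back(A3, C3, lam[3], alpha[2], c[2], varpi[2])
A1, C1 = step_back(A2, C2, lam[2], alpha[1], c[1], varpi[1])
print(A1, C1)
```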

5.3 Cooperative Provision of Public Capital

Now we consider the case when the agents agree to cooperate and seek to enhance their gains. They agree to maximize their expected joint gain and distribute the cooperative gain proportional to their expected non-cooperative gains. The agents would first maximize their expected joint payoff

$$\begin{aligned} & E_{\theta_1 ,\theta_2 ,\theta_3 ;\vartheta_1 ,\vartheta_2 ,\vartheta_3 } \Biggl\{ \sum_{j=1}^n \sum_{\tau=1}^3 \bigl[\alpha_{\tau}^{ (\sigma _\tau ) j} K_\tau - c_\tau^{ (\sigma_\tau ) j} \bigl(I_\tau^j \bigr)^2 \bigr] (1+r)^{-(\tau-1)} \\ &\quad {} +\sum_{j=1}^n \bigl(q^j K_4 +m^j \bigr) (1+r)^{-3} \Biggr\} , \end{aligned}$$
(32)

subject to the stochastic dynamics (28).

Invoking Theorem 2, one can characterize the solution of the stochastic dynamic programming problem (28) and (32) as follows. In particular, a set of control strategies \(\{I_{t}^{ (\sigma_{t} ) i^{*}} = \psi_{t}^{ ( \sigma_{t} ) i^{*}} (K) \mbox{, for } t \in\{1,2,3\}\mbox{ and } i \in N, \sigma_{1} = 1, \sigma_{\tau}\in\{ 1,2,3,4\} \mbox{ for } \tau\in\{2,3\}\}\) provides an optimal solution to the problem (28) and (32), if there exist functions \(W^{ ( \sigma_{t} )} (t, K)\), for t∈{1,2,3}, such that the following recursive relations are satisfied:

$$ \begin{aligned} &W^{ (\sigma_4 )} (4, K) = \sum_{j=1}^{n} \bigl(q^j K + m^j \bigr) (1+r)^{-3} ; \\ &W^{ (\sigma_t )} (t, K) = \max_{I_t^1 ,I_t^2, \ldots,I_t^n } E_{\vartheta_t } \Biggl\{ \sum_{j=1}^n \bigl[\alpha_t^{ (\sigma_t ) j} K - c_t^{ (\sigma_t ) j} \bigl(I_t^j \bigr)^2 \bigr](1+r)^{-(t-1)} \\ &\hphantom{W^{ (\sigma_t )} (t, K) =} {} +\sum_{\sigma_{t+1} =1}^4 \lambda_{t+1}^{\sigma_{t+1} } W^{ (\sigma_{t+1} )} \Biggl[t+1, K+ \sum _{j=1}^n I_t^j - \delta K +\vartheta_t \Biggr] \Biggr\} \\ & \hphantom{W^{ (\sigma_t )} (t, K) }= \max_{I_t^1 ,I_t^2, \ldots,I_t^n } \Biggl\{ \sum _{j=1}^n \bigl[\alpha_t^{ (\sigma_t ) j} K - c_t^{ (\sigma_t ) j} \bigl(I_t^j \bigr)^2 \bigr] (1+r)^{-(t-1)} \\ & \hphantom{W^{ (\sigma_t )} (t, K) =} {} +\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} =1}^4 \lambda_{t+1}^{\sigma_{t+1} } W^{ (\sigma_{t+1} )} \Biggl[t+1, K+ \sum _{j=1}^n I_t^j - \delta K +\vartheta_t^y \Biggr] \Biggr\} \\ & \hphantom{W^{ (\sigma_t )} (t, K) =} \mbox{ for } t\in\{ 1,2,3\}. \end{aligned} $$
(33)

Performing the indicated maximization in (33) yields:

$$\begin{aligned} I_t^i =&\psi_t^{ (\sigma_t ) i^*} (K) \\ =&\frac{(1+r)^{t-1} }{2 c_t^{ (\sigma_t ) i} } \sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} = 1}^4 \lambda_{t+1}^{\sigma_{t+1} } \\ &{}\times W_{K_{t+1} }^{ (\sigma_{t+1} )} \Biggl[t+1, K+\sum_{j=1}^n \psi_t^{ (\sigma_t ) j^*} (K) -\delta K +\vartheta_t^y \Biggr], \end{aligned}$$
(34)

for i∈N, t∈{1,2,3}, σ 1=1, and σ τ ∈{1,2,3,4} for τ∈{2,3}.

Proposition 2

The value function which represents the expected joint payoff of agents can be obtained as:

$$ W^{ (\sigma_t )} (t, K) = \bigl[A_t^{ (\sigma_t )} K +C_t^{ (\sigma_t )} \bigr] (1+r)^{-(t-1)} , $$

for t∈{1,2,3}, σ 1=1, and σ τ ∈{1,2,3,4} for τ∈{2,3}, where

$$\begin{aligned} &A_3^{ (\sigma_3 )} = \sum_{j=1}^n \alpha_3^{ (\sigma_3 ) j} + \sum_{j=1}^n q^j (1-\delta) (1+r)^{-1},\quad \textit{and} \\ &C_3^{ (\sigma_3 )} = - \sum _{j=1}^n \frac{ ( \sum_{h=1}^n q^h (1+r)^{-1} )^{ 2} }{4 c_3^{ ( \sigma_3 ) j} } \\ &\hphantom{C_3^{ (\sigma_3 )} =} {} +\sum_{j=1}^n \Biggl[ q^j \Biggl( \sum_{\ell= 1}^n \frac{\sum_{h=1}^n q^h (1+r)^{-1} }{2 c_3^{ ( \sigma_3 ) \ell} } +\varpi_3 \Biggr)+ m^j \Biggr] (1+r)^{-1} ; \\ &A_2^{ (\sigma_2 )} = \sum _{j=1}^n \alpha_2^{ (\sigma_2 ) j} + \sum_{\sigma_3 = 1}^4 \lambda_3^{\sigma_3} A_3^{ (\sigma_3 )} (1-\delta) (1+r)^{-1}, \quad \textit{and} \\ &C_2^{ (\sigma_2 )} = - \sum _{j = 1}^n \frac{1}{4 c_2^{ (\sigma_2 ) j}} \Biggl( \sum _{\sigma_3 = 1}^4 \lambda_3^{\sigma_3} A_3^{ (\sigma_3 )} (1+r)^{-1} \Biggr)^{2} \\ & \hphantom{C_2^{ (\sigma_2 )} =} {} +\sum_{\sigma_3 = 1}^4 \lambda_3^{\sigma_3} \Biggl[ A_3^{ (\sigma_3 )} \Biggl( \sum_{j = 1}^n \sum _{\hat{\sigma}_3 = 1}^4 \lambda_3^{\hat{\sigma}_3} \frac{A_3^{ ( \hat{\sigma}_3 )} (1+r)^{-1} }{ 2 c_2^{ (\sigma_2 ) j}} + \varpi_2 \Biggr) + C_3^{ ( \sigma_3 )} \Biggr] \\ & \hphantom{C_2^{ (\sigma_2 )} =} {}\times(1+r)^{-1}; \\ &A_1^{ (\sigma_1 )} = \sum _{j = 1}^n \alpha_1^{ (\sigma _1 ) j} + \sum_{\sigma_2 = 1}^4 \lambda_2^{\sigma_2} A_2^{ (\sigma_2 )} (1-\delta) (1+r)^{-1}, \quad\textit{and} \\ &C_1^{ (\sigma_1 )} = - \sum _{j = 1}^n \frac{1}{ 4 c_1^{ (\sigma_1 ) j}} \Biggl( \sum _{\sigma_2 = 1}^4 \lambda_2^{\sigma_2} A_2^{ (\sigma_2 )} (1+r)^{-1} \Biggr)^{2} \\ &\hphantom{C_1^{ (\sigma_1 )} =} {} + \sum_{\sigma_2 = 1}^4 \lambda_2^{\sigma_2} \Biggl[ A_{2}^{ (\sigma_2 )} \Biggl( \sum_{j = 1}^n \sum _{\hat{\sigma}_2 = 1}^4 \lambda_2^{\hat{\sigma}_2} \frac{A_2^{ (\hat{\sigma}_2 )} (1+r)^{-1}}{ 2 c_1^{ (\sigma_1 ) j}} + \varpi_1 \Biggr) + C_2^{ (\sigma_2 )} \Biggr] \\ &\hphantom{C_1^{ (\sigma_1 )} =} {}\times(1+r)^{-1}. \end{aligned}$$

Proof

Follow the proof of Proposition 1. □

Using (34) and Proposition 2, the optimal cooperative strategies of the agents can be obtained as:

$$\begin{aligned} \psi_3^{ (\sigma_3 ) i^*} (K) =& \frac{\sum_{h = 1}^n q^h (1+r)^{-1} }{ 2 c_3^{ (\sigma_3 ) i}} , \\ \psi_2^{ (\sigma_2 )i^*} (K) =& \sum _{\sigma_3 = 1}^4 \lambda_3^{\sigma_3} \frac{A_3^{ (\sigma_3 )} (1+r)^{-1}}{ 2 c_2^{ (\sigma_2 ) i}} , \\ \psi_1^{ (\sigma_1 ) i^*} (K) =& \sum _{\sigma_2 =1}^4 \lambda_2^{\sigma_2} \frac{A_2^{ (\sigma_2 )} (1+r)^{-1}}{ 2 c_1^{ (\sigma_1 ) i} }, \quad\mbox{for } i\in N. \end{aligned}$$
(35)

Substituting \(\psi_{t}^{ (\sigma_{t} ) i^{*}} (K)\) from (35) into (28) yields the optimal cooperative accumulation dynamics:

$$ K_{t+1} = K_t + \sum _{j = 1}^n \sum_{\sigma_{t+1} = 1}^4 \lambda_{t+1}^{\sigma_{t+1}} \frac{A_{t+1}^{ (\sigma_{t+1} )} (1+r)^{-1}}{2 c_t^{ (\sigma_t ) j}} - \delta K_{t} + \vartheta_t,\quad K_1 =K^0, $$
(36)

if \(\theta_{t}^{\sigma_{t} } \) occurs at stage t, for t∈{1,2,3}.
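
Given joint coefficients in the form of Proposition 2, the cooperative path (36) is immediate to simulate. The sketch below reuses disc, delta, lam, c and q from the preceding sketch, supplies hypothetical joint coefficients A[t] (one entry per branch σ t), and evaluates the investments along the first payoff branch at each stage:

```python
# Simulate the cooperative accumulation path (36) under hypothetical joint
# coefficients A[t+1] (per branch sigma_{t+1}); reuses disc, delta, lam, c, q
# from the previous sketch, and picks branch index 0 at every stage.
rng = np.random.default_rng(1)
shock_vals = np.array([0.0, 0.25, 0.5])            # support of vartheta_t
shock_probs = np.array([0.3, 0.4, 0.3])
A = {2: np.full(4, 2.0), 3: np.full(4, 1.8), 4: np.array([q.sum()])}
lam[4] = np.array([1.0])                           # single terminal branch

K = 1.0                                            # K_1 = K^0
for t in (1, 2, 3):
    EA_next = lam[t + 1] @ A[t + 1]                # E[A_{t+1}] over branches
    invest = np.sum(EA_next * disc / (2 * c[t][0]))  # sum_j psi_t^{j*}
    K = K + invest - delta * K + rng.choice(shock_vals, p=shock_probs)
    print(f"K_{t + 1} = {K:.3f}")
```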

5.4 Subgame Consistent Cooperative Solution

Given that the agents agree to share the cooperative gain proportional to their expected non-cooperative payoffs, an imputation

$$\begin{aligned} \xi^{ (\sigma_t ) i} \bigl(t, K_t^* \bigr) =& \frac{V^{ (\sigma_t ) i} (t, K_t^* )}{\sum_{j = 1}^n V^{ (\sigma_t ) j} (t, K_t^* )} W^{ (\sigma_t )} \bigl(t, K_t^* \bigr) \\ =& \frac{[A_t^{ (\sigma_t ) i} K_t^* + C_t^{ (\sigma_t ) i} ]}{ \sum_{j = 1}^n [A_t^{ ( \sigma_t ) j} K_t^* + C_t^{ (\sigma_t ) j} ]} \bigl[ A_t^{ (\sigma_t )} K_t^* + C_t^{ (\sigma_t )} \bigr] (1+r)^{-(t-1)}, \\ &{}\mbox{ for } i\in N, \end{aligned}$$
(37)

if \(\theta_{t}^{\sigma_{t} } \) occurs at stage t, for t∈{1,2,3}, has to be maintained.

Invoking Theorem 3, if \(\theta_{t}^{\sigma_{t} } \) occurs and \(K_{t}^{*} \in X_{t}^{*} \) is realized at stage t a payment equaling

$$\begin{aligned} B_t^{ (\sigma_t ) i} \bigl(K_t^*\bigr) =& (1+r)^{(t-1)} \Biggl\{ \xi^{ (\sigma_t ) i} \bigl(t, K_t^* \bigr) \\ & {}- \Biggl[\sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} = 1}^{\eta_{t+1}} \lambda_{t+1}^{\sigma_{t+1}} \\ &{}\times\Biggl(\xi^{ (\sigma_{t+1} ) i} \Biggl[t+1, K_t^* + \sum_{h=1}^n \psi_{t}^{ (\sigma_t ) h^*} \bigl(K_t^*\bigr) - \delta K_t^* + \vartheta_t^y \Biggr] \Biggr) \Biggr] \Biggr\} \\ =& \frac{A_t^{ (\sigma_t ) i} K_t^* + C_t^{ (\sigma_t ) i} }{ \sum_{j = 1}^n [A_t^{ (\sigma_t ) j} K_t^* + C_t^{ (\sigma_t ) j} ]} \bigl[A_t^{ (\sigma_t )} K_t^* + C_t^{ (\sigma_t )} \bigr] \\ & {} - \sum_{y=1}^3 \gamma_t^y \sum_{\sigma_{t+1} = 1}^{\eta_{t+1} } \lambda_{t+1}^{\sigma_{t+1}} \frac{A_{t+1}^{ (\sigma_{t+1} ) i} K_{t+1} (\sigma_{t+1}, \vartheta_t^y ) + C_{t+1}^{ (\sigma _{t+1} ) i} }{ \sum_{j = 1}^n [A_{t+1}^{ (\sigma_{t+1} ) j} K_{t+1} (\sigma_{t+1}, \vartheta_t^y) + C_{t+1}^{ (\sigma_{t+1} ) j} ]} \\ & {}\times\bigl[ A_{t+1}^{ (\sigma_{t+1} )} K_{t+1} \bigl(\sigma_{t+1}, \vartheta_t^y\bigr) +C_{t+1}^{ (\sigma_{t+1} )} \bigr] (1+r)^{-1}, \end{aligned}$$
(38)

where

$$K_{t+1} \bigl(\sigma_{t+1}, \vartheta_t^y \bigr) = K_t^* + \sum_{j = 1}^n \sum_{\sigma_{t+1} = 1}^4 \lambda_{t+1}^{\sigma_{t+1}} \frac{A_{t+1}^{ (\sigma_{t+1} )} (1+r)^{-1} }{ 2 c_t^{ (\sigma_t ) j}} - \delta K_t^* + \vartheta_t^y, $$

given to agent i at stage t∈{1,2,3} if \(\theta_{t}^{\sigma_{t}} \) occurs would lead to the realization of the imputation (37).

A subgame consistent solution and the corresponding payment schemes can be obtained using Propositions 1 and 2 and conditions (35)–(38).

Finally, since all agents are adopting the cooperative strategies, the payoff that agent i will directly receive at stage t is

$$\alpha_t^{ (\sigma_t ) i} K_t^* - \frac{1}{4 c_t^{ (\sigma_t ) i}} \Biggl( \sum_{\sigma_{t+1} = 1}^4 \lambda_{t+1}^{\sigma_{t+1}} A_{t+1}^{ (\sigma_{t+1} )} (1+r)^{-1} \Biggr)^{ 2}, $$

if \(\theta_{t}^{\sigma_{t}} \) occurs at stage t.

However, according to the agreed-upon imputation, agent i is to receive \(B_t^{ (\sigma_{t} ) i} (K_{t}^{*})\) as given in (38), therefore a transfer payment (which can be positive or negative) equaling

$$ \pi^{ (\sigma_t ) i} \bigl(t, K_t^* \bigr)= B_t^{ (\sigma_t ) i} \bigl(K_t^*\bigr) - \alpha_t^{ (\sigma_t ) i} K_t^* + \frac{1}{4 c_t^{ (\sigma_t ) i} } \Biggl( \sum _{\sigma_{t+1} = 1}^4 \lambda_{t+1}^{\sigma_{t+1} } A_{t+1}^{ (\sigma_{t+1} )} (1+r)^{-1} \Biggr)^{2} $$

will be given to agent i∈N at stage t.

6 Concluding Remarks

An essential characteristic of decision making over time is that even though the decision-maker has gathered all available past and present information, the precise state of the future generally cannot be foreseen with absolute certainty. An empirically meaningful theory must therefore incorporate relevant uncertainties in an appropriate manner. This chapter resolves the classical problem of market failure in the provision of public goods with a subgame consistent cooperative scheme, taking into consideration two types of commonly observed uncertainties: stochastic stock accumulation dynamics and uncertain future payoff structures. A scheme that guarantees that the agreed-upon optimality principle is maintained in any subgame, and thus provides the basis for sustainable cooperation, is derived. A “payoff distribution procedure” leading to subgame-consistent solutions is developed. An illustrative example is presented to demonstrate the derivation of a subgame consistent solution for a public goods provision game under these uncertainties. The analysis can readily be extended to a paradigm with multiple public capital goods. This is the first time that subgame consistent cooperative provision of public goods has been analysed under uncertainties in both the accumulation dynamics and future payoff structures. Further research and applications are expected.