
1 Introduction

A static stochastic decentralized optimization problem is considered, in which a team of two decision makers/players is at work. The cost function is

$$\displaystyle\begin{array}{rcl} J = J(u,v,\zeta )& &{}\end{array}$$
(1)

where \(u \in {R}^{m_{u}}\) and \(v \in {R}^{m_{v}}\) are the two players’ respective decision variables/controls. The state of nature \(\zeta \in {R}^{n}\), n ≥ 2, is a random variable whose p.d.f. f(ζ) is known to both players; this prior information is public. The random variable ζ is partitioned

$$\displaystyle\begin{array}{rcl} \zeta = {(\zeta _{1},\zeta _{2})}^{T}& & {}\\ \end{array}$$

and the information pattern is as follows. At decision time the component ζ 1 is known to the player whose control is u, the u-player, and the component ζ 2 is known to the player whose control is v, the v-player. Thus, both players have imperfect information. The u-player is oblivious of the ζ 2 component of the random variable, which is the v-player’s private information, and consequently the strategy of the u-player is u = u(ζ 1). Likewise, the v-player is oblivious of the ζ 1 component of the random variable, which is the u-player’s private information, and consequently his or her strategy is v = v(ζ 2). In short, the players have partial, or incomplete, information.

To obtain the optimal solution/strategies of the team/decentralized optimization problem, the following optimization problem in Hilbert space must be solved.

$$\displaystyle\begin{array}{rcl}{ J}^{{\ast}}& =& \min _{ u(\zeta _{1}),v(\zeta _{2})}\ E_{\zeta }(\ J(u(\zeta _{1}),v(\zeta _{2}),\zeta )) \\ & =& \min _{u(\zeta _{1}),v(\zeta _{2})}\ \int _{\zeta _{1}}\int _{\zeta _{2}}J(u(\zeta _{1}),v(\zeta _{2}),(\zeta _{1},\zeta _{2}))f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2}{}\end{array}$$
(2)

The instance where the u-player is interested in minimizing the cost function (1) whereas the v-player strives to maximize the cost (1) calls for the formulation of a stochastic zero-sum game with incomplete information, where a saddle point in pure strategies, in Hilbert space, is sought: the value of the game, if it exists, is

$$\displaystyle\begin{array}{rcl}{ J}^{{\ast}}& =& \min _{ u(\zeta _{1})}\max _{v(\zeta _{2})}\ E_{\zeta }(\ J(u(\zeta _{1}),v(\zeta _{2}),\zeta )\ ) \\ & =& \min _{u(\zeta _{1})}\max _{v(\zeta _{2})}\int _{\zeta _{1}}\int _{\zeta _{2}}J(u(\zeta _{1}),v(\zeta _{2}),(\zeta _{1},\zeta _{2}))f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2}{}\end{array}$$
(3)

This static zero-sum game in Hilbert space is in normal form.

In both the decentralized optimization problem posed in (2) and in the zero-sum static game formulation (3), the u- and v-players have partial information. And in both formulations, the players decide on their respective strategies u( ⋅) and v( ⋅) knowing the type of information that will become available to them, but before the information is actually received. In (2) and (3), the players’ strategies are of prior-commitment type. This is why, although the players have partial information, so that their respective costs are conditional on their private information and therefore differ, the game (3) is nevertheless zero-sum. And for the same reason, the solution of the decentralized optimization problem (2) entails the minimization of just one cost functional.

The decentralized stochastic static optimization problem in Hilbert space (2), referred to as a team decision problem, was addressed by Radner in his pioneering paper [1]. The present work could aptly be named “variations on a team by Radner.” Since a strong interest in Witsenhausen’s 1968 counterexample [2] persists to this day, it is important to revisit Radner’s 1962 paper. Indeed, after the appearance of Radner’s paper and until the publication of Witsenhausen’s counterexample, it was widely believed in the controls community that the linear quadratic Gaussian (LQG) paradigm guarantees the applicability of the separation, or certainty equivalence, principle: as in LQG optimal control, the state is Gaussian distributed, the sufficient statistics are linear in the measurements/information and are provided by linear Kalman filters, and consequently the players’ optimal strategies are linear in the sufficient statistic, in particular, in the linear state estimate. However, Radner showed in [1] that in the static Quadratic Gaussian (QG) optimization problem with incomplete information, although the players’ optimal strategies are affine in the information, the separation, or certainty equivalence, principle does not apply. And in [2] Witsenhausen showed that in the simplest decentralized dynamic LQG optimal control problem, neither does the separation principle apply, nor are the optimal strategies linear in the measurements. The bottom line: Radner’s paper [1] relates to Witsenhausen’s paper [2] as Statics relates to Dynamics in Mechanical Engineering. Thus, with a view to also obtaining a better understanding of Witsenhausen’s counterexample, it is instructive to revisit Radner’s work and closely examine the informational and game theoretic aspects of the decentralized static QG optimization problem/team decision problem.

The article is organized as follows. In Sect. 2 the decentralized optimization problem is analyzed using the concept of delayed-commitment strategies and necessary conditions for the existence of a solution are obtained. The necessary conditions derived in Sect. 2 are used in Sect. 3 to directly obtain the solution of the decentralized static multivariate QG optimization problem. The applicability of the separation principle/certainty equivalence is discussed in Sect. 4. The necessary and sufficient conditions for the existence of a solution of the decentralized static multivariate QG optimization problem are discussed in Sect. 5. The solution of the decentralized static multivariate QG optimization problem using the concept of prior-commitment strategies is presented in Sect. 6. It is shown that although in the static case the delayed-commitment and prior-commitment strategies are equivalent, when the concept of prior-commitment strategies is used, the strategies are harder to derive. Finally, in Sect. 7 the decentralized static multivariate QG optimization problem where the players’ information is asymmetric is solved. The structure of the optimal solutions for cases of extreme informational asymmetry yields interesting insights into decentralized optimal control. Conclusions are presented in Sect. 8.

2 Analysis

The solution of the static team/decentralized optimization problem pursued in this paper is based on the following approach. Rather than tackling the Hilbert space optimization problem (2) head on, we instead opt for a game theoretic analysis of the decision problem at hand.

Consider first the decision problem faced by the u-player after he or she has received the information ζ 1, but before anyone has acted. His or her cost is evaluated as follows.

$$\displaystyle\begin{array}{rcl}{ J}^{(u)}(u,v(\cdot );\zeta _{ 1})& =& E_{\zeta }(\ J(u,v(\zeta _{2}),\zeta )\mid \zeta _{1}\ ) {}\\ & =& E_{\zeta _{2}}(\ J(u,v(\zeta _{2}),(\zeta _{1},\zeta _{2}))\mid \zeta _{1}\ ) {}\\ & =& \int _{\zeta _{2}}J(u,v(\zeta _{2}),(\zeta _{1},\zeta _{2}))f(\zeta _{2}\mid \zeta _{1})d\zeta _{2}\ \rightarrow \ \min _{u} {}\\ \end{array}$$

Similar considerations apply to the v-player: having received the ζ 2 information, the cost which the v-player strives to minimize is

$$\displaystyle\begin{array}{rcl}{ J}^{(v)}(u(\cdot ),v;\zeta _{ 2})& =& E_{\zeta }(\ J(u(\zeta _{1}),v,\zeta )\mid \zeta _{2}\ ) {}\\ & =& E_{\zeta _{1}}(\ J(u(\zeta _{1}),v,(\zeta _{1},\zeta _{2}))\mid \zeta _{2}\ ) {}\\ & =& \int _{\zeta _{1}}J(u(\zeta _{1}),v,(\zeta _{1},\zeta _{2}))f(\zeta _{1}\mid \zeta _{2})d\zeta _{1}\ \rightarrow \ \min _{v} {}\\ \end{array}$$

Now, the u- and v-players’ strategies are of delayed-commitment type. Consequently, although both players strive to minimize the cost function (1), since they have partial information, their expected costs are conditional on their private information and will not be the same: each player minimizes his or her own cost functional. The static team problem/decentralized optimal control problem (2) has thus been reformulated as a stochastic nonzero-sum game with incomplete information, and hence a Nash equilibrium is sought. Using delayed-commitment type strategies highlights informational issues which are apparent in extensive-form games but are suppressed in normal-form games.

If a solution to the team/decentralized control problem in the form of a Nash equilibrium exists, it can be obtained as follows.

The u-player’s value function is

$$\displaystyle\begin{array}{rcl}{ ({J}^{(u)}(\zeta _{ 1};{v}^{{\ast}}(\cdot )))}^{{\ast}} =\min _{ u}\int _{\zeta _{2}}J(u,{v}^{{\ast}}(\zeta _{ 2}),(\zeta _{1},\zeta _{2}))f(\zeta _{2}\mid \zeta _{1})d\zeta _{2}& &{}\end{array}$$
(4)

and his or her optimal strategy is obtained as follows: the u-player calculates the vector in \({R}^{m_{u}}\)

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = \mathrm{arg}\ \min _{u}\int _{\zeta _{2}}J(u,{v}^{{\ast}}(\zeta _{ 2}),(\zeta _{1},\zeta _{2}))f(\zeta _{2}\mid \zeta _{1})d\zeta _{2}\ \ \forall \ \zeta _{1}& & {}\\ \end{array}$$

The v-player’s value function is

$$\displaystyle\begin{array}{rcl}{ ({J}^{(v)}(\zeta _{ 2};{u}^{{\ast}}(\cdot )))}^{{\ast}} =\min _{ v}\int _{\zeta _{1}}J({u}^{{\ast}}(\zeta _{ 1}),v,(\zeta _{1},\zeta _{2}))f(\zeta _{1}\mid \zeta _{2})d\zeta _{1}& &{}\end{array}$$
(5)

and his or her optimal strategy is obtained as follows: the v-player calculates the vector in \({R}^{m_{v}}\)

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = \mathrm{arg}\ \min _{v}\int _{\zeta _{1}}J({u}^{{\ast}}(\zeta _{ 1}),v,(\zeta _{1},\zeta _{2}))f(\zeta _{1}\mid \zeta _{2})d\zeta _{1}\ \ \forall \ \zeta _{2}& & {}\\ \end{array}$$

Hence, in order to determine the players’ optimal strategies, that is, the functions u  ∗ ( ⋅) and v  ∗ ( ⋅), the equation in \(u \in {R}^{m_{u}}\),

$$\displaystyle\begin{array}{rcl} \int \frac{\partial } {\partial u}J(u,{v}^{{\ast}}(\zeta _{ 2}),(\zeta _{1},\zeta _{2}))f(\zeta _{2}\mid \zeta _{1})d\zeta _{2} = 0\ \forall \zeta _{1}& &{}\end{array}$$
(6)

must be solved ∀ζ 1, and in this way the u-player’s strategy u  ∗ (ζ 1) is obtained. At the same time the equation in \(v \in {R}^{m_{v}}\)

$$\displaystyle\begin{array}{rcl} \int \frac{\partial } {\partial v}J({u}^{{\ast}}(\zeta _{ 1}),v,(\zeta _{1},\zeta _{2}))f(\zeta _{1}\mid \zeta _{2})d\zeta _{1} = 0\ \forall \zeta _{2}& &{}\end{array}$$
(7)

must be solved ∀ζ 2, and in this way the v-player’s strategy v  ∗ (ζ 2) is obtained. In addition, the following second-order conditions/inequalities must hold

$$\displaystyle\begin{array}{rcl} \int \frac{{\partial }^{2}} {\partial {u}^{2}}J(u,{v}^{{\ast}}(\zeta _{ 2}),(\zeta _{1},\zeta _{2}))\mid _{{u}^{{\ast}}(\zeta _{1})}f(\zeta _{2}\mid \zeta _{1})d\zeta _{2} > 0\ \forall \zeta _{1}& &{}\end{array}$$
(8)

and

$$\displaystyle\begin{array}{rcl} \int \frac{{\partial }^{2}} {\partial {v}^{2}}J({u}^{{\ast}}(\zeta _{ 1}),v,(\zeta _{1},\zeta _{2}))\mid _{{v}^{{\ast}}(\zeta _{2})}f(\zeta _{1}\mid \zeta _{2})d\zeta _{1} > 0\ \forall \zeta _{2}& &{}\end{array}$$
(9)

A set of two coupled functional equations (6) and (7) has been derived whose solution, if it exists, yields the u- and v-players’ respective Nash strategies u  ∗ (ζ 1) and v  ∗ (ζ 2). Evidently, the solution of static team/decentralized optimization problems and/or nonzero-sum stochastic games calls for the solution of a somewhat nonconventional mathematical problem, (6) and (7). The culprit is the partial information pattern.

At this juncture it is apparent that the solution concept advanced for the original team/decentralized control problem is a Nash equilibrium in the nonzero-sum stochastic game (4) and (5). Using delayed-commitment strategies, a Person-By-Person Satisfactory (PBPS) minimization is pursued: the strategy u  ∗ ( ⋅) of the u-player is best, given that the v-player uses the strategy v  ∗ ( ⋅), and the strategy v  ∗ ( ⋅) of the v-player is best, given that the u-player uses the strategy u  ∗ ( ⋅). Thus, the derived strategies \(({u}^{{\ast}}(\cdot ),{v}^{{\ast}}(\cdot ))\) are person-by-person minimal: the outcomes provided by \(({u}^{{\ast}}(\cdot ),{v}^{{\ast}}(\cdot ))\) cannot be improved by unilaterally changing u  ∗ ( ⋅) alone, nor by unilaterally changing v  ∗ ( ⋅) alone, this being the essence of a Nash equilibrium. Now, in nonzero-sum games the calculated Nash equilibrium had better be unique for the solution to be applicable. However, in the absence of conflict of interest, as is the case in our original team/decentralized optimization problem (2), uniqueness of the Nash equilibrium solution is not an issue: the players will naturally settle on that particular Nash equilibrium \(({u}^{{\ast}}(\cdot ),{v}^{{\ast}}(\cdot ))\) which yields the minimal expected cost (2), namely

$$\displaystyle\begin{array}{rcl}{ J}^{{\ast}} = J({u}^{{\ast}}(\cdot ),{v}^{{\ast}}(\cdot ))& =& E_{\zeta }(\ J({u}^{{\ast}}(\zeta _{1}),{v}^{{\ast}}(\zeta _{2}),\zeta )\ ) \\ & =& \int _{\zeta _{1}}\int _{\zeta _{2}}J({u}^{{\ast}}(\zeta _{1}),{v}^{{\ast}}(\zeta _{2}),(\zeta _{1},\zeta _{2}))f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2}{}\end{array}$$
(10)

Uniqueness of the obtained Nash equilibrium follows if the cost function (1) is convex in u and in v. This is so because the weighted sum of convex functions is convex—see (4) and (5).

Clearly, the optimal solution of the original team/decentralized optimization problem (2), if it exists, is PBPS, that is, it is a Nash equilibrium. However, having found a Nash equilibrium, even a unique one, of the nonzero-sum stochastic game (4) and (5) does not guarantee optimality in the original team/decentralized control problem, where one is interested in the expected cost (2). To answer the question of the existence of an optimal solution of the original team/decentralized control problem, the optimization problem (2) must be considered in a Hilbert space setting, as in [1], and convexity in (u, v) of the cost function (1) is required.

In summary, if a solution of the team/decentralized optimization problem exists, the above outlined solution of the attendant nonzero-sum stochastic game (4) and (5) will yield its optimal solution. However, should the cost function (1) be convex in u and v, but not in (u, v), then, while a Nash equilibrium in the nonzero-sum game (4) and (5) might exist, a solution of the decentralized optimization problem (2) might not exist.
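The caveat about joint convexity can be made concrete with a small sketch. Consider a hypothetical quadratic cost J(u, v) = u² + v² − 3uv (an illustrative example, not taken from the problem data; the noise terms are omitted since they do not affect the Hessian). It is convex in u alone and in v alone, yet not convex in (u, v):

```python
import numpy as np

# Hypothetical quadratic cost J(u, v) = u^2 + v^2 - 3uv, for illustration only.
# Its constant Hessian in (u, v):
H = np.array([[2.0, -3.0],
              [-3.0, 2.0]])

# Convex in u alone and in v alone: the diagonal second derivatives are positive.
convex_in_each = bool(H[0, 0] > 0 and H[1, 1] > 0)

# But not jointly convex in (u, v): the Hessian has a negative eigenvalue (here -1).
eigenvalues = np.linalg.eigvalsh(H)
jointly_convex = bool(eigenvalues.min() >= 0)

print(convex_in_each, jointly_convex)  # True False
```

For such a cost, each player's conditional minimization in (4) and (5) is well posed, yet the joint expected cost (2) is unbounded below.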

3 Static Quadratic Gaussian Team

Using the theory developed in Sect. 2, the complete solution of the multivariate QG team decision/decentralized optimization problem is now derived.

The cost function (1), now interpreted as a payoff which each player strives to maximize, is quadratic:

$$\displaystyle\begin{array}{rcl} J(u,v,\zeta ) = -{u}^{T}{R}^{(u)}u - {v}^{T}{R}^{(v)}v + 2{v}^{T}{R}^{(u,v)}u + 2({u}^{T},{v}^{T})\left (\begin{array}{c} \zeta _{1}\\ \zeta _{ 2} \end{array} \right )& & {}\\ \end{array}$$

and the components of the random variable ζ are \(\zeta _{1} \in {R}^{m}\), \(\zeta _{2} \in {R}^{n-m}\). The u- and v-players’ control variables are \(u \in {R}^{m}\) and \(v \in {R}^{n-m}\), and the respective controls’ effort weighting matrices satisfy

$$\displaystyle\begin{array}{rcl}{ R}^{(u)} > 0,\ \ {R}^{(v)} > 0;& & {}\\ \end{array}$$

\({R}^{(u,v)}\) is an (n − m) × m coupling matrix.

We calculate the v-player’s payoff

$$\displaystyle\begin{array}{rcl}{ J}^{(v)}(v,\zeta _{ 2};u(\cdot ))& =& 2{v}^{T}\zeta _{ 2} - {v}^{T}{R}^{(v)}v + 2{v}^{T}{R}^{(u,v)}E_{\zeta _{ 1}}(\ u(\zeta _{1})\mid \zeta _{2}\ ) \\ & & +E_{\zeta _{1}}(\ 2{u}^{T}(\zeta _{ 1})\zeta _{1} - {u}^{T}(\zeta _{ 1}){R}^{(u)}u(\zeta _{ 1})\mid \zeta _{2}\ ) {}\end{array}$$
(11)

Differentiation in v yields the unique optimal control response to the u-player’s strategy u(ζ 1),

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = {({R}^{(v)})}^{-1}\zeta _{ 2} + {({R}^{(v)})}^{-1}{R}^{(u,v)}E_{\zeta _{ 1}}(\ u(\zeta _{1})\mid \zeta _{2}\ )\ \forall \zeta _{2}& &{}\end{array}$$
(12)

The u-player’s payoff is

$$\displaystyle\begin{array}{rcl}{ J}^{(u)}(u,\zeta _{ 1};v(\cdot ))& =& 2{u}^{T}\zeta _{ 1} - {u}^{T}{R}^{(u)}u + 2{u}^{T}{({R}^{(u,v)})}^{T}E_{\zeta _{ 2}}(\ v(\zeta _{2})\mid \zeta _{1}\ ) \\ & & +E_{\zeta _{2}}(\ 2{v}^{T}(\zeta _{ 2})\zeta _{2} - {v}^{T}(\zeta _{ 2}){R}^{(v)}v(\zeta _{ 2})\mid \zeta _{1}\ ) {}\end{array}$$
(13)

and differentiation in u yields the unique optimal control response to the v-player’s strategy v(ζ 2),

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = {({R}^{(u)})}^{-1}\zeta _{ 1} + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}E_{\zeta _{ 2}}(\ v(\zeta _{2})\mid \zeta _{1}\ )\ \forall \zeta _{1}& &{}\end{array}$$
(14)

Furthermore, the positive definiteness of the controls’ effort weighting matrices guarantees that the second-order conditions hold, namely the maximization counterparts of (8) and (9).

At this point we assume that the p.d.f. f of the random variable ζ is a multivariate normal distribution, that is,

$$\displaystyle\begin{array}{rcl} f(\zeta ) = \frac{1} {\sqrt{{(2\pi )}^{n}\det (P)}}\exp \Big(-\frac{1} {2}{(\zeta -\overline{\zeta })}^{T}{P}^{-1}(\zeta -\overline{\zeta })\Big)& & {}\\ \end{array}$$

and the covariance matrix P is real, symmetric, and positive definite. In other words, the random variable

$$\displaystyle\begin{array}{rcl} \zeta = \left (\begin{array}{c} \zeta _{1}\\ \zeta _{ 2} \end{array} \right ) \sim \mathcal{N}\Bigg(\left (\begin{array}{c} \overline{\zeta }_{1}\\ \overline{\zeta } _{ 2} \end{array} \right ),\left [\begin{array}{cc} P_{1,1} & P_{1,2} \\ P_{1,2}^{T}&P_{2,2} \end{array} \right ]\Bigg)& &{}\end{array}$$
(15)

In the special case of a bivariate normal distribution with \(\zeta _{1},\zeta _{2} \in {R}^{1}\),

$$\displaystyle\begin{array}{rcl} \zeta \sim \mathcal{N}\Bigg(\left (\begin{array}{c} \overline{\zeta }_{1}\\ \overline{\zeta } _{ 2} \end{array} \right ),\left [\begin{array}{cc} \sigma _{1}^{2} & \rho \sigma _{1}\sigma _{2} \\ \rho \sigma _{1}\sigma _{2} & \sigma _{2}^{2} \end{array} \right ]\Bigg)& &{}\end{array}$$
(16)

and − 1 < ρ < 1.

The following is well known.

Lemma 1.

Consider the multivariate normal distribution (15). The distribution of ζ 1 conditional on ζ 2 is

$$\displaystyle\begin{array}{rcl} \zeta _{1} \sim \mathcal{N}(\overline{\zeta }_{1} + P_{1,2}P_{2,2}^{-1}(\zeta _{ 2} -\overline{\zeta }_{2}),P_{1,1} - P_{1,2}P_{2,2}^{-1}P_{ 1,2}^{T})& &{}\end{array}$$
(17)

and the distribution of ζ 2 conditional on ζ 1 is

$$\displaystyle\begin{array}{rcl} \zeta _{2} \sim \mathcal{N}(\overline{\zeta }_{2} + P_{1,2}^{T}P_{ 1,1}^{-1}(\zeta _{ 1} -\overline{\zeta }_{1}),P_{2,2} - P_{1,2}^{T}P_{ 1,1}^{-1}P_{ 1,2})& &{}\end{array}$$
(18)

The marginal p.d.f.s \(f_{1}(\zeta _{1})\) and \(f_{2}(\zeta _{2})\) are also Gaussian, that is,

$$\displaystyle\begin{array}{rcl} \zeta _{1} \sim \mathcal{N}(\overline{\zeta }_{1},P_{1,1})& &{}\end{array}$$
(19)

and

$$\displaystyle\begin{array}{rcl} \zeta _{2} \sim \mathcal{N}(\overline{\zeta }_{2},P_{2,2})& &{}\end{array}$$
(20)

In the special case of a bivariate normal distribution (16), the distribution of ζ 1 conditional on ζ 2 is

$$\displaystyle\begin{array}{rcl} \zeta _{1} \sim \mathcal{N}\Big(\overline{\zeta }_{1} +\rho \frac{\sigma _{1}} {\sigma _{2}}(\zeta _{2} -\overline{\zeta }_{2}),(1 {-\rho }^{2})\sigma _{ 1}^{2}\Big)& &{}\end{array}$$
(21)

and the distribution of ζ 2 conditional on ζ 1 is

$$\displaystyle\begin{array}{rcl} \zeta _{2} \sim \mathcal{N}\Big(\overline{\zeta }_{2} +\rho \frac{\sigma _{2}} {\sigma _{1}}(\zeta _{1} -\overline{\zeta }_{1}),(1 {-\rho }^{2})\sigma _{ 2}^{2}\Big)& &{}\end{array}$$
(22)

The marginal p.d.f.s \(f_{1}(\zeta _{1})\) and \(f_{2}(\zeta _{2})\) are

$$\displaystyle\begin{array}{rcl} \zeta _{1} \sim \mathcal{N}(\overline{\zeta }_{1},\sigma _{1}^{2})& &{}\end{array}$$
(23)

and

$$\displaystyle\begin{array}{rcl} \zeta _{2} \sim \mathcal{N}(\overline{\zeta }_{2},\sigma _{2}^{2})& &{}\end{array}$$
(24)
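Lemma 1's multivariate formulas specialize to the bivariate expressions above. A minimal numerical sketch checking (17) against (21), with illustrative (assumed) values of σ 1, σ 2, and ρ:

```python
import numpy as np

# Illustrative bivariate parameters (assumed values, chosen only to exercise the lemma).
sigma1, sigma2, rho = 2.0, 0.5, 0.7

# Covariance blocks of (zeta_1, zeta_2) as in (16), written as 1x1 matrices.
P11 = np.array([[sigma1**2]])
P12 = np.array([[rho * sigma1 * sigma2]])
P22 = np.array([[sigma2**2]])

# Conditional mean slope and conditional covariance of zeta_1 given zeta_2, from (17).
slope_general = float(P12 @ np.linalg.inv(P22))
cov_general = float(P11 - P12 @ np.linalg.inv(P22) @ P12.T)

# Bivariate special case, from (21).
slope_bivariate = rho * sigma1 / sigma2
var_bivariate = (1 - rho**2) * sigma1**2

print(slope_general, cov_general)
```

The general conditional slope P 1, 2 P 2, 2 −1 reduces to ρσ 1∕σ 2, and the conditional covariance to (1 − ρ 2)σ 1 2, as the lemma states.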

Inserting (18) into (14) yields

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& {({R}^{(u)})}^{-1}\zeta _{ 1} {}\\ & & +{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}E_{ w_{1}}(\ v(\overline{\zeta }_{2} + P_{1,2}^{T}P_{ 1,1}^{-1}(\zeta _{ 1} -\overline{\zeta }_{1}) + w_{1})\ ) {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} w_{1} \sim \mathcal{N}(0,P_{2,2} - P_{1,2}^{T}P_{ 1,1}^{-1}P_{ 1,2})& & {}\\ \end{array}$$

and inserting (17) into (12) yields

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2})& =& {({R}^{(v)})}^{-1}\zeta _{ 2} {}\\ & & +{({R}^{(v)})}^{-1}{R}^{(u,v)}E_{ w_{2}}(\ u(\overline{\zeta }_{1} + P_{1,2}P_{2,2}^{-1}(\zeta _{ 2} -\overline{\zeta }_{2}) + w_{2})\ ) {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} w_{2} \sim \mathcal{N}(0,P_{1,1} - P_{1,2}P_{2,2}^{-1}P_{ 1,2}^{T})& & {}\\ \end{array}$$

Using the convolution notation we obtain

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& {({R}^{(u)})}^{-1}\zeta _{ 1} {}\\ & & +{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}G_{ P_{2,2}-P_{1,2}^{T}P_{1,1}^{-1}P_{1,2}} {\ast} v(P_{1,2}^{T}P_{ 1,1}^{-1}\zeta _{ 1} + \overline{\zeta }_{2} - P_{1,2}^{T}P_{ 1,1}^{-1}\overline{\zeta }_{ 1}){}\\ \end{array}$$

where the function \(G_{P_{2,2}-P_{1,2}^{T}P_{1,1}^{-1}P_{1,2}}\) is the p.d.f. of the Gaussian random variable w 1. Similarly

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2})& =& {({R}^{(v)})}^{-1}\zeta _{ 2} {}\\ & & +{({R}^{(v)})}^{-1}{R}^{(u,v)}G_{ P_{1,1}-P_{1,2}P_{2,2}^{-1}P_{1,2}^{T}} {\ast} u(P_{1,2}P_{2,2}^{-1}\zeta _{ 2} + \overline{\zeta }_{1} - P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{ 2}){}\\ \end{array}$$

where the function \(G_{P_{1,1}-P_{1,2}P_{2,2}^{-1}P_{1,2}^{T}}\) is the p.d.f. of the Gaussian random variable w 2. Hence, the optimal strategies satisfy the equations

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& {({R}^{(u)})}^{-1}\zeta _{ 1} \\ & & +{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}G_{ P_{2,2}-P_{1,2}^{T}P_{1,1}^{-1}P_{1,2}} {\ast} {v}^{{\ast}}(P_{ 1,2}^{T}P_{ 1,1}^{-1}\zeta _{ 1} + \overline{\zeta }_{2} - P_{1,2}^{T}P_{ 1,1}^{-1}\overline{\zeta }_{ 1}){}\end{array}$$
(25)

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2})& =& {({R}^{(v)})}^{-1}\zeta _{ 2} \\ & & +{({R}^{(v)})}^{-1}{R}^{(u,v)}G_{ P_{1,1}-P_{1,2}P_{2,2}^{-1}P_{1,2}^{T}} {\ast} {u}^{{\ast}}(P_{ 1,2}P_{2,2}^{-1}\zeta _{ 2}+\overline{\zeta }_{1}-P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{ 2}){}\end{array}$$
(26)

Equations (25) and (26) constitute a linear system of two convolution-type Fredholm integral equations of the second kind with Gaussian kernels, in the unknown functions/optimal strategies u  ∗ ( ⋅) and v  ∗ ( ⋅). Moreover, the forcing functions are linear in their arguments. In view of these observations, we apply

Ansatz 2.

The u- and v-players’ optimal strategies are affine, that is,

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = {K}^{(u)}\zeta _{ 1} + {c}^{(u)}& &{}\end{array}$$
(27)

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = {K}^{(v)}\zeta _{ 2} + {c}^{(v)}& &{}\end{array}$$
(28)

 □ 

Inserting the strategies (27) and (28) into the respective (25) and (26), we calculate

$$\displaystyle\begin{array}{rcl}{ K}^{(v)}\zeta _{ 2} + {c}^{(v)}& =& {({R}^{(v)})}^{-1}\zeta _{ 2} + {({R}^{(v)})}^{-1}{R}^{(u,v)}{K}^{(u)}(P_{ 1,2}P_{2,2}^{-1}\zeta _{ 2} + \overline{\zeta }_{1} - P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{ 2}) {}\\ & & +{({R}^{(v)})}^{-1}{R}^{(u,v)}{c}^{(u)}\ \forall \ \zeta _{ 2} {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl}{ K}^{(u)}\zeta _{ 1} + {c}^{(u)}& =& {({R}^{(u)})}^{-1}\zeta _{ 1} + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{K}^{(v)}(P_{ 1,2}^{T}P_{ 1,1}^{-1}\zeta _{ 1} + \overline{\zeta }_{2} - P_{1,2}^{T}P_{ 1,1}^{-1}\overline{\zeta }_{ 1}) {}\\ & & +{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{c}^{(v)}\ \forall \ \zeta _{ 1} {}\\ \end{array}$$

We conclude that the following four linear equations in the four unknowns \(K_{m\times m}^{(u)}\), \(K_{(n-m)\times (n-m)}^{(v)}\), \({c}^{(u)} \in {R}^{m}\) and \({c}^{(v)} \in {R}^{n-m}\) hold:

$$\displaystyle\begin{array}{rcl}{ K}^{(v)}& =& {({R}^{(v)})}^{-1}(I + {R}^{(u,v)}{K}^{(u)}P_{ 1,2}P_{2,2}^{-1})\,{}\end{array}$$
(29)
$$\displaystyle\begin{array}{rcl}{ K}^{(u)}& =& {({R}^{(u)})}^{-1}(I + {({R}^{(u,v)})}^{T}{K}^{(v)}P_{ 1,2}^{T}P_{ 1,1}^{-1})\,{}\end{array}$$
(30)
$$\displaystyle\begin{array}{rcl}{ c}^{(v)}& =& {({R}^{(v)})}^{-1}{R}^{(u,v)}{K}^{(u)}(\overline{\zeta }_{ 1} - P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{ 2}) + {({R}^{(v)})}^{-1}{R}^{(u,v)}{c}^{(u)}\,{}\end{array}$$
(31)

and

$$\displaystyle\begin{array}{rcl}{ c}^{(u)} = {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{K}^{(v)}(\overline{\zeta }_{ 2} - P_{1,2}^{T}P_{ 1,1}^{-1}\overline{\zeta }_{ 1}) + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{c}^{(v)}& &{}\end{array}$$
(32)

Combining (29) and (30) yields the respective linear Lyapunov-type matrix equations for K (u) and K (v),

$$\displaystyle\begin{array}{rcl} & & {R}^{(u)}{K}^{(u)}P_{ 1,1} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)}{K}^{(u)}P_{ 1,2}P_{2,2}^{-1}P_{ 1,2}^{T} \\ & & \quad = P_{1,1} + {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}P_{ 1,2}^{T} {}\end{array}$$
(33)

and

$$\displaystyle\begin{array}{rcl}{ R}^{(v)}{K}^{(v)}P_{ 2,2} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{K}^{(v)}P_{ 1,2}^{T}P_{ 1,1}^{-1}P_{ 1,2} = P_{2,2} + {R}^{(u,v)}{({R}^{(u)})}^{-1}P_{ 1,2}& &{}\end{array}$$
(34)

Solving the linear Lyapunov-type matrix equations (33) and (34) yields the optimal gains K (u) and K (v), whereupon the constant vectors \({c}^{(u)} \in {R}^{m}\) and \({c}^{(v)} \in {R}^{n-m}\) are

$$\displaystyle\begin{array}{rcl} \left (\begin{array}{c} {c}^{(u)} \\ {c}^{(v)} \end{array} \right ) ={ \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ]}^{-1}\left (\begin{array}{c} {({R}^{(u,v)})}^{T}{K}^{(v)}(\overline{\zeta }_{ 2} - P_{1,2}^{T}P_{ 1,1}^{-1}\overline{\zeta }_{ 1}) \\ {R}^{(u,v)}{K}^{(u)}(\overline{\zeta }_{1} - P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{2}) \end{array} \right )& & {}\\ \end{array}$$

Concerning the calculation of the intercepts c (u) and c (v), the following holds.

A necessary condition for the existence of a solution to the multivariate decentralized QG optimization problem is that the Schur complements \({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)}\) and \({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}\) are nonsingular.
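The computation pipeline can be sketched numerically on an illustrative multivariate instance (the dimensions, weightings, covariance, and means below are assumed values): equation (33) is solved for K (u) by Kronecker vectorization, K (v) then follows from (29), and the intercepts follow from the displayed block linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 2, 2  # dimensions of u and v (p = n - m); illustrative sizes

# Illustrative problem data: effort weightings and coupling (assumed values).
Ru = 3.0 * np.eye(m)
Rv = 2.0 * np.eye(p)
Ruv = 0.4 * rng.standard_normal((p, m))

# A positive definite covariance P, partitioned as in (15), and illustrative means.
A = rng.standard_normal((m + p, m + p))
P = A @ A.T + (m + p) * np.eye(m + p)
P11, P12, P22 = P[:m, :m], P[:m, m:], P[m:, m:]
zbar1, zbar2 = rng.standard_normal(m), rng.standard_normal(p)

# Solve the Lyapunov-type equation (33) for K(u) by Kronecker vectorization,
# using vec(A X B) = (B^T kron A) vec(X) with column-major stacking.
Auv = Ruv.T @ np.linalg.solve(Rv, Ruv)        # Ruv^T Rv^{-1} Ruv
B = P12 @ np.linalg.solve(P22, P12.T)         # P12 P22^{-1} P12^T (symmetric)
Cmat = P11 + Ruv.T @ np.linalg.solve(Rv, P12.T)
vecK = np.linalg.solve(np.kron(P11, Ru) - np.kron(B, Auv), Cmat.flatten(order="F"))
Ku = vecK.reshape((m, m), order="F")

# K(v) follows from (29); equation (30) must then hold automatically.
Kv = np.linalg.solve(Rv, np.eye(p) + Ruv @ Ku @ P12 @ np.linalg.inv(P22))
Ku_check = np.linalg.solve(Ru, np.eye(m) + Ruv.T @ Kv @ P12.T @ np.linalg.inv(P11))

# Intercepts c(u), c(v) from the block linear system built from (31)-(32).
Mblk = np.block([[Ru, -Ruv.T], [-Ruv, Rv]])
rhs = np.concatenate([Ruv.T @ Kv @ (zbar2 - P12.T @ np.linalg.solve(P11, zbar1)),
                      Ruv @ Ku @ (zbar1 - P12 @ np.linalg.solve(P22, zbar2))])
c = np.linalg.solve(Mblk, rhs)
cu, cv = c[:m], c[m:]
```

The recovered gains satisfy both (29) and (30), and the intercepts satisfy (31) and (32), confirming that the pair of affine strategies is a fixed point of the best-response maps.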

In the special case where the controls are scalars and the p.d.f. of the random variable ζ is the bivariate normal distribution (16), the optimal gains are

$$\displaystyle\begin{array}{rcl}{ K}^{(u)} = \frac{{R}^{(v)} +\rho \frac{\sigma _{2}} {\sigma _{1}} {R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} {-\rho }^{2}{({R}^{(u,v)})}^{2}}& &{}\end{array}$$
(35)

and

$$\displaystyle\begin{array}{rcl}{ K}^{(v)} = \frac{{R}^{(u)} +\rho \frac{\sigma _{1}} {\sigma _{2}} {R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} {-\rho }^{2}{({R}^{(u,v)})}^{2}}& &{}\end{array}$$
(36)

The intercepts are the solution of the linear system

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{cc} {R}^{(u)} & - {R}^{(u,v)} \\ {R}^{(u,v)} & - {R}^{(v)} \end{array} \right ]\left (\begin{array}{c} {c}^{(u)} \\ {c}^{(v)} \end{array} \right )& =& {R}^{(u,v)}\left (\begin{array}{c} (\overline{\zeta }_{2} -\rho \frac{\sigma _{2}} {\sigma _{1}} \overline{\zeta }_{1}){K}^{(v)} \\ - (\overline{\zeta }_{1} -\rho \frac{\sigma _{1}} {\sigma _{2}} \overline{\zeta }_{2}){K}^{(u)} \end{array} \right ) {}\\ & =& \frac{{R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} {-\rho }^{2}{({R}^{(u,v)})}^{2}} {}\\ & & \left (\begin{array}{c} (\overline{\zeta }_{2} -\rho \frac{\sigma _{2}} {\sigma _{1}} \overline{\zeta }_{1})({R}^{(u)} +\rho \frac{\sigma _{1}} {\sigma _{2}} {R}^{(u,v)}) \\ - (\overline{\zeta }_{1} -\rho \frac{\sigma _{1}} {\sigma _{2}} \overline{\zeta }_{2})({R}^{(v)} +\rho \frac{\sigma _{2}} {\sigma _{1}} {R}^{(u,v)}) \end{array} \right ) {}\\ \end{array}$$

so that

$$\displaystyle\begin{array}{rcl}{ c}^{(u)}& =& \frac{{R}^{(u,v)}} {({R}^{(u)}{R}^{(v)} {-\rho }^{2}{({R}^{(u,v)})}^{2})({({R}^{(u,v)})}^{2} - {R}^{(u)}{R}^{(v)})} \\ & & \Big\{\Big[{(\rho }^{2} - 1){R}^{(u,v)}{R}^{(v)} -\rho \frac{\sigma _{2}} {\sigma _{1}}({({R}^{(u,v)})}^{2} \\ & & \quad - {R}^{(u)}{R}^{(v)})\Big]\overline{\zeta }_{ 1} + {[\rho }^{2}{({R}^{(u,v)})}^{2} - {R}^{(u)}{R}^{(v)}]\overline{\zeta }_{ 2}\Big\} {}\end{array}$$
(37)

and

$$\displaystyle\begin{array}{rcl}{ c}^{(v)}& =& \frac{{R}^{(u,v)}} {({R}^{(u)}{R}^{(v)} {-\rho }^{2}{({R}^{(u,v)})}^{2})({({R}^{(u,v)})}^{2} - {R}^{(u)}{R}^{(v)})} \\ & & \Big\{\Big[{(\rho }^{2} - 1){R}^{(u,v)}{R}^{(u)} -\rho \frac{\sigma _{1}} {\sigma _{2}}({({R}^{(u,v)})}^{2} \\ & & \quad - {R}^{(u)}{R}^{(v)})\Big]\overline{\zeta }_{ 2} + {[\rho }^{2}{({R}^{(u,v)})}^{2} - {R}^{(u)}{R}^{(v)}]\overline{\zeta }_{ 1}\Big\} {}\end{array}$$
(38)
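As a sanity check, the closed-form gains (35) and (36) and the intercept (37) can be verified numerically against the equations they came from; the parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative scalar parameters (assumed values, chosen only to exercise the formulas).
Ru, Rv, Ruv = 2.0, 3.0, 1.0
s1, s2, rho = 1.5, 0.8, 0.6
zb1, zb2 = 0.4, -0.2

den = Ru * Rv - rho**2 * Ruv**2

# Closed-form gains (35) and (36).
Ku = (Rv + rho * (s2 / s1) * Ruv) / den
Kv = (Ru + rho * (s1 / s2) * Ruv) / den

# They must satisfy the scalar specializations of (29) and (30),
# where P12/P22 = rho s1/s2 and P12/P11 = rho s2/s1.
Kv_fp = (1.0 / Rv) * (1.0 + Ruv * Ku * rho * s1 / s2)
Ku_fp = (1.0 / Ru) * (1.0 + Ruv * Kv * rho * s2 / s1)

# Intercepts: solve the displayed 2x2 linear system and compare with (37).
Msys = np.array([[Ru, -Ruv], [Ruv, -Rv]])
rhs = Ruv * np.array([(zb2 - rho * (s2 / s1) * zb1) * Kv,
                      -(zb1 - rho * (s1 / s2) * zb2) * Ku])
cu, cv = np.linalg.solve(Msys, rhs)

cu_formula = (Ruv / (den * (Ruv**2 - Ru * Rv))) * (
    ((rho**2 - 1) * Ruv * Rv - rho * (s2 / s1) * (Ruv**2 - Ru * Rv)) * zb1
    + (rho**2 * Ruv**2 - Ru * Rv) * zb2)
```

The gains are a fixed point of the scalar best-response equations, and the intercept obtained from the linear system agrees with the closed form (37).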

The following holds.

Proposition 3.

The necessary and sufficient conditions for the existence of a solution of the scalar decentralized QG optimization problem using delayed commitment strategies are

$$\displaystyle\begin{array}{rcl} {R}^{(u)}& >& 0, {}\\ {R}^{(v)}& >& 0, {}\\ {R}^{(u)}{R}^{(v)}& \neq & {({R}^{(u,v)})}^{2}, {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl}{ R}^{(u)}{R}^{(v)}{\neq \rho }^{2}{({R}^{(u,v)})}^{2}& & {}\\ \end{array}$$

The u- and v-players’ optimal strategies are specified in (35)–(38) and are determined by the scalar problem parameters R (u) , R (v) , R (u,v) , \(\overline{\zeta }_{1}\) , \(\overline{\zeta }_{2}\) , σ 1 , σ 2 , and ρ. The optimal solution (35)–(38) is symmetric.□

Corollary 4.

In the special scalar case where the random variable’s components ζ 1 and ζ 2 are uncorrelated, that is, ρ = 0, the optimal strategies are

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = \frac{1} {{R}^{(u)}}\zeta _{1} + \frac{{R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left (\frac{{R}^{(u,v)}} {{R}^{(u)}} \overline{\zeta }_{1} + \overline{\zeta }_{2}\right )& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = \frac{1} {{R}^{(v)}}\zeta _{2} + \frac{{R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left (\frac{{R}^{(u,v)}} {{R}^{(v)}} \overline{\zeta }_{2} + \overline{\zeta }_{1}\right )& & {}\\ \end{array}$$

Also, in the special case where in the quadratic cost function there is no coupling and R (u,v) = 0, the optimal strategies are linear:

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = \frac{1} {{R}^{(u)}}\zeta _{1}& &{}\end{array}$$
(39)

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = \frac{1} {{R}^{(v)}}\zeta _{2}& &{}\end{array}$$
(40)

4 Certainty Equivalence

We briefly digress and first examine the centralized static QG optimization problem.

4.1 Centralized QG Optimization Problem

In the centralized static QG optimization problem where both players have complete knowledge of the state of nature \({(\zeta _{1},\zeta _{2})}^{T}\), a necessary and sufficient condition for the existence of an optimal solution is

$$\displaystyle\begin{array}{rcl} M \equiv \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ] > 0& & {}\\ \end{array}$$

and the optimal controls \({({u}^{{\ast}},{v}^{{\ast}})}^{T}\) are

$$\displaystyle\begin{array}{rcl} \left (\begin{array}{c} {u}^{{\ast}} \\ {v}^{{\ast}}\end{array} \right ) ={ \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ]}^{-1}\left (\begin{array}{c} \zeta _{1}\\ \zeta _{ 2} \end{array} \right )& & {}\\ \end{array}$$
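A minimal numerical sketch of the centralized solution, with illustrative (assumed) dimensions and weightings chosen so that M > 0: the computed controls satisfy the first-order stationarity conditions of the quadratic payoff.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 2, 3  # illustrative dimensions of u and v

# Illustrative data (assumed values) with weak coupling, so that M > 0.
Ru = 4.0 * np.eye(m)
Rv = 4.0 * np.eye(p)
Ruv = 0.3 * rng.standard_normal((p, m))
M = np.block([[Ru, -Ruv.T], [-Ruv, Rv]])
M_pd = bool(np.linalg.eigvalsh(M).min() > 0)

# A realized state of nature and the centralized optimal controls (u*, v*) = M^{-1} zeta.
zeta = rng.standard_normal(m + p)
uv = np.linalg.solve(M, zeta)
u, v = uv[:m], uv[m:]

# Stationarity of the quadratic payoff
# J = -u'Ru u - v'Rv v + 2 v'Ruv u + 2 u'zeta_1 + 2 v'zeta_2:
grad_u = -2 * Ru @ u + 2 * Ruv.T @ v + 2 * zeta[:m]
grad_v = -2 * Rv @ v + 2 * Ruv @ u + 2 * zeta[m:]
```

Since the payoff is strictly concave when M > 0, the stationary point is the unique maximizer.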

We shall require the following.

Lemma 5.

Consider the symmetric block matrix

$$\displaystyle\begin{array}{rcl} M = \left [\begin{array}{cc} M_{1,1} & M_{1,2} \\ M_{1,2}^{T}&M_{2,2} \end{array} \right ]& & {}\\ \end{array}$$

and let

$$\displaystyle\begin{array}{rcl} N \equiv {M}^{-1}& & {}\\ \end{array}$$

Assuming the required matrix inverses exist, the inverse matrix is

$$\displaystyle\begin{array}{rcl} N = \left [\begin{array}{cc} N_{1,1} & N_{1,2} \\ N_{1,2}^{T}&N_{2,2} \end{array} \right ]& & {}\\ \end{array}$$

where the blocks are

$$\displaystyle\begin{array}{rcl} N_{1,1}& =& M_{1,1}^{-1}[I + M_{ 1,2}{(M_{2,2} - M_{1,2}^{T}M_{ 1,1}^{-1}M_{ 1,2})}^{-1}M_{ 1,2}^{T}M_{ 1,1}^{-1}] {}\\ N_{1,2}& =& M_{1,1}^{-1}M_{ 1,2}{(M_{1,2}^{T}M_{ 1,1}^{-1}M_{ 1,2} - M_{2,2})}^{-1} {}\\ N_{2,2}& =& -{(M_{1,2}^{T}M_{ 1,1}^{-1}M_{ 1,2} - M_{2,2})}^{-1} {}\\ \end{array}$$

An alternative block-form representation of the inverse matrix N is

$$\displaystyle\begin{array}{rcl} N_{1,1}& =& {(M_{1,1} - M_{1,2}M_{2,2}^{-1}M_{ 1,2}^{T})}^{-1} {}\\ N_{1,2}& =& {(M_{1,2}M_{2,2}^{-1}M_{ 1,2}^{T} - M_{ 1,1})}^{-1}M_{ 1,2}M_{2,2}^{-1} {}\\ N_{2,2}& =& M_{2,2}^{-1} + M_{ 2,2}^{-1}M_{ 1,2}^{T}{(M_{ 1,1} - M_{1,2}M_{2,2}^{-1}M_{ 1,2}^{T})}^{-1}M_{ 1,2}M_{2,2}^{-1} {}\\ \end{array}$$

Proof.

By inspection, and the application of the Matrix Inversion Lemma. □ 
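Lemma 5 lends itself to a numerical spot check: build a random symmetric positive definite matrix, form the blocks of the first representation, and compare against a direct inverse. The test matrix below is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + 5.0 * np.eye(5)          # symmetric positive definite test matrix
M11, M12, M22 = M[:2, :2], M[:2, 2:], M[2:, 2:]
M11_inv, M22_inv = np.linalg.inv(M11), np.linalg.inv(M22)

# First representation of Lemma 5; S is M12^T M11^{-1} M12 - M22
S = M12.T @ M11_inv @ M12 - M22
N11 = M11_inv @ (np.eye(2) + M12 @ np.linalg.inv(-S) @ M12.T @ M11_inv)
N12 = M11_inv @ M12 @ np.linalg.inv(S)
N22 = -np.linalg.inv(S)
N = np.block([[N11, N12], [N12.T, N22]])

# First block of the alternative representation, for comparison
N11_alt = np.linalg.inv(M11 - M12 @ M22_inv @ M12.T)
```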

We shall also require

Lemma 6.

The real symmetric matrix

$$\displaystyle\begin{array}{rcl} M = \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ]& & {}\\ \end{array}$$

is positive definite iff the matrices R (v) > 0, R (u) > 0 and their respective Schur complements are positive definite, that is,

$$\displaystyle\begin{array}{rcl}{ R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)} > 0& & {}\\ {R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T} > 0& & {}\\ \end{array}$$
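Lemma 6 can likewise be spot-checked numerically by comparing the Schur-complement criterion against a direct eigenvalue test of M; the matrices below are illustrative assumptions:

```python
import numpy as np

def is_pd(A):
    # positive definiteness via eigenvalues of the symmetric part
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0))

# Illustrative (assumed) weighting matrices
R_u = np.array([[3.0, 0.5], [0.5, 2.0]])
R_v = np.array([[4.0, 1.0], [1.0, 3.0]])
R_uv = np.array([[0.8, 0.1], [0.2, 0.6]])

M = np.block([[R_u, -R_uv.T], [-R_uv, R_v]])

# Schur complements of R_v and R_u in M, per Lemma 6
schur_u = R_u - R_uv.T @ np.linalg.inv(R_v) @ R_uv
schur_v = R_v - R_uv @ np.linalg.inv(R_u) @ R_uv.T
lemma6_pd = is_pd(R_u) and is_pd(R_v) and is_pd(schur_u) and is_pd(schur_v)
```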

In view of Lemmas 5 and 6, the following holds.

$$\displaystyle\begin{array}{rcl}{ \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ]}^{-1} = \left [\begin{array}{cc} N_{1,1} & N_{1,2} \\ N_{1,2}^{T}&N_{2,2} \end{array} \right ]& & {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} N_{1,1}& =& {({R}^{(u)})}^{-1}[I + {({R}^{(u,v)})}^{T}{({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})}^{-1}{R}^{(u,v)}{({R}^{(u)})}^{-1}] {}\\ N_{1,2}& =& {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})}^{-1} {}\\ N_{2,2}& =& {({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})}^{-1} {}\\ \end{array}$$

or, alternatively,

$$\displaystyle\begin{array}{rcl} N_{1,1}& =& {({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1} {}\\ N_{1,2}& =& {({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1}{({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1} {}\\ N_{2,2}& =& {({R}^{(v)})}^{-1} + {({R}^{(v)})}^{-1}{R}^{(u,v)}{({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1}{({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{}\\ \end{array}$$

Hence, in the centralized scenario the explicit formulae for the optimal controls are

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1},\zeta _{2})& =& {({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1}(\zeta _{ 1} + {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}\zeta _{ 2}){}\end{array}$$
(41)

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 1},\zeta _{2})& =& {({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})}^{-1}({R}^{(u,v)}{({R}^{(u)})}^{-1}\zeta _{ 1} +\zeta _{2}){}\end{array}$$
(42)

Corollary 7.

In the special case where the controls are scalars, the necessary and sufficient conditions for the existence of an optimal solution are

$$\displaystyle\begin{array}{rcl} & & {R}^{(u)} > 0, {}\\ & & {R}^{(v)} > 0, {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl}{ R}^{(u)}{R}^{(v)} > {({R}^{(u,v)})}^{2}& & {}\\ \end{array}$$

The optimal controls are linear and the solution is symmetric:

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1},\zeta _{2})& =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(v)}\zeta _{ 1} + {R}^{(u,v)}\zeta _{ 2}){}\end{array}$$
(43)
$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 1},\zeta _{2})& =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(u,v)}\zeta _{ 1} + {R}^{(u)}\zeta _{ 2}){}\end{array}$$
(44)

4.2 Separation Principle

We now return to the decentralized QG optimization problem and ascertain the applicability of certainty equivalence, a.k.a. the separation principle. We confine our attention to the scalar case and a bivariate Gaussian random variable (16).

When the information available to the u-player is restricted to the ζ 1 component of the state of nature, then, according to Lemma 1, his or her Maximum Likelihood (ML) estimate of the ζ 2 component of the state of nature will be

$$\displaystyle\begin{array}{rcl} \hat{\zeta }_{2} = \overline{\zeta }_{2} +\rho \frac{\sigma _{2}} {\sigma _{1}}(\zeta _{1} -\overline{\zeta }_{1})& & {}\\ \end{array}$$

Similarly, when the information available to the v-player is restricted to the ζ 2 component of the state of nature, then, according to Lemma 1, his or her ML estimate of the ζ 1 component of the state of nature will be

$$\displaystyle\begin{array}{rcl} \hat{\zeta _{1}} = \overline{\zeta }_{1} +\rho \frac{\sigma _{1}} {\sigma _{2}}(\zeta _{2} -\overline{\zeta }_{2})& & {}\\ \end{array}$$
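The two ML (conditional-mean) estimates above are straightforward to code; the prior parameters below are hypothetical:

```python
# Hypothetical bivariate Gaussian prior (means, standard deviations, correlation)
zbar1, zbar2 = 1.0, -2.0
sigma1, sigma2, rho = 2.0, 0.5, 0.6

def zeta2_hat(zeta1):
    # u-player's estimate of zeta2 given the observed zeta1
    return zbar2 + rho * (sigma2 / sigma1) * (zeta1 - zbar1)

def zeta1_hat(zeta2):
    # v-player's estimate of zeta1 given the observed zeta2
    return zbar1 + rho * (sigma1 / sigma2) * (zeta2 - zbar2)
```

Note that each estimate reduces to the prior mean when the observation equals its own mean or when ρ = 0.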

Replacing ζ 2 in the centralized solution given by Corollary 7, (43), by the u-player’s ML estimate \(\hat{\zeta _{2}}\) of ζ 2 yields the u-player’s certainty equivalence-based affine strategy

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left \{{R}^{(v)}\zeta _{ 1} + {R}^{(u,v)}\left [\overline{\zeta }_{ 2} +\rho \frac{\sigma _{2}} {\sigma _{1}}(\zeta _{1} -\overline{\zeta }_{1})\right ]\right \} \\ & =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left [\left ({R}^{(v)} +\rho \frac{\sigma _{2}} {\sigma _{1}}{R}^{(u,v)}\right )\zeta _{ 1} + {R}^{(u,v)}\left (\overline{\zeta }_{ 2} -\rho \frac{\sigma _{2}} {\sigma _{1}}\overline{\zeta }_{1}\right )\right ]{}\end{array}$$
(45)

and replacing ζ 1 in the centralized solution given by Corollary 7, (44), by the v-player’s ML estimate \(\hat{\zeta }_{1}\) of ζ 1 yields the v-player’s affine strategy

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2})& =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left \{{R}^{(u,v)}\left [\overline{\zeta }_{ 1} +\rho \frac{\sigma _{1}} {\sigma _{2}}(\zeta _{2} -\overline{\zeta }_{2})\right ] + {R}^{(u)}\zeta _{ 2}\right \} \\ & =& \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left [\left ({R}^{(u)} +\rho \frac{\sigma _{1}} {\sigma _{2}}{R}^{(u,v)}\right )\zeta _{ 2} + {R}^{(u,v)}\left (\overline{\zeta }_{ 1} -\rho \frac{\sigma _{1}} {\sigma _{2}}\overline{\zeta }_{2}\right )\right ]{}\end{array}$$
(46)
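One can confirm numerically that substituting the ML estimate \(\hat{\zeta }_{2}\) into the centralized rule (43) reproduces the expanded affine form (45); the scalar parameters below are illustrative assumptions:

```python
# Illustrative (assumed) scalar parameters
Ru, Rv, Ruv = 3.0, 2.0, 0.7
zb1, zb2, s1, s2, rho = 1.0, -1.0, 1.5, 0.8, 0.4
D = Ru * Rv - Ruv**2

def u_ce_subst(z1):
    # centralized rule (43) with zeta2 replaced by its ML estimate
    z2_hat = zb2 + rho * (s2 / s1) * (z1 - zb1)
    return (Rv * z1 + Ruv * z2_hat) / D

def u_ce_affine(z1):
    # the expanded affine form (45)
    return ((Rv + rho * (s2 / s1) * Ruv) * z1 + Ruv * (zb2 - rho * (s2 / s1) * zb1)) / D
```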

In the special case where the random variable’s components ζ 1 and ζ 2 are not correlated, that is, ρ = 0, the players’ certainty equivalence-based affine strategies are

$$\displaystyle\begin{array}{rcl} u(\zeta _{1}) = \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(v)}\zeta _{ 1} + {R}^{(u,v)}\overline{\zeta }_{ 2})& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} v(\zeta _{2}) = \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(u)}\zeta _{ 2} + {R}^{(u,v)}\overline{\zeta }_{ 1})& & {}\\ \end{array}$$

In the special case where there is no coupling in the quadratic payoff function, that is, R (u, v) = 0, the players’ certainty equivalence-based strategies are (39) and (40).

5 Discussion

Like the optimal strategies in the decentralized control problem, the certainty equivalence-based strategies (45) and (46) are affine and symmetric. However, comparing the u-player’s optimal strategy, specified in (35) and (37), with his or her certainty equivalence-based strategy (45), and similarly comparing the v-player’s optimal strategy, specified in (36) and (38), with his or her certainty equivalence-based strategy (46), leads one to conclude that certainty equivalence does not hold. This is so even when there is no correlation, that is, ρ = 0. Certainty equivalence holds only in the special case where there is no coupling in the quadratic payoff function, that is, R (u, v) = 0. This state of affairs is attributable to the partial information pattern.

It is also interesting to contrast the conditions for the existence of a solution of the centralized QG optimization problem and the conditions for the existence of a solution of the decentralized QG optimization problem. We note that the solution (41) and (42) of the centralized optimization problem can be formally derived using the PBPS solution concept. For this we need

$$\displaystyle\begin{array}{rcl} & & {R}^{(u)} > 0 {}\\ & & {R}^{(v)} > 0 {}\\ \end{array}$$

and the Schur complements must be nonsingular, that is,

$$\displaystyle\begin{array}{rcl} \det ({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})\neq 0& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \det ({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})\neq 0& & {}\\ \end{array}$$

At the same time, we know that an optimal solution of the centralized optimization problem exists iff the matrix M is positive definite. Hence, in view of Lemma 6, we conclude that the positive definiteness of the Schur complements of the positive definite matrices R (u) and R (v) is a necessary condition for the existence of an optimal solution of the centralized optimization problem. At the same time, the invertibility of the Schur complements, while not sufficient to guarantee the existence of a solution of the centralized optimal control problem, is sufficient to allow a solution of the decentralized optimization problem which conforms to the PBPS solution concept—we have obtained a unique Nash solution, and in the scalar case the respective u- and v-players’ Nash strategies are determined by (35), (37), and (36), (38), respectively.

Now, in view of [1], the positive definiteness of M is sufficient for the existence of an optimal solution of the decentralized optimization problem (2): the necessary and sufficient condition for the existence of a solution of the centralized optimization problem is a sufficient condition for the existence of an optimal solution of the decentralized problem, and moreover, the u- and v-players’ Nash strategies determined by (35), (37), and (36), (38), respectively, are then optimal. However, if the matrix M is not positive definite but the matrices R (u) and R (v) are positive definite and their Schur complements are nonsingular, then while an optimal solution to the centralized optimization problem does not exist, in the decentralized control problem a PBPS solution concept-based unique Nash equilibrium exists.

6 Decentralized Static Quadratic Gaussian Optimization Problem

Radner’s original formulation of the decentralized optimization problem with a quadratic payoff functional, (2), is considered in the special context of the multivariate QG optimization problem:

$$\displaystyle\begin{array}{rcl} J(u(\zeta _{1}),v(\zeta _{2}),\zeta )& =& \int _{\zeta _{1}}\int _{\zeta _{2}}\left [ - {u}^{T}(\zeta _{ 1}){R}^{(u)}u(\zeta _{ 1}) - {v}^{T}(\zeta _{ 2}){R}^{(v)}v(\zeta _{ 2})\right. \\ & & \qquad \qquad + 2{v}^{T}(\zeta _{ 2}){R}^{(u,v)}u(\zeta _{ 1}) \\ & & \qquad \qquad \left.+2({u}^{T}(\zeta _{ 1}),{v}^{T}(\zeta _{ 2}))\left (\begin{array}{c} \zeta _{1}\\ \zeta _{ 2} \end{array} \right )\right ]f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2} \\ & =& \int _{\zeta _{1}}[-{u}^{T}(\zeta _{ 1}){R}^{(u)}u(\zeta _{ 1}) + 2{u}^{T}(\zeta _{ 1})\zeta _{1}]f_{1}(\zeta _{1})d\zeta _{1} \\ & & +\int _{\zeta _{2}}[-{v}^{T}(\zeta _{ 2}){R}^{(v)}v(\zeta _{ 2}) + 2{v}^{T}(\zeta _{ 2})\zeta _{2}]f_{2}(\zeta _{2})d\zeta _{2} \\ & & +2\int _{\zeta _{1}}\int _{\zeta _{2}}{v}^{T}(\zeta _{ 2}){R}^{(u,v)}u(\zeta _{ 1})f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2} {}\end{array}$$
(47)

From [1] we know that optimal prior commitment strategies u  ∗ ( ⋅) and v  ∗ ( ⋅) exist and they are affine, provided the quadratic cost function is convex, that is, the matrix M is positive definite. Thus, the u- and v-players’ strategies are parameterized as follows:

$$\displaystyle\begin{array}{rcl} u(\zeta _{1}) = K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}& &{}\end{array}$$
(48)

and

$$\displaystyle\begin{array}{rcl} v(\zeta _{2}) = K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)}& &{}\end{array}$$
(49)

The subscript p indicates that the strategies are now of the prior commitment type.

Inserting the expressions (48) and (49) into (47) yields

$$\displaystyle\begin{array}{rcl} J(K_{p}^{(u)},K_{ p}^{(v)},c_{ p}^{(u)},c_{ p}^{(v)})& =& -E_{\zeta _{ 1}}(\ {(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}) \\ & & +2{(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}\zeta _{ 1}\ ) \\ & & -E_{\zeta _{2}}(\ {(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(v)}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)}) \\ & & +2{(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}\zeta _{ 2}\ ) \\ & & +2E_{\zeta }(\ {(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})){}\end{array}$$
(50)

The payoff (50) is a function of the parameters \(K_{p}^{(u)}\), \(K_{p}^{(v)}\), \(c_{p}^{(u)}\), and \(c_{p}^{(v)}\).

The payoff function is differentiated in the parameters and the derivatives are set equal to zero; the order of integration and differentiation can be interchanged. We shall use the following notation.

e i is the unit vector in the Euclidean space R m or R n − m, all of whose entries are zero except entry number i, which is 1.

The following calculations are needed.

Lemma 8.

$$\displaystyle\begin{array}{rcl} & & \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) {}\\ & & \quad = 2\zeta _{1}^{T}e_{ j}e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}\zeta _{ 1} + 2\zeta _{1}^{T}e_{ j}e_{i}^{T}{R}^{(u)}c_{ p}^{(u)} {}\\ \end{array}$$

and consequently, using the properties of the Trace operator and the fact that the marginal p.d.f. of ζ 1 is Gaussian with expectation \(\overline{\zeta }_{1}\) and covariance P 1,1 , we calculate

$$\displaystyle\begin{array}{rcl} E_{\zeta _{1}}\left ( \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right )& =& 2e_{ i}^{T}{R}^{(u)}K_{ p}^{(u)}P_{ 1,1}e_{j} {}\\ & & +2\overline{\zeta }_{1}^{T}e_{ j}e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1} {}\\ & & +2\overline{\zeta }_{1}^{T}e_{ j}e_{i}^{T}{R}^{(u)}c_{ p}^{(u)} {}\\ & =& 2e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}P_{ 1,1}e_{j} {}\\ & & +2e_{j}^{T}\overline{\zeta }_{ 1} \cdot e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1} {}\\ & & +2e_{j}^{T}\overline{\zeta }_{ 1} \cdot e_{i}^{T}{R}^{(u)}c_{ p}^{(u)}, {}\\ i& =& 1,\ldots,m,\ j = 1,\ldots,m {}\\ \end{array}$$
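Because the expectation of the quadratic form is itself quadratic in the gain, the formula above can be checked against a central finite difference of the closed form \(\mathrm{tr}({K}^{T}RK(P + \overline{\zeta }\,{\overline{\zeta }}^{T})) + 2{c}^{T}RK\overline{\zeta } + {c}^{T}Rc\); all data below are random test values:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3
B = rng.standard_normal((m, m)); R = B @ B.T + m * np.eye(m)   # symmetric PD weight
C = rng.standard_normal((m, m)); P = C @ C.T + np.eye(m)       # covariance P_{1,1}
zbar = rng.standard_normal(m)                                  # mean of zeta_1
K = rng.standard_normal((m, m))
c = rng.standard_normal(m)
S = P + np.outer(zbar, zbar)                                   # E[zeta_1 zeta_1^T]

def g(Kmat):
    # closed form of E[(K zeta_1 + c)^T R (K zeta_1 + c)]
    return np.trace(Kmat.T @ R @ Kmat @ S) + 2 * c @ R @ Kmat @ zbar + c @ R @ c

i, j, h = 1, 2, 1e-6
Kp, Km = K.copy(), K.copy()
Kp[i, j] += h
Km[i, j] -= h
fd = (g(Kp) - g(Km)) / (2 * h)     # central finite difference in (K)_{i,j}
closed = 2 * (R @ K @ P)[i, j] + 2 * zbar[j] * (R @ K @ zbar)[i] + 2 * zbar[j] * (R @ c)[i]
```

Since g is quadratic in K, the central difference is exact up to floating-point rounding.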

Similarly,

$$\displaystyle\begin{array}{rcl} & & \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(v)}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})) {}\\ & & \quad = 2\zeta _{2}^{T}e_{ j}e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}\zeta _{ 2} + 2\zeta _{2}^{T}e_{ j}e_{i}^{T}{R}^{(v)}c_{ p}^{(v)} {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{2}}\left (\ \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}{(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(v)}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})\right )& =& 2e_{ i}^{T}{R}^{(v)}K_{ p}^{(v)}P_{ 2,2}e_{j} {}\\ & & +2\overline{\zeta }_{2}^{T}e_{ j}e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2} {}\\ & & +2\overline{\zeta }_{2}^{T}e_{ j}e_{i}^{T}{R}^{(v)}c_{ p}^{(v)} {}\\ & =& 2e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}P_{ 2,2}e_{j} {}\\ & & +2e_{j}^{T}\overline{\zeta }_{ 2} \cdot e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2} {}\\ & & +2e_{j}^{T}\overline{\zeta }_{ 2} \cdot e_{i}^{T}{R}^{(v)}c_{ p}^{(v)}\, {}\\ i& =& 1,\ldots,n - m,\ j = 1,\ldots,n - m{}\\ \end{array}$$

In addition

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}(\zeta _{1}^{T}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) =\zeta _{ 1}^{T}e_{ i}e_{j}^{T}\zeta _{ 1}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{1}}\left (\ \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}(\zeta _{1}^{T}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right )& =& e_{ j}^{T}P_{ 1,1}e_{i} + e_{i}^{T}\overline{\zeta }_{ 1} \cdot e_{j}^{T}\overline{\zeta }_{ 1}\, {}\\ i& =& 1,\ldots,m,\ j = 1,\ldots,m {}\\ \end{array}$$

Similarly,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}(\zeta _{2}^{T}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})) = e_{ i}^{T}\zeta _{ 2} \cdot e_{j}^{T}\zeta _{ 2}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{2}}\left (\ \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}(\zeta _{2}^{T}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)}))\right )& =& e_{ j}^{T}P_{ 2,2}e_{i} + e_{i}^{T}\overline{\zeta }_{ 2} \cdot e_{j}^{T}\overline{\zeta }_{ 2}\,\ \ {}\\ i& =& 1,\ldots,n - m,\ j = 1,\ldots,n - m {}\\ \end{array}$$

Also,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) = {(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}e_{ i} \cdot e_{j}^{T}\zeta _{ 1}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta }\left (\ \frac{\partial } {\partial (K_{p}^{(u)})_{i,j}}\ ({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right )& =& e_{ j}^{T}\overline{\zeta }_{ 1} \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}c_{ p}^{(v)} {}\\ & & +e_{j}^{T}\overline{\zeta }_{ 1} \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} {}\\ & & +e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}P_{ 2,1}e_{j}\,\ \ {}\\ i& =& 1,\ldots,m,\ j = 1,\ldots,m {}\\ \end{array}$$

Similarly,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) = {(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u,v)}e_{ i}e_{j}^{T}\zeta _{ 2}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta }\left (\ \frac{\partial } {\partial (K_{p}^{(v)})_{i,j}}\ ({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\ \right )& =& \overline{\zeta }_{ 2}^{T}e_{ j}e_{i}^{T}{({R}^{(u,v)})}^{T}c_{ p}^{(u)} {}\\ & & +\overline{\zeta }_{2}^{T}e_{ j}e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(u)}\overline{\zeta }_{ 1} {}\\ & & +e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(u)}P_{ 1,2}e_{j} {}\\ & =& e_{j}^{T}\overline{\zeta }_{ 2} \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}c_{ p}^{(u)} {}\\ & & +e_{j}^{T}\overline{\zeta }_{ 2} \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(u)}\overline{\zeta }_{ 1} {}\\ & & +e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(u)}P_{ 1,2}e_{j}\, {}\\ i& =& 1,\ldots,n - m,\ j = 1,\ldots,n - m {}\\ \end{array}$$

Furthermore,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) = 2{R}^{(u)}c_{ p}^{(u)} + 2{R}^{(u)}K_{ p}^{(u)}\zeta _{ 1}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{1}}\left (\ \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}{R}^{(u)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right ) = 2{R}^{(u)}c_{ p}^{(u)} + 2{R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1}& & {}\\ \end{array}$$

Similarly,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(v)}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})) = 2{R}^{(v)}c_{ p}^{(v)} + 2{R}^{(v)}K_{ p}^{(v)}\zeta _{ 2}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{2}}\left (\ \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(v)}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)}))\right ) = 2{R}^{(v)}c_{ p}^{(v)} + 2{R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2}& & {}\\ \end{array}$$

In addition,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}\zeta _{ 1}) =\zeta _{1}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{1}}\left (\ \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})}^{T}\zeta _{ 1})\right ) = \overline{\zeta }_{1}& & {}\\ \end{array}$$

Similarly,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}\zeta _{ 2}) =\zeta _{2}& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta _{2}}\left (\ \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}\zeta _{ 2})\right ) = \overline{\zeta }_{2}& & {}\\ \end{array}$$

Finally,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) = {({R}^{(u,v)})}^{T}(K_{ p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta }\left (\ \frac{\partial } {\partial c_{p}^{(u)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right ) = {({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} + {({R}^{(u,v)})}^{T}c_{ p}^{(v)}& & {}\\ \end{array}$$

Similarly,

$$\displaystyle\begin{array}{rcl} \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})) = {R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)})& & {}\\ \end{array}$$

and consequently

$$\displaystyle\begin{array}{rcl} E_{\zeta }\left (\ \frac{\partial } {\partial c_{p}^{(v)}}({(K_{p}^{(v)}\zeta _{ 2} + c_{p}^{(v)})}^{T}{R}^{(u,v)}(K_{ p}^{(u)}\zeta _{ 1} + c_{p}^{(u)}))\right ) = {R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} + {R}^{(u,v)}c_{ p}^{(u)}& & {}\\ \end{array}$$

The optimality conditions and Lemma 8 yield the system of \(n(n + 1) - 2m(n - m)\) linear equations

$$\displaystyle\begin{array}{rcl} & & e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}P_{ 1,1}e_{j} + (e_{j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1} \\ & & \qquad \quad + (e_{j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{R}^{(u)}c_{ p}^{(u)} - (e_{ j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}c_{ p}^{(v)} \\ & & \qquad \quad - (e_{j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} - e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}P_{ 2,1}e_{j} \\ & & \quad = e_{j}^{T}P_{ 1,1}e_{i} + (e_{i}^{T}\overline{\zeta }_{ 1}) \cdot (e_{j}^{T}\overline{\zeta }_{ 1}) {}\end{array}$$
(51)

where \(e_{i},e_{j} \in {R}^{m}\) and i = 1, , m, j = 1, , m,

$$\displaystyle\begin{array}{rcl} & & e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}P_{ 2,2}e_{j} + (e_{j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2} \\ & & \qquad \quad + (e_{j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(v)}c_{ p}^{(v)} - (e_{ j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(u,v)}c_{ p}^{(u)} \\ & & \qquad \quad - (e_{j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} - e_{i}^{T}{R}^{(u,v)}K_{ p}^{(u)}P_{ 1,2}e_{j} \\ & & \quad = e_{j}^{T}P_{ 2,2}e_{i} + (e_{i}^{T}\overline{\zeta }_{ 2}) \cdot (e_{j}^{T}\overline{\zeta }_{ 2}) {}\end{array}$$
(52)

where \(e_{i},e_{j} \in {R}^{n-m}\) and \(i = 1,\ldots,n - m\), \(j = 1,\ldots,n - m\),

$$\displaystyle\begin{array}{rcl}{ ({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} + {({R}^{(u,v)})}^{T}c_{ p}^{(v)} + \overline{\zeta }_{ 1} = {R}^{(u)}c_{ p}^{(u)} + {R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1},& &{}\end{array}$$
(53)

and

$$\displaystyle\begin{array}{rcl}{ R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} + {R}^{(u,v)}c_{ p}^{(u)} + \overline{\zeta }_{ 2} = {R}^{(v)}c_{ p}^{(v)} + {R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2}& &{}\end{array}$$
(54)

The unknowns are \(K_{p}^{(u)}\), an m ×m matrix, \(K_{p}^{(v)}\), an \((n - m) \times (n - m)\) matrix, \(c_{p}^{(u)} \in {R}^{m}\) and \(c_{p}^{(v)} \in {R}^{n-m}\), a total of \(n(n + 1) - 2m(n - m)\) unknowns.

Using (53) and (54) we express the intercepts \(c_{p}^{(u)}\) and \(c_{p}^{(v)}\) as linear functions of \(K_{p}^{(u)}\) and \(K_{p}^{(v)}\):

$$\displaystyle\begin{array}{rcl} \left (\begin{array}{c} c_{p}^{(u)} \\ c_{p}^{(v)} \end{array} \right )& =&{ \left [\begin{array}{cc} {R}^{(u)} & - {({R}^{(u,v)})}^{T} \\ - {R}^{(u,v)} & {R}^{(v)} \end{array} \right ]}^{-1}\left (\begin{array}{c} \overline{\zeta }_{1} + {({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} - {R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1} \\ \overline{\zeta }_{2} + {R}^{(u,v)}K_{p}^{(u)}\overline{\zeta }_{1} - {R}^{(v)}K_{p}^{(v)}\overline{\zeta }_{2} \end{array} \right ){}\\ \end{array}$$

Hence,

$$\displaystyle\begin{array}{rcl} c_{p}^{(u)}& =& {({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1}[\overline{\zeta }_{ 1} + {({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} - {R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1}] {}\\ & & +{({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1}{({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1} {}\\ & & [\overline{\zeta }_{2} + {R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} - {R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2}]\, {}\\ c_{p}^{(v)}& =& {({R}^{(v)})}^{-1}{R}^{(u,v)}{({R}^{(u)} - {({R}^{(u,v)})}^{T}{({R}^{(v)})}^{-1}{R}^{(u,v)})}^{-1} {}\\ & & [\overline{\zeta }_{1} + {({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} - {R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1}] {}\\ & & +{({R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T})}^{-1}[\overline{\zeta }_{ 2} + {R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} - {R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2}] {}\\ \end{array}$$

Substituting these expressions into (51) and (52) yields a reduced linear system of \({n}^{2} - 2m(n - m)\) equations in the \({n}^{2} - 2m(n - m)\) unknowns which populate the matrices \(K_{p}^{(u)}\) and \(K_{p}^{(v)}\). Note that if \(\overline{\zeta }_{1} = 0\) and \(\overline{\zeta }_{2} = 0\), then \(c_{p}^{(u)} = 0\) and \(c_{p}^{(v)} = 0\), and the equations for \(K_{p}^{(u)}\) and \(K_{p}^{(v)}\) are

$$\displaystyle\begin{array}{rcl} & & e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}P_{ 1,1}e_{j} + (e_{j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{R}^{(u)}K_{ p}^{(u)}\overline{\zeta }_{ 1} - (e_{j}^{T}\overline{\zeta }_{ 1}) \cdot e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}\overline{\zeta }_{ 2} {}\\ & & \quad - e_{i}^{T}{({R}^{(u,v)})}^{T}K_{ p}^{(v)}P_{ 2,1}e_{j} = e_{j}^{T}P_{ 1,1}e_{i} + (e_{i}^{T}\overline{\zeta }_{ 1}) \cdot (e_{j}^{T}\overline{\zeta }_{ 1}) {}\\ & & e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}P_{ 2,2}e_{j} + (e_{j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(v)}K_{ p}^{(v)}\overline{\zeta }_{ 2} - (e_{j}^{T}\overline{\zeta }_{ 2}) \cdot e_{i}^{T}{R}^{(u,v)}K_{ p}^{(u)}\overline{\zeta }_{ 1} {}\\ & & \quad - e_{i}^{T}{R}^{(u,v)}K_{ p}^{(u)}P_{ 1,2}e_{j} = e_{j}^{T}P_{ 2,2}e_{i} + (e_{i}^{T}\overline{\zeta }_{ 2}) \cdot (e_{j}^{T}\overline{\zeta }_{ 2}) {}\\ \end{array}$$

Example.

In the special case of scalar controls and a bivariate normal distribution we obtain a system of four linear equations for the four scalar unknowns \(K_{p}^{(u)}\), \(K_{p}^{(v)}\), \(c_{p}^{(u)}\), and \(c_{p}^{(v)}\):

$$\displaystyle\begin{array}{rcl} & & (\sigma _{1}^{2} + \overline{\zeta }_{ 1}^{2}){R}^{(u)}K_{ p}^{(u)} - (\rho \sigma _{ 1}\sigma _{2} + \overline{\zeta }_{1}\overline{\zeta }_{2}){R}^{(u,v)}K_{ p}^{(v)} \\ & & \quad + \overline{\zeta }_{1}{R}^{(u)}c_{ p}^{(u)} -\overline{\zeta }_{ 1}{R}^{(u,v)}c_{ p}^{(v)} =\sigma _{ 1}^{2} + \overline{\zeta }_{ 1}^{2} {}\end{array}$$
(55)
$$\displaystyle\begin{array}{rcl} & & (\sigma _{2}^{2} + \overline{\zeta }_{ 2}^{2}){R}^{(v)}K_{ p}^{(v)} - (\rho \sigma _{ 1}\sigma _{2} + \overline{\zeta }_{1}\overline{\zeta }_{2}){R}^{(u,v)}K_{ p}^{(u)} \\ & & \quad + \overline{\zeta }_{2}{R}^{(v)}c_{ p}^{(v)} -\overline{\zeta }_{ 2}{R}^{(u,v)}c_{ p}^{(u)} =\sigma _{ 2}^{2} + \overline{\zeta }_{ 2}^{2} {}\end{array}$$
(56)
$$\displaystyle\begin{array}{rcl} & & \quad c_{p}^{(u)} = \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(v)}\overline{\zeta }_{ 1} + {R}^{(u,v)}\overline{\zeta }_{ 2}) -\overline{\zeta }_{1}K_{p}^{(u)}{}\end{array}$$
(57)
$$\displaystyle\begin{array}{rcl} & & \quad c_{p}^{(v)} = \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(u,v)}\overline{\zeta }_{ 1} + {R}^{(u)}\overline{\zeta }_{ 2}) -\overline{\zeta }_{2}K_{p}^{(v)}{}\end{array}$$
(58)
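For concreteness, the system (55)–(58) can be assembled and solved as a 4 × 4 linear system; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Illustrative (assumed) scalar data
Ru, Rv, Ruv = 3.0, 2.0, 0.7
zb1, zb2, s1, s2, rho = 1.0, -1.0, 1.5, 0.8, 0.4
D = Ru * Rv - Ruv**2
m12 = rho * s1 * s2 + zb1 * zb2            # E(zeta_1 zeta_2)

# Unknowns ordered as x = (K_u, K_v, c_u, c_v); rows encode (55)-(58)
A = np.array([
    [(s1**2 + zb1**2) * Ru, -m12 * Ruv,            zb1 * Ru,   -zb1 * Ruv],
    [-m12 * Ruv,            (s2**2 + zb2**2) * Rv, -zb2 * Ruv,  zb2 * Rv],
    [zb1,                   0.0,                   1.0,         0.0],
    [0.0,                   zb2,                   0.0,         1.0],
])
b = np.array([
    s1**2 + zb1**2,
    s2**2 + zb2**2,
    (Rv * zb1 + Ruv * zb2) / D,
    (Ruv * zb1 + Ru * zb2) / D,
])
Ku, Kv, cu, cv = np.linalg.solve(A, b)
```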

Compare the optimal prior commitment strategies specified in (55)–(58) and the delayed commitment strategies explicitly specified in (35)–(38). The optimization problem is static and therefore the prior commitment and delayed commitment strategies are all the same:

$$\displaystyle\begin{array}{rcl} K_{p}^{(u)} = {K}^{(u)},\ K_{ p}^{(v)} = {K}^{(v)},\ c_{ p}^{(u)} = {c}^{(u)},\ c_{ p}^{(v)} = {c}^{(v)}& & {}\\ \end{array}$$

So the two sets of formulae (35)–(38) and (55)–(58) give rise to interesting identities. In particular, in the multivariate case new matrix identities will be obtained.

Taking a game theoretic approach naturally leads to the concept of delayed commitment strategies. Although the prior commitment strategies and delayed commitment strategies are equivalent, the above example illustrates that it is much easier to calculate the latter.

7 Asymmetric Players

We investigate scenarios where one team member is strongly informationally disadvantaged relative to the other.

7.1 Asymmetric Players: Case 1

Assume the u-player has perfect information, that is, he is privy to the state of nature ζ = (ζ 1, ζ 2), whereas the v-player has access to ζ 2 only. At the same time, the u-player knows that the v-player has the prior information \(\overline{\zeta }_{1}\), \(\overline{\zeta }_{2}\), ρ, σ 1, and σ 2; in fact, and in the best tradition of Bayesian games, it is tacitly assumed that both players are simultaneously presented the prior information before the game starts—the prior information is public information.

In this case the u-player’s payoff is

$$\displaystyle\begin{array}{rcl}{ J}^{(u)}(u,v(\cdot );\zeta )& =& E_{\zeta }(J(u,v(\zeta _{ 2});\zeta )\mid \zeta ) {}\\ & =& J(u,v(\zeta _{2});\zeta ), {}\\ \end{array}$$

that is, in the case of perfect information the u-player need not calculate an expectation; v(ζ 2) is the unknown input of the v-player.

If the payoff function J is quadratic,

$$\displaystyle\begin{array}{rcl}{ J}^{(u)}(u,v(\cdot );\zeta ) = -{u}^{T}{R}^{(u)}u - {v}^{T}(\zeta _{ 2}){R}^{(v)}v(\zeta _{ 2}) + 2{v}^{T}(\zeta _{ 2}){R}^{(u,v)}u + 2{u}^{T}\zeta _{ 1} + 2{v}^{T}(\zeta _{ 2})\zeta _{2}& & {}\\ \end{array}$$

and differentiation in u yields the relationship

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1},\zeta _{2}) = {({R}^{(u)})}^{-1}[{({R}^{(u,v)})}^{T}{v}^{{\ast}}(\zeta _{ 2}) +\zeta _{1}]& & {}\\ \end{array}$$

The v-player’s payoff function is

$$\displaystyle\begin{array}{rcl}{ J}^{(v)}(u(\cdot ),v;\zeta )& =& -{v}^{T}{R}^{(v)}v + 2{v}^{T}\zeta _{ 2} + E_{\zeta }(-{u}^{T}(\zeta ){R}^{(u)}u(\zeta ) {}\\ & & \quad + 2{v}^{T}{R}^{(u,v)}u(\zeta ) + 2{u}^{T}(\zeta )\zeta _{1}\mid \zeta _{2}) {}\\ \end{array}$$

and differentiating it in v yields the relationship

$$\displaystyle\begin{array}{rcl}{ R}^{(v)}{v}^{{\ast}}(\zeta _{ 2})& =& \zeta _{2} + {R}^{(u,v)}E_{\zeta }({u}^{{\ast}}(\zeta )\mid \zeta _{ 2}) {}\\ & =& \zeta _{2} + {R}^{(u,v)}E_{\zeta }({({R}^{(u)})}^{-1}[{({R}^{(u,v)})}^{T}{v}^{{\ast}}(\zeta _{ 2}) +\zeta _{1}]\mid \zeta _{2}) {}\\ & =& \zeta _{2} + {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{v}^{{\ast}}(\zeta _{ 2}) + {R}^{(u,v)}{({R}^{(u)})}^{-1}E_{\zeta }(\zeta _{ 1}\mid \zeta _{2}) {}\\ & =& \zeta _{2} + {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{v}^{{\ast}}(\zeta _{ 2}) + {R}^{(u,v)}{({R}^{(u)})}^{-1}E_{\zeta _{ 1}}(\zeta _{1}\mid \zeta _{2}) {}\\ & =& \zeta _{2} + {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{v}^{{\ast}}(\zeta _{ 2}) + {R}^{(u,v)}{({R}^{(u)})}^{-1} {}\\ & & (\overline{\zeta }_{1} + P_{1,2}P_{2,2}^{-1}(\zeta _{ 2} -\overline{\zeta }_{2})) {}\\ \end{array}$$

Hence,

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{2})& =& {[{R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}]}^{-1}[I + {R}^{(u,v)}{({R}^{(u)})}^{-1}P_{1,2}P_{2,2}^{-1}]\zeta _{2} {}\\ & & +{[{R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}]}^{-1}{R}^{(u,v)}{({R}^{(u)})}^{-1}\left (\overline{\zeta }_{1} - P_{1,2}P_{2,2}^{-1}\overline{\zeta }_{2}\right ){}\\ \end{array}$$
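As a sanity check, the fixed-point relation derived above can be verified numerically. The sketch below is illustrative only: all problem data (weights, covariance blocks, means, and the observed ζ 2) are hypothetical and randomly generated, with R (u) and R (v) taken symmetric positive definite and the coupling R (u, v) small so that the bracketed matrix is invertible. It solves the linear equation for v (ζ 2) and confirms the v-player's stationarity condition R (v) v  = ζ 2 + R (u, v) E(u  ∣ ζ 2).

```python
import numpy as np

rng = np.random.default_rng(0)
mu_dim, mv_dim = 3, 2  # dimensions of u (and zeta_1) and of v (and zeta_2)

# Hypothetical weights: R_u, R_v symmetric positive definite, small coupling
# R_uv so that R_v - R_uv R_u^{-1} R_uv^T is invertible.
A = rng.standard_normal((mu_dim, mu_dim))
R_u = A @ A.T + mu_dim * np.eye(mu_dim)
B = rng.standard_normal((mv_dim, mv_dim))
R_v = B @ B.T + mv_dim * np.eye(mv_dim)
R_uv = 0.3 * rng.standard_normal((mv_dim, mu_dim))

# Hypothetical Gaussian prior: means and covariance blocks P12 (cross), P22.
C = rng.standard_normal((mu_dim + mv_dim, mu_dim + mv_dim))
P = C @ C.T + np.eye(mu_dim + mv_dim)
P12, P22 = P[:mu_dim, mu_dim:], P[mu_dim:, mu_dim:]
zbar1, zbar2 = rng.standard_normal(mu_dim), rng.standard_normal(mv_dim)

zeta2 = rng.standard_normal(mv_dim)  # realization observed by the v-player
cond_z1 = zbar1 + P12 @ np.linalg.solve(P22, zeta2 - zbar2)  # E(zeta_1 | zeta_2)

# Solve the linear fixed-point equation for v*(zeta_2).
Ru_inv = np.linalg.inv(R_u)
S = R_v - R_uv @ Ru_inv @ R_uv.T
v_star = np.linalg.solve(S, zeta2 + R_uv @ Ru_inv @ cond_z1)

# u-player's best response, with zeta_1 replaced by its conditional mean.
u_star = Ru_inv @ (R_uv.T @ v_star + cond_z1)

# Stationarity of the v-player: R_v v* = zeta_2 + R_uv E(u* | zeta_2).
residual = R_v @ v_star - (zeta2 + R_uv @ u_star)
print(np.linalg.norm(residual))  # ~ 0 (machine precision)
```

Solving the linear system with `np.linalg.solve` avoids forming the inverse of the bracketed matrix explicitly, which is both cheaper and numerically better conditioned.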

In the special case of scalar inputs and a bivariate normal distribution (16), the optimal strategy of the v-player is

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}}(\zeta _{ 2}) = \frac{{R}^{(u)} +\rho \frac{\sigma _{1}} {\sigma _{2}} {R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\zeta _{2} + \frac{{R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left (\overline{\zeta }_{1} -\rho \frac{\sigma _{1}} {\sigma _{2}}\overline{\zeta }_{2}\right ),& & {}\\ \end{array}$$

provided that \({R}^{(u,v)}\) is not the geometric mean of \({R}^{(u)}\) and \({R}^{(v)}\). This condition holds whenever the quadratic payoff function is concave in the joint control variable (u, v), whereupon \({R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2} > 0\). The optimal strategy of the u-player is

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1},\zeta _{2})& =& \frac{1} {{R}^{(u)}}\zeta _{1} + {R}^{(u,v)} \frac{1 +\rho \frac{\sigma _{1}} {\sigma _{2}} \frac{{R}^{(u,v)}} {{R}^{(u)}} } {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\zeta _{2} {}\\ & & + \frac{\frac{{({R}^{(u,v)})}^{2}} {{R}^{(u)}} } {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left (\overline{\zeta }_{1} -\rho \frac{\sigma _{1}} {\sigma _{2}}\overline{\zeta }_{2}\right ) {}\\ \end{array}$$

Interestingly, although the u-player has complete state of nature information, his or her optimal strategy is affine and also makes use of the public prior information. As for the strategy of the informationally disadvantaged v-player, certainty equivalence holds.
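The scalar closed-form strategies can be checked directly against the two first-order conditions. The numbers below are hypothetical; the check uses the bivariate-normal conditional mean \(E(\zeta _{1}\mid \zeta _{2}) = \overline{\zeta }_{1} +\rho \frac{\sigma _{1}} {\sigma _{2}} (\zeta _{2} -\overline{\zeta }_{2})\).

```python
# Hypothetical scalar data: weights with R_u*R_v - R_uv**2 > 0 and a
# bivariate normal prior (means, standard deviations, correlation rho).
R_u, R_v, R_uv = 2.0, 3.0, 1.0
zbar1, zbar2, s1, s2, rho = 0.5, -1.0, 1.5, 2.0, 0.4
D = R_u * R_v - R_uv**2
k = rho * s1 / s2  # slope of E(zeta_1 | zeta_2) for the bivariate normal

zeta1, zeta2 = 0.7, 1.3  # one realization of the state of nature

# Closed-form strategies from the text.
v_star = (R_u + k * R_uv) / D * zeta2 + R_uv / D * (zbar1 - k * zbar2)
u_star = (zeta1 / R_u + R_uv * (1 + k * R_uv / R_u) / D * zeta2
          + (R_uv**2 / R_u) / D * (zbar1 - k * zbar2))

# u-player's first-order condition: R_u u* = R_uv v* + zeta_1.
assert abs(R_u * u_star - (R_uv * v_star + zeta1)) < 1e-12

# v-player's first-order condition: R_v v* = zeta_2 + R_uv E(u* | zeta_2),
# where E(u* | zeta_2) replaces zeta_1 by E(zeta_1 | zeta_2).
cond_z1 = zbar1 + k * (zeta2 - zbar2)
u_cond = u_star - (zeta1 - cond_z1) / R_u  # u* is affine in zeta_1
assert abs(R_v * v_star - (zeta2 + R_uv * u_cond)) < 1e-12
print("both first-order conditions hold")
```

Since u (ζ 1, ζ 2) is affine in ζ 1 with coefficient 1∕R (u), conditioning on ζ 2 amounts to substituting the conditional mean for ζ 1, which is what the `u_cond` line does.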

7.2 Asymmetric Players: Case 2

As in Sects. 1–6, the private information of the u-player is the ζ 1 component of the state of nature vector ζ. However, we now assume that the v-player has no private information and is totally dependent on the public prior information. As in Sect. 7.1, the u-player is aware that the public information is available to the v-player, and he or she also knows that the v-player is “blind.”

The v-player’s payoff is

$$\displaystyle\begin{array}{rcl}{ J}^{(v)}(u(\cdot ),v)& =& 2{v}^{T}E_{\zeta }(\zeta _{ 2}) - {v}^{T}{R}^{(v)}v + 2{v}^{T}{R}^{(u,v)}E_{\zeta }(u(\zeta _{ 1})) {}\\ & & +E_{\zeta _{1}}(2{u}^{T}(\zeta _{ 1})\zeta _{1} - {u}^{T}(\zeta _{ 1}){R}^{(u)}u(\zeta _{ 1})) {}\\ \end{array}$$

and differentiation in v yields the unique optimal control response to the u-player’s strategy u(ζ 1),

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}} = {({R}^{(v)})}^{-1}E_{\zeta }(\zeta _{ 2}) + {({R}^{(v)})}^{-1}{R}^{(u,v)}E_{\zeta }(u(\zeta _{ 1}))& &{}\end{array}$$
(59)

The expectation E ζ (ζ 2) in (59) is calculated as follows.

$$\displaystyle\begin{array}{rcl} E_{\zeta }(\zeta _{2})& =& \int _{-\infty }^{\infty }\int _{ -\infty }^{\infty }\zeta _{ 2}f(\zeta _{1},\zeta _{2})d\zeta _{1}d\zeta _{2} {}\\ & =& \int _{-\infty }^{\infty }\zeta _{ 2}\left (\int _{-\infty }^{\infty }f(\zeta _{ 1},\zeta _{2})d\zeta _{1}\right )d\zeta _{2} {}\\ & =& \int _{-\infty }^{\infty }\zeta _{ 2}f_{\mathrm{m}}(\zeta _{2})d\zeta _{2} {}\\ & =& \overline{\zeta }_{2}, {}\\ \end{array}$$

where \(f(\zeta _{1},\zeta _{2})\) is the p.d.f. of the state of nature Gaussian random variable ζ and f m(ζ 2) is a marginal Gaussian p.d.f. of f(ζ 1, ζ 2). Recall that to obtain the marginal distribution over a subset of the components of a multivariate normal random variable, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. For example, in the bivariate normal case the (Gaussian) marginal p.d.f. \(f_{\mathrm{m}}(\zeta _{1})\) is characterized by the parameters \((\overline{\zeta }_{1},\sigma _{1})\) and the marginal p.d.f. f m(ζ 2) is characterized by the parameters \((\overline{\zeta }_{2},\sigma _{2})\). Similarly,

$$\displaystyle\begin{array}{rcl} E_{\zeta }(u(\zeta _{1})) =\int _{ -\infty }^{\infty }u(\zeta _{ 1})f_{\mathrm{m}}(\zeta _{1})d\zeta _{1}& & {}\\ \end{array}$$

Thus,

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}} = {({R}^{(v)})}^{-1}\overline{\zeta }_{ 2} + {({R}^{(v)})}^{-1}{R}^{(u,v)}\int _{ -\infty }^{\infty }u(\zeta _{ 1})f_{\mathrm{m}}(\zeta _{1})d\zeta _{1}& &{}\end{array}$$
(60)
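The marginalization step behind (60) can be cross-checked numerically. The sketch below uses hypothetical parameter values: it forms the marginal \(N(\overline{\zeta }_{2},\sigma _{2}^{2})\) by dropping the ζ 1 entries of the mean vector and covariance matrix, and then confirms \(\int \zeta _{2}f_{\mathrm{m}}(\zeta _{2})d\zeta _{2} = \overline{\zeta }_{2}\) by a Riemann sum over a wide grid.

```python
import numpy as np

# Hypothetical bivariate normal prior: mean vector and covariance matrix.
zbar = np.array([0.5, -1.0])  # (zbar_1, zbar_2)
s1, s2, rho = 1.5, 2.0, 0.4
P = np.array([[s1**2,         rho * s1 * s2],
              [rho * s1 * s2, s2**2        ]])

# Marginal p.d.f. of zeta_2: drop the zeta_1 entries -> N(zbar_2, s2^2).
m2, var2 = zbar[1], P[1, 1]

# Cross-check E(zeta_2) = integral of zeta_2 * f_m(zeta_2) via a Riemann sum.
grid = np.linspace(m2 - 10 * s2, m2 + 10 * s2, 200_001)
dz = grid[1] - grid[0]
f_m = np.exp(-(grid - m2) ** 2 / (2 * var2)) / np.sqrt(2 * np.pi * var2)
mean_numeric = np.sum(grid * f_m) * dz
print(mean_numeric)  # ~ zbar_2 = -1.0
```

The grid extends ten standard deviations on either side of the mean, so the truncation error of the integral is negligible compared with the discretization error.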

The u-player’s payoff is

$$\displaystyle\begin{array}{rcl}{ J}^{(u)}(u,v;\zeta _{ 1}) = 2{u}^{T}\zeta _{ 1} - {u}^{T}{R}^{(u)}u + 2{u}^{T}{({R}^{(u,v)})}^{T}v - {v}^{T}{R}^{(v)}v + 2{v}^{T}E_{\zeta _{ 2}}(\zeta _{2}\mid \zeta _{1})& & {}\\ \end{array}$$

Note: Now, as far as the u-player is concerned, the v-player does not employ a strategy; therefore the v-player’s input v is no longer a random variable and one need not compute an expectation: the u-player knows that the v-player is “blind.”

Differentiation in u yields the unique optimal control response to the v-player’s input v

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1}) = {({R}^{(u)})}^{-1}\zeta _{ 1} + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}v& &{}\end{array}$$
(61)

Combining (60) and (61) yields the relationship

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}} = {({R}^{(v)})}^{-1}\overline{\zeta }_{ 2} + {({R}^{(v)})}^{-1}{R}^{(u,v)}[{({R}^{(u)})}^{-1}\overline{\zeta }_{ 1} + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{v}^{{\ast}}],& & {}\\ \end{array}$$

that is, the v-player’s optimal control is

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}} = {[{R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}]}^{-1}[{R}^{(u,v)}{({R}^{(u)})}^{-1}\overline{\zeta }_{ 1} + \overline{\zeta }_{2}]& & {}\\ \end{array}$$

and the u-player’s optimal strategy is

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& {({R}^{(u)})}^{-1}\zeta _{ 1} + {({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}{[{R}^{(v)} - {R}^{(u,v)}{({R}^{(u)})}^{-1}{({R}^{(u,v)})}^{T}]}^{-1} {}\\ & & [{R}^{(u,v)}{({R}^{(u)})}^{-1}\overline{\zeta }_{ 1} + \overline{\zeta }_{2}] {}\\ \end{array}$$
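The matrix-valued pair above can be checked as a mutual best response. The sketch below uses hypothetical randomly generated data (symmetric positive definite R (u), R (v) and a small coupling R (u, v), so the bracketed matrix is invertible) and verifies that the closed-form v, together with the best response (61), satisfies (59).

```python
import numpy as np

rng = np.random.default_rng(1)
mu_dim, mv_dim = 3, 2

# Hypothetical weights: SPD R_u, R_v with a small coupling R_uv so that
# R_v - R_uv R_u^{-1} R_uv^T is invertible.
A = rng.standard_normal((mu_dim, mu_dim))
R_u = A @ A.T + mu_dim * np.eye(mu_dim)
B = rng.standard_normal((mv_dim, mv_dim))
R_v = B @ B.T + mv_dim * np.eye(mv_dim)
R_uv = 0.3 * rng.standard_normal((mv_dim, mu_dim))
zbar1, zbar2 = rng.standard_normal(mu_dim), rng.standard_normal(mv_dim)

# v-player's constant optimal control from the closed form above.
Ru_inv = np.linalg.inv(R_u)
S = R_v - R_uv @ Ru_inv @ R_uv.T
v_star = np.linalg.solve(S, R_uv @ Ru_inv @ zbar1 + zbar2)

def u_star(z1):
    # Equation (61): u-player's best response to the constant input v*.
    return Ru_inv @ (z1 + R_uv.T @ v_star)

# u* is affine in zeta_1, so E(u*(zeta_1)) = u*(zbar_1); check (59):
# R_v v* = zbar_2 + R_uv E(u*(zeta_1)).
residual = R_v @ v_star - (zbar2 + R_uv @ u_star(zbar1))
print(np.linalg.norm(residual))  # ~ 0 (machine precision)
```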

If the controls are scalars,

$$\displaystyle\begin{array}{rcl}{ u}^{{\ast}}(\zeta _{ 1})& =& \frac{1} {{R}^{(u)}}\zeta _{1} + \frac{{R}^{(u,v)}} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}\left (\frac{{R}^{(u,v)}} {{R}^{(u)}} \overline{\zeta }_{1} + \overline{\zeta }_{2}\right ) {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl}{ v}^{{\ast}} = \frac{1} {{R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2}}({R}^{(u,v)}\overline{\zeta }_{ 1} + {R}^{(u)}\overline{\zeta }_{ 2})& & {}\\ \end{array}$$
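The scalar pair can likewise be verified as a mutual best response with a few lines of arithmetic; the numbers below are hypothetical, chosen so that \({R}^{(u)}{R}^{(v)} - {({R}^{(u,v)})}^{2} > 0\).

```python
# Hypothetical scalar data: weights with R_u*R_v - R_uv**2 > 0 and prior means.
R_u, R_v, R_uv = 2.0, 3.0, 1.0
zbar1, zbar2 = 0.5, -1.0
D = R_u * R_v - R_uv**2

# Closed-form controls from the text ("blind" v-player).
v_star = (R_uv * zbar1 + R_u * zbar2) / D

def u_star(z1):
    return z1 / R_u + R_uv / D * (R_uv / R_u * zbar1 + zbar2)

# u*(zeta_1) is affine, so E(u*(zeta_1)) = u*(zbar_1); check (59) and (61).
E_u = u_star(zbar1)
assert abs(v_star - (zbar2 + R_uv * E_u) / R_v) < 1e-12
assert abs(u_star(0.7) - (0.7 + R_uv * v_star) / R_u) < 1e-12
print("mutual best-response conditions hold")
```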

In conclusion, in the case where the v-player is “blind,” the strategy of the u-player is as if there were no correlation, that is, as if the parameter ρ = 0, as in Corollary 4. As far as the v-player is concerned, certainty equivalence holds. A little thought will convince the reader that these results are to be expected.

8 Conclusion

The static decentralized decision problem has been analyzed, with special attention given to the multivariate Quadratic Gaussian (QG) payoff function. The optimization problem is static, yet the players have partial information; as such, this is a small step away from the celebrated LQG paradigm. Informational issues, prior-commitment versus delayed-commitment strategies, and Nash equilibrium solution concepts are discussed. Necessary and sufficient conditions for the existence of a solution are provided and the optimal strategies are calculated. Extreme cases of informational asymmetry are also explored. This work lays the groundwork for a better understanding of optimization problems with partial information in which dynamics are also at play.