1 Introduction

Linear-quadratic stochastic differential games (LQ games, in short) are games in which players are allowed to affect a linear stochastic differential equation with an additive control, with the aim of minimizing a cost functional which is quadratic both in the state and in the control. For example, we can consider the game in which, for \(i=1,...,N\), player i can choose a control \(\alpha ^i\) and faces the control problem:

$$\begin{aligned} \text {(LQ game)}&\mathop {\textrm{Minimize}}\limits _{{\alpha ^i}} \ \mathbb {E} \bigg [ \int _0^T \big ( X_t Q^i_t X_t + c_t^i (\alpha _t^i)^2 \big ) dt + X_T Q^i_T X_T \bigg ], \\&\text {subject to } dX_t^{j}= ( a^j_t + b^j_t X_t^{j} + \alpha ^j_t)dt + \sigma _t^j dW_{t}^j, \ X_{0}^{j}=x_0^j, \ j=1,...,N. \end{aligned}$$

Here, \(X=(X^1,...,X^N)\) denotes the vector of states of the players, which is affected by the Brownian motions \((W^j)_j\), while, for \(y\in \mathbb {R}^N\), \(yQ_t^i y\) denotes the product \( \sum _{k,j} q_t^{k,j;i} y^k y^j\), with \(q_t^{k,j;i}\) denoting the entries of the matrix \(Q_t^i\).
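For instance, for \(N=2\) and symmetric \(Q^i_t\), the quadratic form in the cost of player i expands as

$$\begin{aligned} y Q^i_t y = q_t^{1,1;i} (y^1)^2 + 2 q_t^{1,2;i} \, y^1 y^2 + q_t^{2,2;i} (y^2)^2, \end{aligned}$$

so that the off-diagonal entries of \(Q^i_t\) encode the interaction between the players' states.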

LQ games have received considerable attention in the literature. Indeed, this class of models represents a rare example of explicitly solvable games (see Carmona 2016), and it therefore serves as a benchmark theoretical tool for describing systems of interacting agents in many applications, ranging from economics and engineering to finance and biology (see Carmona 2016 and the references therein).

However, assuming the cost to be quadratic in the control is rather unrealistic in many applications, where the price of an intervention would rather be linear in its size. We refer to Section 3, Chapter 11 in Dixit and Pindyck (1994) for a more detailed discussion on the classification of adjustment costs and on the criticism of strictly convex costs. Moreover, quadratic costs lead to equilibrium strategies which are absolutely continuous, even though there is empirical evidence that lump-sum strategies may arise. For example, in the context of investment behaviours, Stokey (2008) highlights the significance of lumpy adjustments by quoting establishment-level data gathered by the U.S. Census Bureau on 13,700 manufacturing plants over 1972–1988: a large share of the firms in the sample display an episode of significant adjustment (37% or more), and one-fourth of aggregate investment concentrates on plants that are increasing their capital stock by more than 30%.

To illustrate the relevance of linear costs, we can mention models of capacity expansion in oligopoly markets. Consider N companies producing a certain good and selling it in the market. Each company can adjust its production capacity to follow the market fluctuations of the demand, in order to maximize a net profit. Such a profit consists of the revenues from selling, which correspond to the individual production multiplied by the market price (itself affected by the production of all the firms in the market), minus the investment expenditures that are necessary to adjust the firm's production capacity. Thus, in this case it is reasonable to assume the cost of an investment to be linear in the capacity expansion (as is done in Back and Paulsen 2009; Grenadier 2002; Steg 2012). Other examples in which the cost of intervention is not quadratic arise in resource allocation problems (see Gao et al. 2018; Georgiadis et al. 2006), inventory management (see Federico et al. 2023), operations research (see Guo et al. 2011; Harrison and Taksar 1983), queuing theory (see Krichagina and Taksar 1992), mathematical biology (see Alvarez and Shepp 1998), and so on.

All these examples represent the main motivation to study, in a systematic way, stochastic differential games in which each player can control a linear stochastic differential equation in order to minimize a cost which is quadratic in the state and linear in the control. From the mathematical point of view, replacing the quadratic cost \(c_t^i (\alpha _t^i)^2\) with a linear one (say, \(c_t^i |\alpha _t^i|\)) requires the introduction of so-called singular controls: namely, the additive control term \(\int _0^t\alpha _s^i ds\) is replaced with a càdlàg (i.e., right continuous with left limits) bounded variation process \(v_t^i\) (i.e., the singular control). Thus, the LQ game considered above is replaced with the game in which each player i can choose a bounded variation control \(v^i\) and faces the singular control problem:

$$\begin{aligned}&\text {(LQS game)} \ \mathop {\textrm{Minimize}}\limits _{{v^i}} \ \mathbb {E} \bigg [ \int _0^T X_t Q^i_t X_t dt + X_T Q^i_T X_T + \int _{[0,T]} c_t^i d|v^i|_t \bigg ], \\&\text {subject to } dX_t^{j}= ( a^j_t + b^j_t X_t^{j} )dt + \sigma _t^j dW_{t}^j + dv_t^j, \ X_{0-}^{j}=x_0^j, \ j=1,...,N. \end{aligned}$$

Here \(|v^j|\) denotes the total variation of the process \(v^j\). We call these games linear-quadratic-singular stochastic differential games (LQS games, in short).
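To fix ideas, the state dynamics and the quadratic-singular cost can be sketched numerically. The following minimal Python sketch treats a single player with hypothetical constant coefficients a, b, sigma, q, c, and takes the bounded variation control v to be a finite sum of lump-sum jumps; the function name and the discretization are ours, not part of the model.

```python
import numpy as np

def lqs_cost(a, b, sigma, x0, q, c, jumps, T=1.0, n_steps=1000, rng=None):
    """One sample path of a single player's LQS cost
        J = int_0^T q X_t^2 dt + q X_T^2 + int_[0,T] c d|v|_t,
    with dX_t = (a + b X_t) dt + sigma dW_t + dv_t and X_{0-} = x0,
    where v is a finite sum of lump-sum jumps (hypothetical coefficients)."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    dv = np.zeros(n_steps + 1)
    for t_j, size in jumps:                     # jumps: list of (time, signed size)
        dv[min(int(t_j / dt), n_steps)] += size
    x = x0 + dv[0]                              # possible jump at t = 0: X_0 = x0 + dv_0
    running, variation = 0.0, abs(dv[0])
    for k in range(1, n_steps + 1):
        running += q * x * x * dt               # left-endpoint quadrature of the running cost
        x += (a + b * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal() + dv[k]
        variation += abs(dv[k])
    return running + q * x * x + c * variation  # running + terminal + proportional cost
```

For instance, with no noise and no intervention the cost reduces to the deterministic quadratic terms, while a lump-sum jump at time 0 trades the proportional cost of the intervention against the quadratic running cost.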

The main objective of this paper is to show, under fairly general assumptions, the existence of open-loop Nash equilibria for LQS games. The proof of this result hinges on an approximation technique and on the use of the stochastic maximum principle. In particular, we introduce a sequence of approximating games where, for any \(n\in \mathbb {N}\), players are restricted to picking strategies with Lipschitz constant bounded by n. For fixed n, this approximating problem can be reformulated in terms of a more standard stochastic differential game, which falls into the class of games with bang-bang controls (see Hamadene and Mannucci 2019; Hamadène and Mu 2014; Mannucci 2004, 2014); that is, depending on the state of the system, players at equilibrium either do nothing or act at the maximum rate allowed. Thanks to the results in Hamadene and Mannucci (2019), the existence of a Nash equilibrium \(\eta ^n = (\eta ^{1,n},...,\eta ^{N,n})\) for the game with n-Lipschitz controls can be established. By assuming some conditions on the coefficients of the matrices \(Q^i\), we then show some a priori estimates on the sequence \((\eta ^n)_n\). Indeed, these requirements on the \(Q^i\) translate into a coercivity condition on the space of profile strategies, which ensures that the Nash equilibria, whenever they exist, always live in a bounded set. These estimates allow us to find an accumulation point \(\eta \) of the sequence \((\eta ^n)_n\). Since, for each n, \(\eta ^n\) is a Nash equilibrium of the game with n-Lipschitz strategies, by the necessary conditions of the stochastic maximum principle, it can be expressed as the solution of a certain forward-backward stochastic differential equation. We then take limits in such a system in order to prove that the limit point \(\eta \) satisfies a set of conditions (in the spirit of the stochastic maximum principle), which in turn ensures that \(\eta \) is a Nash equilibrium.
Indeed, as a byproduct of our result, one obtains the existence of a solution to the system of forward-backward stochastic differential equations related to the equilibria.

As an application of our main result, we show the existence of equilibria in non-symmetric games of capacity expansion in oligopoly markets (see Back and Paulsen 2009; Steg 2012).

1.1 Related literature

A game with singular controls was first studied in Grenadier (2002), in order to derive symmetric equilibrium investment strategies in a continuous-time symmetric (i.e., when the cost functionals and the dynamics are the same for all players) option exercise game. This model was later revised in Back and Paulsen (2009), where the open-loop equilibrium is provided under a suitable specification of the model.

For singular control problems, Bank and Riedel (2001) introduced a system of first-order conditions characterizing the optimal policies (see also Ferrari 2015; Ferrari and Salminen 2016). These conditions represent a version of the stochastic maximum principle (see Peng 1990) in the context of singular control. Inspired by the earlier work Bank (2005), Steg (2012) considers irreversible investment problems in oligopoly markets, determines the equilibrium in the symmetric case, and characterizes (in the non-symmetric case) the open-loop equilibria through the first-order conditions. A similar approach is also followed in Ferrari et al. (2017) for a public good contribution game in which players are allowed to choose both a regular control and a singular control. A general characterization of open-loop Nash equilibria through the stochastic maximum principle approach has been investigated in Wang et al. (2018) for regular-singular stochastic differential games. The existence of equilibria in a non-symmetric game with multi-dimensional singular controls and non-Markovian costs has been established in Dianetti and Ferrari (2020) when the costs satisfy the submodularity conditions (see Topkis 1979 for a seminal paper on static N-player submodular games). The submodularity property represents, roughly speaking, the situation in which players have an incentive to imitate the behaviour of their opponents, and it is widely used in the economic literature (see Topkis 2011; Vives 1999). In comparison to these works, the present paper establishes the first existence result for open-loop Nash equilibria in non-symmetric LQS games (unlike Back and Paulsen 2009; Ferrari et al. 2017; Grenadier 2002, which require symmetry), without enforcing the submodularity structure (unlike Dianetti and Ferrari 2020).

We conclude this section with a literature review on Markovian equilibria and mean field game equilibria. The study of Markovian equilibria in games with singular controls seems to be particularly challenging (see the discussion in Section 2 of Back and Paulsen (2009)). Indeed, following the dynamic programming principle approach, finding a Markovian equilibrium means constructing a solution of a related reflected stochastic differential equation, on which little is known even for the control problem (see Boryc and Kruk 2016; Dianetti and Ferrari 2023; Kruk 2000). However, we can mention a few contributions. By showing a verification theorem, Guo et al. (2022); Guo and Xu (2019) discuss some sufficient conditions for Nash equilibria in terms of a system of partial differential equations, and construct a Markovian equilibrium in a linear-quadratic symmetric game. When two players act on the same one-dimensional diffusion, Nash equilibria are computed in Kwon (2022), while connections between nonzero-sum games of singular control and games of optimal stopping have been tackled in De Angelis and Ferrari (2018). We also mention Cont et al. (2021), where Pareto optima are analysed, and Bovo et al. (2022); Hernandez-Hernandez et al. (2015); Kwon and Zhang (2015) for other types of games involving singular controls. When the number of players is very large, equilibria can be approximated via mean field games (see Huang et al. 2006; Lasry and Lions 2007). In the singular control case, the abstract existence of mean field game equilibria is studied under general conditions in Guo and Lee (2022); Fu (2023); Fu and Horst (2017) and, for submodular mean field games, in Dianetti et al. (2023). A more explicit analysis is instead provided in Campi et al. (2022); Cao and Guo (2022); Guo and Xu (2019) and in Cao et al. (2022), both for the discounted infinite horizon problem and in the case of ergodic costs. We also mention the recent He et al. (2023), which provides a representation theorem for the equilibria.

Outline of the paper. The rest of the paper is organized as follows. In Sect. 2 we introduce the probabilistic setup for LQS games and discuss some preliminary results. Section 3 is devoted to the existence theorem for Nash equilibria, while in Sect. 4 we present an application to oligopoly games.

2 Linear-quadratic-singular stochastic differential games

2.1 The game

Fix \(N \in \mathbb {N}\), \(N \ge 2\), a finite time horizon \(T\in (0,\infty )\), and consider an N-dimensional Brownian motion \(W=(W^1,...,W^N)\), defined on a complete probability space \((\Omega , \mathcal {F},\mathbb {P})\). Denote by \(\mathbb {F}= ( {\mathcal F}_t)_t\) the right-continuous extension of the filtration generated by W, augmented by the \(\mathbb {P}\)-null sets.

Consider a game with N players, indexed by \(i \in \{ 1,...,N \}\). The filtration \(\mathbb {F}\) represents the flow of information available to players. When player i does not intervene, its state \(X^i\) evolves accordingly to the linear stochastic differential equation

$$\begin{aligned} dX_t^{i}= ( a^i_t + b^i_tX_t^i )dt + \sigma ^i_t dW_{t}^i, \quad X_{0}^{i}=x^i_0. \end{aligned}$$
(1)

The drift and the volatility of \(X^i\) are given in terms of deterministic bounded measurable functions \(a^i,b^i: [0,T] \rightarrow \mathbb {R}\) and \(\sigma ^i: [0,T] \rightarrow [0,\infty )\), while the initial condition \(x_0^i\in \mathbb {R}\) is deterministic. Each player i is allowed to choose two controls \(\xi ^i\) and \(\zeta ^i\) in the set

$$\begin{aligned} \tilde{\mathcal {A}}:= \left\{ \, \xi :\Omega \times [0,T] \rightarrow [0,\infty ) \, \bigg | \, \begin{array} {l} \xi \text { is an } \mathbb {F}\text {-progressively measurable c\`adl\`ag} \\ \text {nondecreasing process, with }{\mathbb {E}}[ \xi _T ] < \infty \end{array} \, \right\} . \end{aligned}$$

Thus, the strategy of player i is given by the vector \(\eta ^i = (\xi ^i, \zeta ^i) \in \tilde{\mathcal {A}}^2\). Intuitively, player i uses the control \(\xi ^i\) to increase its state \(X^i\) and the control \(\zeta ^i\) to decrease it (see (2) below), and these two different actions might lead to different costs (see (3) below). We will denote by \(\eta := (\eta ^1,..., \eta ^N) \in \tilde{\mathcal {A}}^{2N}\) a vector of strategies, also referred to as a profile strategy. Given strategies \( \xi ^j, \zeta ^j \in \tilde{\mathcal {A}} \), \(j=1,...,N\), with slight abuse we will interchangeably use the notations \(((\xi ^1, \zeta ^1),...,(\xi ^N,\zeta ^N)) = (\xi ^1,...,\xi ^N,\zeta ^1,...,\zeta ^N) = (\xi , \zeta )\). Also, for a profile strategy \(\eta \in \tilde{\mathcal {A}}^{2N}\) and controls \(\bar{\xi }^i, \bar{\zeta }^i \in \tilde{\mathcal {A}}\), setting \(\bar{\eta }^i = (\bar{\xi }^i, \bar{\zeta }^i)\), we define the unilateral deviation for player i as \((\bar{\eta }^i,\eta ^{-i}) = ((\bar{\eta }^i,\eta ^{-i})^1,...,(\bar{\eta }^i,\eta ^{-i})^N)\) with

$$\begin{aligned} (\bar{\eta }^i,\eta ^{-i})^j:= {\left\{ \begin{array}{ll} \eta ^j &{} \text {if} \quad j \ne i, \\ (\bar{\xi }^i, \bar{\zeta }^i) &{} \text {if} \quad j = i. \end{array}\right. } \end{aligned}$$

A strategy \(\eta ^i = (\xi ^i, \zeta ^i) \in \tilde{\mathcal {A}}^2\) is said to be admissible if it is an element of

$$\begin{aligned} \mathcal {A}^2:= \Big \{ \eta ^i = (\xi ^i, \zeta ^i) \in \tilde{\mathcal {A}}^2 \text { with } v^i:= \xi ^i - \zeta ^i\text { satisfying } \begin{array} {l} \mathbb E \big [ \int _0^T |v_t^i|^2 dt + |v_T^i|^2 \big ] < \infty \end{array} \Big \}. \end{aligned}$$

Similarly, an admissible profile strategy is a vector \(\eta \in \mathcal {A}^{2N}\).

When the admissible profile strategy \(\eta \in \mathcal {A}^{2N}\) is chosen by the players, the controlled state \(X^\eta :=(X^{1,\eta },...,X^{N,\eta })\) of the system evolves as

$$\begin{aligned} dX_t^{i,\eta }= ( a^i_t + b^i_t X_t^{i,\eta })dt + \sigma _t^i dW_{t}^i + d\xi _{t}^i - d \zeta ^i_t, \quad X_{0-}^{i,\eta }=x_0^i, \quad i=1,...,N, \end{aligned}$$
(2)

where \(X_{0-}^{i,\eta }\) denotes the left limit in 0 of the process \(X^{i,\eta }\). Notice that the effect of the controls of the players is linear on the state, and that, for any \(\eta \in \mathcal {A}^{2N}\), there exists a unique strong solution \(X^\eta \) (we refer to Protter (2005) for further details).

Given admissible strategies \(\eta ^{-i} \in \mathcal {A}^{2(N-1)}\), the aim of player i is to choose \(\eta ^i = (\xi ^i,\zeta ^i) \in \mathcal {A}^2\) in order to minimize the quadratic-singular expected cost

$$\begin{aligned} J^i(\eta ^i,\eta ^{-i}):= \mathbb {E} \bigg [ \int _0^T X_t^\eta Q^i_t X_t^\eta dt + X_T^\eta Q^i_T X_T^\eta + \int _{[0,T]} (c_t^{i,+} d\xi ^{i}_{t} + c_t^{i,-} d\zeta ^{i}_{t}) \bigg ].\qquad \end{aligned}$$
(3)

Here, the \(\mathbb {R}^{N \times N}\) matrix \(Q_t^i= (q_t^{k,j;i})_{k,j}\) is given via bounded measurable functions \(q^{k,j;i}:[0,T] \rightarrow \mathbb {R}\), \(k,j=1,...,N\), and we set

$$\begin{aligned} y Q^i_t z {:}= \sum _{j,k=1}^N q_t^{k,j;i} y^k z^j, \quad y, z \in \mathbb {R}^N. \end{aligned}$$

The cost of increasing and decreasing the state process is given by continuous functions \(c^{i,+}, c^{i,-}:[0,T] \rightarrow [0,\infty )\), respectively. Also, since any càdlàg bounded variation process v can be identified with a Radon measure on [0, T], for any continuous function \(f:[0,T] \rightarrow \mathbb {R}\), the integrals with respect to v are defined by

$$\begin{aligned} \int _{[0,T]} f_t \, dv_t:= f_0 v_0 + \int _0^T f_t \, dv_t, \end{aligned}$$

where the integral on the right hand side is understood in the standard Lebesgue-Stieltjes sense on the interval (0, T]. Notice that, in light of the square integrability of the admissible strategies (see the definition of \(\mathcal {A}^2\)), the cost functional is well defined.
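As an elementary illustration of this convention, for a pure-jump process v the integral reduces to a sum over the atoms of v, with the atom at \(t=0\) included. A minimal sketch (the integrand and the jumps below are hypothetical):

```python
def bv_integral(f, jumps):
    """Compute int_[0,T] f_t dv_t for a pure-jump BV process v, given as a
    list of (time, signed jump size) pairs: each atom contributes
    f(t) * (jump size), the atom at t = 0 included, consistently with the
    convention f_0 v_0 + int_(0,T] f_t dv_t."""
    return sum(f(t) * dz for t, dz in jumps)

# v jumps by +2 at t = 0 and by -1 at t = 0.5; with f_t = 1 + t the integral
# equals 1 * 2 + 1.5 * (-1) = 0.5, the jump at t = 0 being counted.
value = bv_integral(lambda t: 1.0 + t, [(0.0, 2.0), (0.5, -1.0)])
```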

We will focus on the following notion of equilibrium.

Definition 1

An admissible profile strategy \( \eta \in \mathcal {A}^{2N}\) is an (open-loop) Nash equilibrium if

$$\begin{aligned} J^i ( \eta ^i, \eta ^{-i}) \le J^i ( \bar{\eta }^i, \eta ^{-i}), \quad \text {for any }\bar{\eta }^i \in \mathcal {A}^2, \end{aligned}$$

for any \(i =1,...,N\).

2.2 Stochastic maximum principle for LQS games

For later use, we now review some basic tools in the theory of stochastic singular control. In particular, we will introduce the adjoint processes and state a version of the stochastic maximum principle.

For a generic \(d \in \mathbb N\), define the set

$$\begin{aligned} \mathbb H ^{2,d}:= \Big \{ M: \Omega \times [0,T] \rightarrow \mathbb {R}^d \, \Big | \, \mathbb F\text {-progr. meas. process with } \begin{array}{l} \mathbb E \big [ \int _0^T |M_t|^2 dt \big ] < \infty \end{array} \Big \} \end{aligned}$$

and set \(\mathbb H ^{2}:=\mathbb H ^{2,1}\).

Given an admissible profile strategy \(\eta \), for any \(i=1,...,N\), define the adjoint process \(Y^{i,\eta } \in \mathbb H ^{2}\) as

$$\begin{aligned} Y_t^{i,\eta }:= 2 {\mathbb {E}}\bigg [ \Gamma ^i_{t,T} Q^{i;i}_T X^\eta _T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^\eta _s ds \bigg | \mathcal F _t \bigg ], \quad t \in [0,T], \end{aligned}$$
(4)

where

$$\begin{aligned} \Gamma _{s,t}^i: = \exp \Big ( \int _s^t b^i_r dr \Big ), \quad s,t \in [0,T]. \end{aligned}$$
(5)

Here and in the sequel, for \(i,j=1,...,N\), the row vector \(Q_t^{j;i} = (q_t^{j,1;i},...,q_t^{j,N;i})\) denotes the j-th row of the matrix \(Q^i_t\), and \(Q_t^{j;i} x\) denotes the product \(\sum _{k=1}^N q_t^{j,k;i} x^k\), \(x\in \mathbb {R}^N\).

Remark 1

For any fixed \(i=1,...,N\), consider (as in Wang et al. 2018) the backward stochastic differential equation (BSDE, in short)

$$\begin{aligned} d Y ^j_t = - ( 2 Q^{j;i}_t X^\eta _t + {b^j_t} Y_t^j) dt + Z_t^j dW_t, \quad Y_T^j = 2 Q^{j;i}_T X^\eta _T, \quad j=1,...,N, \end{aligned}$$

for which a solution is defined as a process \((Y,Z) = (Y^1,...,Y^N, Z^1,...,Z^N)\in \mathbb H^{2,N} \times \mathbb H ^{2,N \times N}\) which satisfies, for any \(j=1,...,N\), the integral equation

$$\begin{aligned} Y ^j_t = 2 Q^{j;i}_T X^\eta _T + \int _t^T ( 2 Q^{j;i}_s X^\eta _s + {b^j_s} Y_s^j) ds - \int _t^T Z_s^j dW_s, \quad t \in [0,T], \ \mathbb P\text {-a.s.} \end{aligned}$$

The solution \((Y,Z)\) to such a BSDE is typically referred to as the adjoint process (see Wang et al. 2018 for further details), and its Y-component admits the continuous explicit representation (4) (see Proposition 6.2.1 at p. 142 in Pham (2009)), thus justifying the name given to \(Y^{i,\eta }\).

The adjoint process \(Y^{i,\eta }\) can also be interpreted as the subgradient of the cost functional \(J^i (\cdot , \eta ^{-i})\). This observation is made rigorous in the following lemma, which will be used many times in the rest of this paper.

Lemma 2.1

For any \(i =1,...,N\), \(\eta = (\eta ^1,...,\eta ^N) = (\xi ,\zeta ) \in \mathcal {A}^{2N}\) and \((\bar{\xi }^i, \bar{\zeta }^i) \in \mathcal {A}^2\), we have

$$\begin{aligned} J^i(\bar{\xi }^i, \bar{\zeta }^i; \eta ^{-i}) - J^i(\xi ^i, \zeta ^i; \eta ^{-i}) \ge&{\mathbb {E}}\bigg [ \int _{[0,T]} Y_t^{i,\eta }d(\bar{v}^i - v^i)_t \\&+ \int _{[0,T]} c_t^{i,+} d(\bar{\xi }^i - \xi ^i)_t + \int _{[0,T]} c_t^{i,-}d ( \bar{\zeta }^i - \zeta ^i)_t \bigg ], \end{aligned}$$

where \(v^i: = \xi ^i - \zeta ^i\) and \( \bar{v} ^i: = \bar{\xi }^i - \bar{\zeta }^i\).

Proof

Take \(i \in \{1,...,N\}\), \(\eta = (\eta ^1,..., \eta ^N) \in \mathcal {A}^{2N}\) and \((\bar{\xi }^i, \bar{\zeta }^i) \in \mathcal {A}^2\). In order to simplify the notation, set \( X:= X^\eta , \ Y^i:= Y^{i,\eta }\) and denote by \(\bar{X}^i\) the solution to the SDE

$$\begin{aligned} d \bar{X}_t^{i}= ( a^i_t + b^i_t \bar{X}_t^{i})dt + \sigma _t^i dW_{t}^i + d \bar{\xi }_{t}^i - d \bar{\zeta }^i_t, \quad \bar{X} _{0-}^{i}=x^i_0. \end{aligned}$$

For later use, we first show the following elementary identity:

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _{[0,T]} Y^i_t d (\bar{v}^i- v^i)_t \bigg ] = {\mathbb {E}}\bigg [ \int _0^T 2 Q_t^{i;i} X_t (\bar{X}^i_t - X ^i_t) dt + 2 Q_T^{i;i} X_T (\bar{X}^i_T - X ^i_T) \bigg ]. \end{aligned}$$
(6)

Indeed, since the process \(\Delta := \bar{X}^i - X ^i\) solves the linear equation \(d \Delta _t = b^i_t \Delta _t dt + d( \bar{v}^i- v^i)_t, \ \Delta _{0-}=0\), by a simple use of Itô’s formula (see Theorem 32 at p. 78 in Protter (2005)) on the process \( \big ( \exp \big (- \int _0^t b^i_r dr \big ) \Delta _t \big )_t \) one can verify that \( \exp \big (- \int _0^t b^i_r dr \big ) (\bar{X}^i_t - X ^i_t) = \int _{[0,t]} \exp \big ( - \int _0^s b^i_r dr \big ) d( \bar{v}^i- v^i)_s\). Thus, recalling the definition of \((\Gamma ^i_{t,s})_{t,s}\) in (5), we find

$$\begin{aligned} \Gamma ^i_{t,0} ( \bar{X}^i_t - X ^i_t) = \int _{[0,t]} \Gamma ^i_{s,0} d( \bar{v}^i- v^i)_s, \end{aligned}$$

which, together with an integration by parts, gives

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T 2Q_t^{i;i}&X_t ( \bar{X}^i_t - X ^i_t) dt + 2 Q_T^{i;i} X_T ( \bar{X}^i_T - X ^i_T) \bigg ]\\ \nonumber&= {\mathbb {E}}\bigg [ \int _0^T \Big ( -\int _t^T \Gamma _{0,s}^i 2 Q_s^{i;i} X_s ds \Big )' \Big ( \int _{[0,t]} \Gamma ^i_{s,0} d( \bar{v}^i- v^i)_s \Big ) dt \\ \nonumber&\quad \quad + \Gamma _{0,T}^i 2 Q_T^{i;i} X_T \Big ( \int _{[0,T]} \Gamma ^i_{t,0} d( \bar{v}^i- v^i)_t \Big ) \bigg ] \\ \nonumber&= {\mathbb {E}}\bigg [ \int _{[0,T]} \Big ( \int _t^T \Gamma _{t,s}^i 2 Q_s^{i;i} X_s ds + \Gamma _{t,T}^i2 Q_T^{i;i} X_T \Big ) d ( \bar{v}^i- v^i)_t \bigg ]. \nonumber \end{aligned}$$
(7)
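In more detail, the Itô computation used above is the following: since \(\Delta \) has no diffusive part,

$$\begin{aligned} d \Big ( e^{- \int _0^t b^i_r dr} \Delta _t \Big ) = - b^i_t \, e^{- \int _0^t b^i_r dr} \Delta _t \, dt + e^{- \int _0^t b^i_r dr} \, d \Delta _t = e^{- \int _0^t b^i_r dr} \, d( \bar{v}^i- v^i)_t, \end{aligned}$$

and integrating over [0, t] with \(\Delta _{0-}=0\) gives \(\Gamma ^i_{t,0} \Delta _t = \int _{[0,t]} \Gamma ^i_{s,0} \, d( \bar{v}^i- v^i)_s\).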

Moreover, Theorem 1.33 in Jacod (1979) implies that

$$\begin{aligned}&{\mathbb {E}}\bigg [ \int _{[0,T]} \Big ( \int _t^T \Gamma _{t,s}^i 2 Q_s^{i;i} X_s ds + \Gamma _{t,T}^i 2 Q_T^{i;i} X_T \Big ) d ( \bar{v}^i- v^i)_t \bigg ] \\&\quad = {\mathbb {E}}\bigg [ \int _{[0,T]} {\mathbb {E}}\Big [ \int _t^T \Gamma _{t,s}^i 2 Q_s^{i;i} X_s ds + \Gamma _{t,T}^i2 Q_T^{i;i} X_T \Big | {\mathcal F}_t \Big ] d ( \bar{v}^i- v^i)_t \bigg ], \end{aligned}$$

which, together with (7) and (4), gives (6).

Now, by the convexity of the maps \(x \mapsto x Q^i_t x\), thanks to (6) we have

$$\begin{aligned} J^i(\bar{\xi }^i, \bar{\zeta }^i; \eta ^{-i})&- J^i(\xi ^i, \zeta ^i; \eta ^{-i}) \\&\ge {\mathbb {E}}\bigg [ \int _0^T 2 Q_t^{i;i} X_t ( \bar{X}^i_t - X ^i_t) dt + 2 Q_T^{i;i} X_T ( \bar{X}^i_T - X ^i_T) \bigg ] \\&\quad \quad + {\mathbb {E}}\bigg [ \int _{[0,T]} c_t^{i,+} d( \bar{\xi }^i - \xi ^i)_t + \int _{[0,T]} c_t^{i,-} d( \bar{\zeta }^i - \zeta ^i)_t \bigg ]\\&= {\mathbb {E}}\bigg [ \int _{[0,T]} Y_t^{i,\eta }d(\bar{v}^i - v^i)_t \bigg ] \\&\quad \quad + {\mathbb {E}}\bigg [ \int _{[0,T]} c_t^{i,+} d( \bar{\xi }^i - \xi ^i)_t + \int _{[0,T]} c_t^{i,-} d( \bar{\zeta }^i - \zeta ^i)_t \bigg ], \end{aligned}$$

completing the proof of the lemma. \(\square \)

Next, we state the following version of the stochastic maximum principle, characterizing the Nash equilibria in terms of the related adjoint processes. Such a theorem originated in Bank and Riedel (2001) for singular control problems, and we refer to Wang et al. (2018) for a more general version in a game context (which contains Theorem 2.2 below as a particular case). Since our existence result (see Theorem 3.2 below) hinges on the stochastic maximum principle, for the reader's convenience we provide a proof.

Theorem 2.2

The admissible profile strategy \( \eta = (\xi , \zeta ) \in \mathcal {A}^{2N}\) is a Nash equilibrium if and only if, for any \(i=1,...,N\), the following conditions hold:

  1. \( Y ^{i,\eta }_t + c_t^{i,+} \ge 0\) and \(- Y ^{i,\eta }_t + c_t^{i,-} \ge 0\), for any \(t \in [0,T], \ \mathbb P\)-a.s.;

  2. \(\int _{[0,T]} ( Y ^{i,\eta }_t + c_t^{i,+}) d \xi ^i_t = 0\) and \(\int _{[0,T]} (- Y ^{i,\eta }_t + c_t^{i,-}) d \zeta ^i_t =0\), \(\mathbb P \)-a.s.

Proof

We first show that these conditions are sufficient for a Nash equilibrium. Fix \(i \in \{1,...,N\}\) and consider \((\bar{\xi }^i, \bar{\zeta }^i) \in \mathcal A ^2\). Using Condition 2, in light of Lemma 2.1 we have

$$\begin{aligned} J^i(\bar{\xi }^i, \bar{\zeta }^i; \eta ^{-i}) - J^i(\xi ^i, \zeta ^i; \eta ^{-i}) \ge&{\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta }+ c_t^{i,+} ) d(\bar{\xi }^i - \xi ^i)_t \\&\quad + \int _{[0,T]} (-Y_t^{i,\eta }+ c_t^{i,-} ) d ( \bar{\zeta }^i - \zeta ^i)_t \bigg ] \\&= {\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta }+ c_t^{i,+} ) d\bar{\xi }^i _t \\&\quad + \int _{[0,T]} (-Y_t^{i,\eta }+ c_t^{i,-} ) d \bar{\zeta }^i_t \bigg ] \ge 0, \end{aligned}$$

where the last inequality follows from Condition 1. Since i is arbitrary, it follows that \(\eta \) is a Nash equilibrium.

We next show that such conditions are necessary. Fix \(i \in \{1,...,N\}\) and consider \(\bar{\xi }^i, \bar{\zeta }^i \in \tilde{\mathcal A }\) with \(\mathbb E[ |\bar{\xi }^i_T |^2 + |\bar{\zeta }^i_T|^2 ] < \infty \). For any \(\varepsilon \in (-1,1) \), define the strategy profile \(\eta ^\varepsilon : = ( \xi ^i + \varepsilon \bar{\xi }^i, \zeta ^i + \varepsilon \bar{\zeta }^i, \eta ^{-i})\). Notice that if either \(\varepsilon >0\), or \(\varepsilon < 0\) and \(\bar{\xi }^i \le \xi ^i, \ \bar{\zeta }^i \le \zeta ^i\), then the strategy profile \(\eta ^\varepsilon \) is admissible. Using that \(\eta \) is a Nash equilibrium, from Lemma 2.1 we have

$$\begin{aligned} \varepsilon {\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta ^\varepsilon }&+ c_t^{i,+} ) d \bar{\xi }^i _t + \int _{[0,T]} (- Y_t^{i,\eta ^\varepsilon } + c_t^{i,-} ) d \bar{\zeta }^i _t \bigg ] \\ \nonumber&\ge {J^i ( \xi ^i + \varepsilon \bar{\xi }^i, \zeta ^i + \varepsilon \bar{\zeta }^i, \eta ^{-i}) - J^i ( \xi ^i, \zeta ^i, \eta ^{-i})} \ge 0. \nonumber \end{aligned}$$
(8)

Moreover, observe that \(X_t^{\eta ^\varepsilon } \rightarrow X_t^\eta \) as \(\varepsilon \rightarrow 0\) for any \(t\in [0,T]\), \(\mathbb P\)-a.s., which in turn implies that \(Y_t^{i,\eta ^\varepsilon } \rightarrow Y_t^{i,\eta }\) as \(\varepsilon \rightarrow 0\) for any \(t\in [0,T]\), \(\mathbb P\)-a.s. Thus, for bounded \(\bar{\xi }^i\) and \(\bar{\zeta }^i\), taking limits as \(\varepsilon \rightarrow 0^+\) in (8), by the dominated convergence theorem we obtain

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta } + c_t^{i,+} ) d \bar{\xi }^i _t + \int _{[0,T]} (- Y_t^{i,\eta } + c_t^{i,-} ) d \bar{\zeta }^i _t \bigg ] \ge 0. \end{aligned}$$

Since \(\bar{\xi }^i\) and \(\bar{\zeta }^i\) are arbitrary, Condition 1 follows. Finally, for \(M \in \mathbb N\), setting \(\bar{\xi }^{i,M}: = \xi ^i \wedge M\) and \(\bar{\zeta }^{i,M}:= \zeta ^i \wedge M\), and taking limits as \(\varepsilon \rightarrow 0^-\) in (8), by the dominated convergence theorem we get

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta } + c_t^{i,+} ) d \bar{\xi }^{i,M} _t + \int _{[0,T]} (- Y_t^{i,\eta } + c_t^{i,-} ) d \bar{\zeta }^{i,M} _t \bigg ] \le 0. \end{aligned}$$

Taking limits as \(M \rightarrow \infty \) in the latter inequality, by the monotone convergence theorem we conclude that

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _{[0,T]} (Y_t^{i,\eta } + c_t^{i,+} ) d \xi ^{i} _t + \int _{[0,T]} (- Y_t^{i,\eta } + c_t^{i,-} ) d \zeta ^{i} _t \bigg ] \le 0, \end{aligned}$$

which in light of Condition 1 implies Condition 2. \(\square \)

3 Existence of Nash equilibria

3.1 Assumptions and main result

Define matrix-valued functions \( \hat{Q}, \bar{Q}:[0,T] \rightarrow \mathbb {R}^{N\times N}\) as

$$\begin{aligned} \hat{Q} _t^{k,j} = 2 q_t^{k,j;k}, \quad \text {and} \quad \bar{Q} _t^{k,j} = {\left\{ \begin{array}{ll} q_t^{k,k;k} &{} \text {if} \quad k = j, \\ 2 q_t^{k,j;k} &{} \text {if} \quad { k\ne j}. \end{array}\right. } \end{aligned}$$
(9)

These two matrices contain some essential information on the strength of the interaction among players. In particular, the entry \(\bar{Q} _t^{k,j}\) measures how relevant the position of player j is for the optimization problem of player k at time t. In order to show the existence of Nash equilibria, we will require \(\bar{Q}\) to be positive definite (see Assumption 3.1 below). This in turn implies that large positions of the players are never convenient at equilibrium (see Remark 2 and Lemma 3.4 below).
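To illustrate the role of (9), the matrix \(\bar{Q}\) and the coercivity constant \(\kappa \) can be computed numerically. The following sketch uses hypothetical constant coefficients for a two-player game; the numerical values are ours, chosen only to exhibit a positive definite \(\bar{Q}\).

```python
import numpy as np

# Hypothetical constant coefficients for a two-player game (N = 2):
# q[k] is the matrix Q^{k+1} = (q^{i,j;k+1}) of player k+1 (symmetric).
q = [np.array([[1.0, -0.2], [-0.2, 0.3]]),   # Q^1
     np.array([[0.3, -0.2], [-0.2, 1.0]])]   # Q^2

N = 2
Qbar = np.empty((N, N))
for k in range(N):
    for j in range(N):
        # definition (9): diagonal entries q^{k,k;k}, off-diagonal 2 q^{k,j;k}
        Qbar[k, j] = q[k][k, j] if k == j else 2.0 * q[k][k, j]

# Since x Qbar x = x Sym(Qbar) x, the coercivity constant kappa is the
# smallest eigenvalue of the symmetric part of Qbar.
kappa = float(np.linalg.eigvalsh((Qbar + Qbar.T) / 2.0).min())
```

For this choice \(\kappa = 0.6 > 0\), so condition 4 of Assumption 3.1 below holds.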

We now summarize the sufficient conditions for the existence of Nash equilibria.

Assumption 3.1

For any \(i=1,...,N\), we require that:

  1. The functions \(a^i, b^i, \sigma ^i, q^{k,j;i}: [0,T] \rightarrow \mathbb {R}\) are bounded, for any \(k,j=1,...,N\);

  2. The functions \(c^{i,+}, c^{i,-}: [0,T] \rightarrow (0,\infty )\) are continuous;

  3. For any \(t\in [0,T]\), the matrix \(Q_t^i\) is symmetric;

  4. For any \(t\in [0,T]\), the matrix \(\bar{Q} _t\) is positive definite (hence, also \(\hat{Q} _t\) is positive definite); i.e., there exists \(\kappa > 0\) such that \( x\bar{Q} _t x \ge \kappa |x|^2\) for any \(x\in \mathbb {R}^N\).

It is worth underlining that some of these requirements are in place for convenience of exposition (in particular, the symmetry of \(Q^i\)): a model in which they are violated is discussed in Sect. 4.

Remark 2

Clearly, the most restrictive hypothesis is the positive definiteness of the matrices \(\bar{Q} _t\). On the one hand, it implies that \(q_t^{i,i;i} \ge \kappa \), and thus that \(x Q _t^i x \ge C (|x^i|^2 - |x^{-i}|^2)\) for any \(x \in \mathbb {R}^N\), for some \(C>0\). This condition is quite standard in singular control (see Dianetti and Ferrari 2023, among others) and it allows one to prove the existence of optimal controls for the single-player optimization problems (i.e., for the control problems \(\inf _{\eta ^i} J^i(\eta ^i, \eta ^{-i})\), parametrized by \(\eta ^{-i}\)). On the other hand, the assumption on the matrix \(\bar{Q}\) represents a coercivity condition on the space of profile strategies, which ensures that the Nash equilibria, whenever they exist, always live in a bounded subset of \(\mathcal {A}^{2N}\) (see the a priori estimates in Lemma 3.4 below). This assumption is different from the more typical requirements used to treat LQ games over an arbitrary time horizon, which instead imply a certain monotonicity (in the sense of Hu and Peng 1995) of the associated forward-backward system of equations (see Sections 5.2.2 and 5.4.3 in Carmona 2016 for more details).

We now state the main result of this paper.

Theorem 3.2

Under Assumption 3.1, there exists a Nash equilibrium.

The proof of Theorem 3.2 is given in the next subsection (see Subsection 3.2), and it consists of several steps. We summarize here the key ideas. First, we introduce a sequence of approximating games where, for any \(n\in \mathbb {N}\), players are restricted to picking strategies \(\xi ^i, \zeta ^i \in \tilde{\mathcal {A}}\) with Lipschitz constant bounded by n. For fixed n, this approximating problem falls into the class of games with bang-bang controls, and we can employ the results in Hamadene and Mannucci (2019) in order to show the existence of a Nash equilibrium \(\eta ^n = (\eta ^{1,n},...,\eta ^{N,n})\). We then show some a priori estimates on the sequence \((\eta ^n)_n\), which in turn allow us to find an accumulation point \(\eta \). Finally, we prove that the limit point satisfies the conditions of Theorem 2.2, hence it is a Nash equilibrium.

3.2 Proof of Theorem 3.2

In the following subsections we prove Theorem 3.2; Assumption 3.1 will be in force throughout. In the proofs, \(C>0\) denotes a generic constant, which might change from line to line.

3.2.1 Nash equilibria for a sequence of approximating games

Define, for each \(n\ge 1\), the n-Lipschitz game as the game in which, for any \(i=1,...,N\), player i is allowed to choose strategies \(\xi ^i\) and \(\zeta ^i\) in the space of n-Lipschitz strategies

$$\begin{aligned} \mathcal {A}_n:=\{ \xi \in \tilde{\mathcal {A}} \text { with Lipschitz constant bounded by }n\text { and }\xi _0=0 \}. \end{aligned}$$

For a given profile strategy \(\eta = (\xi , \zeta ) \in \mathcal {A}_n^{2N}\), player i minimizes the cost \(J^i\) defined as in (3), in which the state equation is replaced by the controlled SDE

$$\begin{aligned} dX_t^{i,\eta }=( a^i_t + b^i_t X_t^{i,\eta } )dt + \sigma ^{i,n}_t dW_{t}^i + d\xi _{t}^i - d \zeta ^i_t, \quad X_{0}^{i,\eta }=x^i_0, \end{aligned}$$
(10)

with strictly elliptic diffusion term

$$\begin{aligned} \sigma _t^{i,n}:= \sigma _t^i \vee \frac{1}{n}. \end{aligned}$$
(11)
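As a purely numerical illustration (not part of the argument), the truncation (11) and the dynamics (10) can be discretized by an Euler-Maruyama scheme; all coefficient and control functions below are hypothetical placeholders, not objects from the paper.

```python
import math
import random

def simulate_state(a, b, sigma, u, w, x0, T=1.0, steps=1000, n=10, seed=0):
    """Euler-Maruyama sketch of the n-Lipschitz state equation (10):
    dX = (a_t + b_t X + u_t - w_t) dt + (sigma_t v 1/n) dW,
    where u, w are the control rates of the Lipschitz game."""
    rng = random.Random(seed)
    dt = T / steps
    x = x0
    for k in range(steps):
        t = k * dt
        vol = max(sigma(t), 1.0 / n)  # truncated volatility (11): sigma v 1/n
        x += (a(t) + b(t) * x + u(t) - w(t)) * dt
        x += vol * rng.gauss(0.0, math.sqrt(dt))
    return x
```

The truncation keeps the diffusion coefficient bounded away from zero (by \(1/n\)), which is precisely the uniform ellipticity exploited in the existence result below.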

We first have the following existence result.

Proposition 3.3

For any \(n \ge 1\), there exists a Nash equilibrium \(\eta ^n = ( \eta ^{1,n},..., \eta ^{N,n}) \in \mathcal {A}_n^{2N}\) of the n-Lipschitz game; that is, \(\eta ^n \in \mathcal {A}_n^{2N}\) such that

$$\begin{aligned} J^i ( \eta ^{i,n}, \eta ^{-i, n}) \le J^i (\bar{\eta }^{i}, \eta ^{-i,n}), \quad \text {for any }\bar{\eta }^i \in \mathcal {A}_n^2, \end{aligned}$$

for any \(i =1,...,N\).

Proof

For each n, the n-Lipschitz game can be reformulated as a stochastic differential game with regular controls by setting

$$\begin{aligned} u_t^i:=\frac{d\xi _t^i}{d t} \quad \text {and} \quad w_t^i:=\frac{d\zeta _t^i}{d t}. \end{aligned}$$

Indeed, in the n-Lipschitz game player i chooses strategies \(u^i, w^i\) in the set

$$\begin{aligned} \mathcal {U}_n:=\{ \text {progressively measurable processes }u\text { with } 0 \le u_t \le n, \ \mathbb {P}\otimes dt\text {-a.e.} \} \end{aligned}$$

in order to minimize the expected cost

$$\begin{aligned}&J^i(\alpha ^i,\alpha ^{-i}) := \mathbb {E} \bigg [ \int _0^T \big ( X_t^\alpha Q^i_t X_t^\alpha + c_t^{i,+} u^i_t + c_t^{i,-} w^i_t \big ) dt + X_T^\alpha Q^i_T X_T^\alpha \bigg ],\nonumber \\&\text {subject to} \quad dX_t^{j,\alpha }=( a^j_t + b^j_t X_t^{j,\alpha } + u_t^j - w_t^j)dt + \sigma ^{j,n}_t dW_{t}^j, \ X_{0}^{j,\alpha }=x^j_0, \nonumber \\&\qquad j=1,...,N. \end{aligned}$$
(12)

Here, we use the notation

$$\begin{aligned} \alpha : =(\alpha ^1,...,\alpha ^N):= ((u^1,w^1),...,(u^N,w^N)) \quad \text {and} \quad X^{\alpha }:= (X^{1,\alpha },...,X^{N,\alpha }). \end{aligned}$$

Thanks to the uniform ellipticity enforced in (11), we can employ Theorem 4.1 in Hamadene and Mannucci (2019) to deduce that, for any \(n\), there exists a Nash equilibrium \(\alpha ^n = (\alpha ^{1,n},..., \alpha ^{N,n})\), with \(\alpha ^{i,n} = (u^{i,n}, w^{i,n}) \in \mathcal U _n^2\). Hence, defining the processes

$$\begin{aligned} \xi ^{i,n}_t:= \int _0^t u_s^{i,n} ds \quad \text {and} \quad \zeta ^{i,n}_t:= \int _0^t w_s^{i,n} ds, \end{aligned}$$
(13)

we have that \(\eta ^n:=(\eta ^{1,n},..., \eta ^{N,n})\) is a Nash equilibrium for the n-Lipschitz game, with \(\eta ^{i,n}:= (\xi ^{i,n}, \zeta ^{i,n})\). \(\square \)

For any \(n \in \mathbb N\), we can now fix a Nash equilibrium \(\eta ^n \in \mathcal A _n^{2N}\), which is given in terms of the equilibrium \(\alpha ^n\) of the game in (12). We proceed by characterizing these equilibria via the stochastic maximum principle (see Chapter 5 in Carmona 2016).

When viewed as a stochastic differential game with regular controls (see (12)), the pre-Hamiltonians of the n-Lipschitz game write as

$$\begin{aligned} H^{i,n}(t,x,u, w;y^1,...,y^N):= \sum _{j=1}^N (a^j_t + b^j_t x^j + u^j -w^j) y^j + x Q^i_t x + c_t^{i,+} u^i + c_t^{i,-} w^i, \end{aligned}$$

for any \(i=1,...,N\), \((t,x) \in [0,T] \times \mathbb {R}^N\), \(u=(u^1,...,u^N), \, w=(w^1,...,w^N) \in [0,n]^N\) and \(y:=(y^1,...,y^N) \in \mathbb {R}^N\). In particular, the function \(H^{i,n}\) represents the pre-Hamiltonian related to the optimization problem of player i.

Define the process \(( X ^{n}, Y ^{n}, Z ^{n}) = (X^{1,n},...,X^{N,n}, Y^{1,n},...,Y^{N,n},Z^{1,n},...,Z^{N,n})\), with \( (X^{i,n},Y^{i,n},Z^{i,n}) \in \mathbb H^2 \times \mathbb H^{2,N} \times \mathbb H^{2,N\times N}\), as the unique solution of the forward-backward stochastic differential equation (FBSDE, in short)

$$\begin{aligned} {\left\{ \begin{array}{ll} d X^{i,n}_t &{} = \big ( a^i_t + b^i_t X ^{i,n}_t + u_t^{i,n} - w_t^{i,n} \big )dt + \sigma ^{i,n}_tdW^i_t, \quad X_{0}^{i,n}=x^i_0, \quad i=1,...,N,\\ d Y^{i,n}_t &{} = - D_x H^{i,n} ( t, X_t^{n}, Y _t^{n}) dt + Z_t^{i,n} dW_t, \quad Y_T^{i,n} = 2 Q^{i}_T X_T^{n}, \quad i=1,...,N. \end{array}\right. } \end{aligned}$$

The necessary conditions of the stochastic maximum principle (see Theorem 5.19 at p. 187 in Carmona 2016) characterize the Nash equilibria as the minimizers of the pre-Hamiltonian; that is, for any \(i=1,...,N\) we have

$$\begin{aligned} H^{i,n}&(t,X_t^n, u^{1,n}_t,...,u_t^{N,n}, w_t^{1,n},...,w_t^{N,n} ;Y_t^{1,i,n},...,Y_t^{N,i,n}) \nonumber \\&= \inf _{ u^i, w^i \in [0,n] } H^{i,n}(t,X_t^n, (u^i,u_t^{-i,n}), (w^i,w_t^{-i,n}) ;Y_t^{1,i,n},...,Y_t^{N,i,n}), \quad \mathbb P \otimes dt\text {-a.e.} \end{aligned}$$
(14)

Now, we can compute the optimal feedbacks for player i, as multivalued functions \( \hat{u} ^{i,n}\) and \(\hat{w} ^{i,n}\) from \([0,T] \times \mathbb {R}^{N} \times [0,n]^{2(N-1)} \times \mathbb {R}^N \) into the subsets of [0, n]. Indeed, by setting

$$\begin{aligned} (\hat{u} ^{i,n}, \hat{w}^{i,n}) (t,x,u^{-i}, w^{-i} ;&y^1,...,y^N)\\&:=\mathop {\mathrm {arg\,min}}\limits _{ u^i, w^i \in [0,n] } H^{i,n}(t,x,u^1,...,u^N, w^1,...,w^N ;y^1,...,y^N) \\&= \Big ( \mathop {\mathrm {arg\,min}}\limits _{ u^i \in [0,n] } u^i(y^i + c_t^{i,+}), \mathop {\mathrm {arg\,min}}\limits _{ w^i \in [0,n] } w^i(-y^i + c_t^{i,-}) \Big ), \end{aligned}$$

and using the notation \(n A:= \{ n a \, | \, a \in A \}\) for \(A \subset \mathbb {R}\), we obtain

$$\begin{aligned} \hat{u} ^{i,n}(t,y^{i}) = n {\left\{ \begin{array}{ll} \{ 1 \} &{} \text { if } y^i + c_t^{i,+} < 0 , \\ {[}0,1] &{} \text { if } y^i + c_t^{i,+} = 0 , \\ \{0 \} &{} \text { if } y^i + c_t^{i,+} > 0 , \\ \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \hat{w} ^{i,n}(t,y^i)= n {\left\{ \begin{array}{ll} \{ 1 \} &{} \text { if } - y^i + c_t^{i,-} < 0 , \\ {[}0,1] &{} \text { if } - y^i + c_t^{i,-} = 0 , \\ \{0 \} &{} \text { if } - y^i + c_t^{i,-} > 0 . \\ \end{array}\right. } \end{aligned}$$

Hence, the necessary conditions (14) rewrite as

$$\begin{aligned} u _t^{i,n} \in \hat{u}^{i,n} (t,Y_t^{i,i,n}) \quad \text {and} \quad w _t^{i,n}\in \hat{w}^{i,n} (t,Y_t^{i,i,n}), \quad \mathbb P \otimes dt\text {-a.e.} \end{aligned}$$
(15)

Since \(\hat{u} ^{i,n}\) and \(\hat{w} ^{i,n}\) depend only on \(Y^{i,i,n}\) and since the equation for \(Y^{i,i,n}\) does not depend on \(Y^{j,i,n}\) for \(j \ne i\), one can reduce the FBSDE above. In particular, with a slight abuse of notation (i.e., writing \((Y ^{i,n}, Z ^{i,n})\) instead of \((Y ^{i,i,n}, Z ^{i,i,n})\)) we have \( u _t^{i,n} = \hat{u} ^{i,n}(t, Y_t^{i,n})\) and \(w _t^{i,n} = \hat{w}^{i,n}(t, Y_t^{i,n})\), with \(( X ^{n}, Y ^{n}, Z ^{n})= (X^{1,n},...,X^{N,n}, Y ^{1,n},...,Y ^{N,n}, Z^{1,n},..., Z ^{N,n})\) solving the FBSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} d X^{i,n}_t &{} = \big ( a^i_t + b^i_t X ^{i,n}_t+ u_t^{i,n} -w_t^{i,n} \big ) dt + \sigma ^{i,n}_t dW^i_t, \quad X_{0}^{i,n}=x^i_0, \quad i=1,...,N,\\ d Y^{i,n}_t &{} = - ( 2 Q_t^{i;i} X _t^{n} + b_t^i Y_t^{i,n}) dt + Z _t^{i,n} dW_t, \quad Y_T^{i,n} = 2 Q^{i;i}_T X_T^{n}, \quad i=1,...,N. \\ \end{array}\right. } \end{aligned}$$

Moreover, using (13) and noticing that the equations for \(Y^{i,n}\) are linear, by Proposition 6.2.1 at p. 142 in Pham (2009) (as in (4)) we can rewrite this system as

$$\begin{aligned} {\left\{ \begin{array}{ll} d X^{i,n}_t &{} = \big ( a^i_t + b^i_t X ^{i,n}_t \big ) dt + \sigma ^{i,n}_t dW^i_t + d \xi ^{i,n}_t - d \zeta ^{i,n}_t, \quad X_{0}^{i,n}=x_0^i, \quad i=1,...,N,\\ Y_t^{i,n} &{}= 2 {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Q^{i;i}_T X^n_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds \Big | \mathcal F _t \Big ], \quad i=1,...,N, \end{array}\right. }\nonumber \\ \end{aligned}$$
(16)

and the necessary conditions for the n-Lipschitz game (see (15)) translate into

$$\begin{aligned} \xi ^{i,n}_t = \int _0^t u_s^{i,n} ds, \ u_t^{i,n} \in \hat{u}^{i,n} (t,Y_t^{i,n}) \quad \text {and} \quad \zeta ^{i,n}_t = \int _0^t w_s^{i,n} ds, \ w_t^{i,n} \in \hat{w}^{i,n} (t,Y_t^{i,n}).\nonumber \\ \end{aligned}$$
(17)
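For completeness, the representation of \(Y^{i,n}\) in (16) can be verified directly. Assuming, as suggested by the notation in (4), that \(\Gamma ^i_{t,s} = \exp \big ( \int _t^s b^i_r dr \big )\), an application of Itô's formula to \(s \mapsto \Gamma ^i_{t,s} Y_s^{i,n}\) gives

$$\begin{aligned} d \big ( \Gamma ^i_{t,s} Y_s^{i,n} \big ) = \Gamma ^i_{t,s} \big ( b^i_s Y_s^{i,n} ds + dY_s^{i,n} \big ) = - 2 \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds + \Gamma ^i_{t,s} Z_s^{i,n} dW_s, \end{aligned}$$

so that, integrating between t and T and taking conditional expectations (the stochastic integral being a true martingale, since \(Z^{i,n} \in \mathbb H ^{2,N\times N}\) and \(\Gamma ^i\) is bounded), we recover

$$\begin{aligned} Y_t^{i,n} = {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Y_T^{i,n} + 2 \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds \Big | \mathcal F _t \Big ] = 2 {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Q^{i;i}_T X^n_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds \Big | \mathcal F _t \Big ], \end{aligned}$$

which is the second line of (16) after substituting the terminal condition.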

3.2.2 A priori estimates for Nash equilibria and convergence to a limit point

From the previous subsection, we can fix a sequence \((\xi ^n, \zeta ^n) _n\) of Nash equilibria of the n-Lipschitz games and consider the associated sequence \((X^n,Y^n,\xi ^n,\zeta ^n)_n\) of the solutions to the FBSDE (16) which satisfy the conditions (17).

We begin with the following a priori estimates on the moments of these Nash equilibria, which will be used to find limit points of the sequence \((X^n,Y^n,\xi ^n,\zeta ^n)_n\) in Proposition 3.5.

Lemma 3.4

We have

$$\begin{aligned} \sup _n \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ]< \infty \quad \text {and} \quad \sup _n \mathbb E \bigg [ |\xi ^n_T| + |\zeta ^n_T| + \sup _{t\in [0,T]} |X_t^n| \bigg ] < \infty . \end{aligned}$$

Proof

We divide the proof into two steps.

Step 1. For \(i=1,...,N\), let \(\tilde{X}^{i,n}\) denote the solution to the SDE (10) controlled by 0; that is, to the SDE

$$\begin{aligned} d \tilde{X}_t^{i,n}=( a^i_t + b^i_t \tilde{X}_t^{i,n} )dt + \sigma ^{i,n}_t dW_{t}^i, \quad \tilde{X} _{0}^{i,n}=x^i_0. \end{aligned}$$
(18)

Since \(\eta ^n\) is a Nash equilibrium for the n-Lipschitz game, we have \(J^i(\eta ^{i,n}, \eta ^{-i,n}) \le J^i (0,0, \eta ^{-i,n})\), from which we obtain

$$\begin{aligned} \mathbb E \bigg [ \int _0^T&X ^{n}_t Q_t^i X ^{n}_t dt + X ^{n}_T Q^i_T X^n_T \bigg ] \\ \nonumber&\le J^i(\eta ^{i,n}, \eta ^{-i,n}) \le J^i(0,0, \eta ^{-i,n}) \\ \nonumber&= \mathbb E \bigg [ \int _0^T \Big ( q^{i,i;i}_t (\tilde{X}_t^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_t \tilde{X}_t^{i,n} X_t^{j,n} + \sum _{k\ne i,j \ne i } q^{k,j;i}_t X_t^{k,n} X_t^{j,n} \Big ) dt \bigg ] \\ \nonumber&\quad + \mathbb E \bigg [ q^{i,i;i}_T (\tilde{X}_T^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_T \tilde{X}_T^{i,n} X_T^{j,n} + \sum _{k\ne i,j \ne i } q^{k,j;i}_T X_T^{k,n} X_T^{j,n} \bigg ], \nonumber \end{aligned}$$

which in turn rewrites as

$$\begin{aligned} \mathbb E \bigg [ \int _0^T&\Big ( q^{i,i;i}_t ( X_t^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_t X_t^{i,n} X_t^{j,n} \Big ) dt + q^{i,i;i}_T ( X_T^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_T X_T^{i,n} X_T^{j,n} \bigg ] \\&\le \mathbb E \bigg [ \int _0^T \Big ( q^{i,i;i}_t (\tilde{X}_t^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_t \tilde{X}_t^{i,n} X_t^{j,n}\Big ) dt \\&\quad \quad \quad \quad \quad \quad \quad \quad + q^{i,i;i}_T (\tilde{X}_T^{i,n})^2 + 2 \sum _{j \ne i} q^{i,j;i}_T \tilde{X}_T^{i,n} X_T^{j,n} \bigg ]. \end{aligned}$$

Therefore, summing over \(i=1,...,N\), for \(\bar{Q}\) as in (9) and for

$$\begin{aligned} \tilde{Q}_t^{k,j}:= {\left\{ \begin{array}{ll} 0 &{} \text {if} \quad k = j, \\ 2 q_t^{k,j;k} &{} \text {if} \quad { k\ne j}. \end{array}\right. } \quad \text {and} \quad \tilde{X}^n:=(\tilde{X}^{1,n},...,\tilde{X} ^{N,n}), \end{aligned}$$

using the integrability of \(\tilde{X}^n\), we find

$$\begin{aligned} \mathbb E \bigg [ \int _0^T X_t^{n} \bar{Q} _t X_t^{n} dt + X_T^{n} \bar{Q} _T X_T^{n} \bigg ] \le C \bigg ( 1+ \mathbb E \bigg [ \int _0^T \tilde{X}_t^n \tilde{Q}_t X _t^{n} dt + \tilde{X}_T ^n \tilde{Q}_T X_T^{n} \bigg ] \bigg ), \end{aligned}$$

and, since \(\bar{Q} > 0\) (cf. Condition 4 in Assumption 3.1), we deduce that

$$\begin{aligned} \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ] \le C \bigg ( 1+ \mathbb E \bigg [ \int _0^T \tilde{X} _t^n \tilde{Q} _t X_t^{n} dt + \tilde{X}_T ^n \tilde{Q} _T X_T^{n} \bigg ] \bigg ). \end{aligned}$$

By employing Hölder's inequality with exponent 2 in the latter estimate, we obtain

$$\begin{aligned} \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ]&\le C \bigg ( 1+ \sum _{i,j=1}^N \mathbb E \bigg [\int _0^T |\tilde{X}^{i,n}_t|| X ^{j,n}_t | dt + |\tilde{X}^{i,n}_T|| X ^{j,n}_T |\bigg ] \bigg ) \\&\le C \bigg ( 1 + \sum _{i,j=1}^N \bigg ( \mathbb E \bigg [\int _0^T |\tilde{X}^{i,n}_t|^{2} dt + |\tilde{X}^{i,n}_T|^{2} \bigg ] \bigg )^{\frac{1}{2}} \\&\quad \times \bigg ( \mathbb E \bigg [ \int _0^T | X ^{j,n}_t |^2 dt + | X ^{j,n}_T |^{2} \bigg ] \bigg )^{\frac{1}{2}} \bigg ). \end{aligned}$$

Hence, again by the integrability of \(\tilde{X}^n\), we get

$$\begin{aligned} \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ] \le C \bigg ( 1+ \bigg ( \mathbb E \bigg [ \int _0^T | X ^{n}_t |^{2} dt + | X ^{n}_T |^{2} \bigg ] \bigg )^{\frac{1}{2}} \bigg ), \end{aligned}$$

which in turn implies that

$$\begin{aligned} \sup _n \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ] < \infty , \end{aligned}$$
(19)

thus proving the first part of the statement.
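The last implication relies on the following elementary self-bounding argument: setting \(m_n := \mathbb E [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 ]\), the previous estimate reads \(m_n \le C(1 + \sqrt{m_n})\) with C independent of n, and

$$\begin{aligned} m_n - C \sqrt{m_n} - C \le 0 \quad \Longrightarrow \quad \sqrt{m_n} \le \frac{C + \sqrt{C^2 + 4C}}{2}, \end{aligned}$$

so that \(\sup _n m_n \le \big ( C + \sqrt{C^2 + 4C} \big )^2/4 < \infty \).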

Step 2. We now estimate \(\xi _{T}^n\) and \(\zeta _T^n\). Fix \(i\in \{1,...,N\}\) and, for \(\tilde{X}^{i,n}\) denoting the solution to the SDE (18), set \(\bar{X}^n = (\bar{X}^{1,n},...,\bar{X}^{N,n})\) by \(\bar{X}^{j,n}:= X^{j,n}\) if \(j\ne i\) and \(\bar{X}^{i,n}:= \tilde{X}^{i,n}\). By optimality of \(\eta ^n\) we have

$$\begin{aligned} \mathbb E \bigg [ \int _0^T X_t^{n} Q ^i_t X_t^{n} dt&+ X_T^{n} Q ^i_T X_T^{n} + \int _{[0,T]} (c^{i,+}_t d\xi ^{i,n}_t + c^{i,-}_t d\zeta ^{i,n}_t) \bigg ] \\&\le J^i(\eta ^{i,n}, \eta ^{-i,n}) \le J^i(0,0, \eta ^{-i,n}) \\&= \mathbb E \bigg [ \int _0^T \bar{X}_t^{n} Q ^i_t \bar{X}_t^{n} dt + \bar{X}_T^{n} Q ^i_T \bar{X}_T^{n} \bigg ]. \end{aligned}$$

Now, using the fact that \(c^{i,+}_t, c^{i,-}_t \ge \bar{c} >0\) (see Condition 2 in Assumption 3.1) and employing Hölder's inequality with exponent 2, we find

$$\begin{aligned} \bar{c} \mathbb E \Big [ \xi ^{i,n}_T + \zeta ^{i,n}_T \Big ]&\le \mathbb E \bigg [ \int _{[0,T]} (c^{i,+}_t d\xi ^{i,n}_t + c^{i,-}_t d\zeta ^{i,n}_t) \bigg ] \\&\le \mathbb E \bigg [ \int _0^T \Big ( \bar{X}_t^{n} Q ^i_t \bar{X}_t^{n} - X_t^{n} Q ^i_t X_t^{n}\Big ) dt + \bar{X}_T^{n} Q ^i_T \bar{X}_T^{n} - X_T^{n} Q ^i_T X_T^{n} \bigg ] \\&\le C \bigg ( 1+ \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ] \bigg ), \end{aligned}$$

so that, thanks to the estimates in (19), we obtain

$$\begin{aligned} \sup _n \mathbb E [ \xi ^{i,n}_T + \zeta ^{i,n}_T] < \infty . \end{aligned}$$

Finally, since \(i \in \{1,...,N\}\) is arbitrary, we obtain

$$\begin{aligned} \sup _n \mathbb E [ |\xi _T^n| + |\zeta ^n_T|] \le \sup _n \sum _{i=1}^N \mathbb E [ \xi ^{i,n}_T +\zeta ^{i,n}_T] < \infty , \end{aligned}$$

and, by a classical Grönwall estimate, we conclude that

$$\begin{aligned} \sup _n \mathbb E \bigg [ \sup _{t\in [0,T]} |X_t^n| \bigg ] < \infty , \end{aligned}$$

completing the proof. \(\square \)

We are now ready to identify the accumulation points of the sequence of Nash equilibria \((X^n,Y^n,\xi ^n,\zeta ^n)_n\) of the n-Lipschitz game. To this end, for a generic \(d \in \mathbb N\), introduce the Hilbert space \(\mathbb H _T^{2,d}\) with norm \(\Vert \cdot \Vert _T^{2,d} \) defined as

$$\begin{aligned} \mathbb H _T^{2,d}:= \{ M \in \mathbb H ^{2,d} \text { s.t. } \Vert M \Vert _T^{2,d} < \infty \} \quad \text {and} \quad \Vert M \Vert _T^{2,d}: = \mathbb E \bigg [ \int _0^T |M_t|^2 dt + |M_T|^2 \bigg ], \end{aligned}$$

and set \(\mathbb H ^{2}_T:= \mathbb H ^{2,1}_T\). Also, on \( \mathbb H _T^{2,d}\) we can consider the weak convergence; that is, for \(M,\, M^n \in \mathbb H _T^{2,d}\), \(n\in \mathbb N\), we say that

$$\begin{aligned} M^n \rightarrow M \text { as }n\rightarrow \infty ,\text { weakly in }\mathbb H _T^{2,d}, \end{aligned}$$

if, for any \(H \in \mathbb H _T^{2,d}\), one has

$$\begin{aligned} \lim _n {\mathbb {E}}\bigg [ \int _0^T H_t M_t^{n} dt + H_T M_T^{n} \bigg ] = {\mathbb {E}}\bigg [ \int _0^T H_t M_t dt + H_T M_T \bigg ]. \end{aligned}$$

We now state the following convergence result in which we identify a candidate Nash equilibrium as a limit point of the sequence \((X^n, Y^n, \xi ^n, \zeta ^n)_n\).

Proposition 3.5

There exists a subsequence of \((X^n, Y^n, \xi ^n, \zeta ^n)_n\) (still indexed by n) and processes \((X, Y) = (X^1,..., X^N, Y^1,...,Y^N) \in \mathbb H _T^{2,2N}\) and \(( \xi , \zeta ) = (\xi ^1,...,\xi ^N, \zeta ^1,...,\zeta ^N) \in \tilde{\mathcal A }^{2N}\) such that:

  1. \((X^n, Y^n)_n \rightarrow (X, Y)\) as \(n\rightarrow \infty \), weakly in \(\mathbb H _T^{2,2N}\);

  2. For \(\bar{\xi }^{i,m}:= \frac{1}{m} \sum _{n=1}^m \xi ^{i,n}\) and \(\bar{\zeta }^{i,m}:= \frac{1}{m} \sum _{n=1}^m \zeta ^{i,n}\), for \(\mathbb {P}\)-a.a. \(\omega \in \Omega \), the convergence

    $$\begin{aligned} \bar{\xi }_t^{i,m}(\omega ) \rightarrow \xi _t^{i} (\omega ) \text { for any continuity point of }\xi ^i(\omega )\text { and } \bar{\xi }_T^{i,m}(\omega ) \rightarrow \xi _T^{i}(\omega ), \\ \nonumber \bar{\zeta }_t^{i,m}(\omega ) \rightarrow \zeta _t^{i} (\omega ) \text { for any continuity point of }\zeta ^i(\omega )\text { and } \bar{\zeta }_T^{i,m}(\omega ) \rightarrow \zeta _T^{i}(\omega ), \nonumber \end{aligned}$$

    holds as \(m \rightarrow \infty \), for any \(i=1,...,N\);

  3. The profile strategy \((\xi , \zeta )\) is admissible.

Proof

Since the process \(Y^{i,n}\) solves the BSDE in (16), we have

$$\begin{aligned} {\mathbb {E}}\big [ |Y^{i,n}_t|^2 \big ] \le C {\mathbb {E}}\bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ], \end{aligned}$$

so that, by Lemma 3.4, we have

$$\begin{aligned} \sup _n {\mathbb {E}}\bigg [ \int _0^T |Y_t^{n}|^2 dt + | Y_T^{n}|^2 \bigg ] < \infty . \end{aligned}$$

The latter, together with the estimates in Lemma 3.4, allows us to find a subsequence of \((X^n, Y^n)_n\) (still labelled by n) and a process \((X,Y) = (X^1,..., X^N, Y^1,...,Y^N)\) such that \((X^n, Y^n)\) converges to \((X,Y)\) as \(n\rightarrow \infty \), weakly in \(\mathbb H _T^{2,2 N}\).

We next identify the limits for the sequence \((\xi ^n,\zeta ^n)_n\). By the estimates in Lemma 3.4, thanks to Lemma 3.5 in Kabanov (1999) we can find processes \(\xi = (\xi ^1,...,\xi ^N) \in \tilde{\mathcal {A}} ^N\) and \(\zeta = (\zeta ^1,...,\zeta ^N) \in \tilde{\mathcal {A}} ^N\) and a subsequence of indexes (not relabelled) such that, for any further subsequence, by setting

$$\begin{aligned} \bar{\xi }^{i,m}:= \frac{1}{m} \sum _{n=1}^m \xi ^{i,n} \quad \text {and} \quad \bar{\zeta }^{i,m}:= \frac{1}{m} \sum _{n=1}^m \zeta ^{i,n}, \end{aligned}$$

we have, for \(\mathbb {P}\)-a.a. \(\omega \in \Omega \), the convergence

$$\begin{aligned} \bar{\xi }_t^{i,m}(\omega ) \rightarrow \xi _t^{i} (\omega ) \text { for any continuity point of }\xi ^i(\omega )\text { and } \bar{\xi }_T^{i,m}(\omega ) \rightarrow \xi _T^{i}(\omega ), \\ \nonumber \bar{\zeta }_t^{i,m}(\omega ) \rightarrow \zeta _t^{i} (\omega ) \text { for any continuity point of }\zeta ^i(\omega )\text { and } \bar{\zeta }_T^{i,m}(\omega ) \rightarrow \zeta _T^{i}(\omega ), \nonumber \end{aligned}$$
(20)

as \(m \rightarrow \infty \) for any \(i=1,...,N\). On the other hand, for any \(i=1,...,N\), we can define the processes \(v^{i,n}:= \xi ^{i,n}-\zeta ^{i,n}\) and, by Lemma 3.4, we have

$$\begin{aligned} \sup _n {\mathbb {E}}\bigg [ \int _0^T |v^{i,n}_t|^2 dt + |v^{i,n}_T|^2 \bigg ] \le C \sup _n {\mathbb {E}}\bigg [ \int _0^T |X_t^{i,n}|^2 dt + | X_T^{i,n}|^2 \bigg ] < \infty . \end{aligned}$$

Thus, there exists a further subsequence (again, not relabelled) and a process \(v^i \in \mathbb H ^{2}_T \) such that

$$\begin{aligned} v^{i,n} \rightarrow v^i \text { as }n\rightarrow \infty ,\text { weakly in }\mathbb H _T^{2}. \end{aligned}$$
(21)

Moreover, by the Banach-Saks theorem, we can find another subsequence of \((v^{i,n})_n\) (still labelled by n) such that \(\bar{v}^{i,m}:= \frac{1}{m} \sum _{n=1}^m v^{i,n} \rightarrow v^i\), as \(m \rightarrow \infty \), strongly in \(\mathbb H ^{2}_T\). Thus, up to a subsequence (still labelled by m), we have the convergence

$$\begin{aligned} \bar{v} _t^{i,m} \rightarrow v^i_t, \text { as }m\rightarrow \infty , \mathbb {P}\otimes dt\text {-a.e. in }\Omega \times [0,T]. \end{aligned}$$

The latter limit, together with (20), implies that \(v^i = \xi ^i - \zeta ^i\).

Finally, since \(v^i \in \mathbb H ^{2}_T\), we conclude that \((\xi ^i,\zeta ^i)\) is admissible, completing the proof of the proposition. \(\square \)
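The role of the Banach-Saks theorem in the proof above can be illustrated numerically: \(f_n(t) = \sin (nt)\) converges to 0 weakly in \(L^2(0,2\pi )\) but not strongly, while the Cesàro means \(\frac{1}{m}\sum _{n=1}^m f_n\) do converge strongly. The sketch below (a pure illustration, unrelated to the game) checks this on a quadrature grid.

```python
import math

def l2_norm(f, a=0.0, b=2 * math.pi, grid=4000):
    """Midpoint-rule approximation of the L^2(a, b) norm of f."""
    dt = (b - a) / grid
    return math.sqrt(sum(f(a + (k + 0.5) * dt) ** 2 for k in range(grid)) * dt)

def cesaro_mean(m):
    """Cesaro mean (1/m) * sum_{n=1}^m sin(n t), as a function of t."""
    return lambda t: sum(math.sin(n * t) for n in range(1, m + 1)) / m

# each sin(n .) has L^2 norm sqrt(pi) ~ 1.77 (no strong convergence), while
# by orthogonality the Cesaro means have norm sqrt(pi / m) -> 0
```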

3.2.3 Properties of limit points

In the next two propositions we show that the accumulation point \((X,Y,\xi ,\zeta )\) satisfies the conditions of Theorem 2.2.

Proposition 3.6

The process \((X,Y,\xi ,\zeta )\) solves the FBSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} d X^{i}_t &{} = \big ( a^i_t + b^i_t X ^{i}_t \big ) dt + \sigma ^i_t dW^i_t + d \xi ^{i}_t - d \zeta ^{i}_t, \quad X_{0-}^{i}=x^i_0, \quad i=1,...,N,\\ Y_t^{i} &{}= 2 {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Q^{i;i}_T X_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X_s ds \Big | \mathcal F _t \Big ], \quad i=1,...,N. \end{array}\right. }\nonumber \\ \end{aligned}$$
(22)

Proof

Take \(i \in \{1,...,N\}\). We first prove that \(X^i\) solves the forward equation. Since, for any n, the process \(X^{i,n}\) solves the forward equation in (16), we have

$$\begin{aligned} X_t^{i,n} = A_t^{i,n} + \int _0^t b_s^i X^{i,n}_s ds + v_t^{i,n}, \quad A_t^{i,n}:= x_0^i+ \int _0^t a_s^i ds + \int _0^t \sigma _s^{i,n} dW_s^i. \end{aligned}$$

Then, for any \(M \in \mathbb H ^2\), via an integration by parts we obtain

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T M_t X^{i,n}_t dt \bigg ]&= {\mathbb {E}}\bigg [ \int _0^T M_t \Big (A_t^{i,n} + \int _0^t b_s^i X^{i,n}_s ds + v_t^{i,n} \Big ) dt \bigg ] \\&= {\mathbb {E}}\bigg [ \int _0^T M_t (A_t^{i,n} + v_t^{i,n})dt \bigg ] \\&\quad +{\mathbb {E}}\bigg [ \int _0^T \Big ( \int _0^T M_s ds \Big ) b_t^i X^{i,n}_t dt - \int _0^T \Big ( \int _0^t M_s ds \Big ) b_t^i X^{i,n}_t dt \bigg ]. \end{aligned}$$

Notice that, from the definition of \(\sigma ^{i,n}\) in (11), we have

$$\begin{aligned} \lim _n \mathbb E \bigg [ \int _0^T |A^{i,n}_t - A^i_t|^2 dt \bigg ] = 0, \quad \text { where } \quad A_t^{i}:= x_0^i+ \int _0^t a_s^i ds + \int _0^t \sigma _s^{i} dW_s^i. \end{aligned}$$

Hence, the convergence established in Proposition 3.5 (see also (21)) allows us to take limits as \(n\rightarrow \infty \) in the latter equality and, integrating by parts again, we conclude that

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T M_t X^{i}_t dt \bigg ] = {\mathbb {E}}\bigg [ \int _0^T M_t \Big (A_t^i + \int _0^t b_s^i X^{i}_s ds + v_t^{i} \Big ) dt \bigg ], \quad \text {for any }M \in \mathbb H^2. \end{aligned}$$

Thus, the forward equation \(X_t^{i} = A_t^i + \int _0^t b_s^i X^{i}_s ds + v_t^{i}\) holds \(\mathbb P \otimes dt \)-a.e.

We now prove that \(Y^i\) solves the backward equation. Since, for any n, the process \(Y^{i,n}\) solves the backward equation in (16), we have, for any \(M \in \mathbb H _T^2\), the identity

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T M_t Y^{i,n}_t dt \bigg ] = 2 {\mathbb {E}}\bigg [ \int _0^T M_t \Big ( {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Q^{i;i}_T X^n_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds \Big | \mathcal F _t \Big ] \Big ) dt \bigg ], \end{aligned}$$

and, by using Theorem 1.33 in Jacod (1979) and then an integration by parts, we obtain

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T M_t Y^{i,n}_t dt \bigg ]&= 2 {\mathbb {E}}\bigg [ \int _0^T M_t \Big ( \Gamma ^i_{t,T} Q^{i;i}_T X^n_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X^n_s ds \Big ) dt \bigg ] \\&= 2 {\mathbb {E}}\bigg [ X^n_T Q^{i;i}_T \int _0^T M_t \Gamma ^i_{t,T} dt + \int _0^T \Big ( \int _0^t M_s \Gamma ^i_{s,t} ds \Big ) Q^{i;i}_t X^n_t dt \bigg ]. \end{aligned}$$

Finally, thanks to the convergence established in Proposition 3.5, we can take limits as \(n\rightarrow \infty \) in the latter equality and, using the same steps backward, we conclude that

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T M_t Y^{i}_t dt \bigg ] = 2 {\mathbb {E}}\bigg [ \int _0^T M_t \Big ( {\mathbb {E}}\Big [ \Gamma ^i_{t,T} Q^{i;i}_T X_T + \int _t^T \Gamma ^i_{t,s} Q^{i;i}_s X_s ds \Big | \mathcal F _t \Big ] \Big ) dt \bigg ], \end{aligned}$$

for any \(M \in \mathbb H _T^2\), so that \(Y^i\) solves the backward equation. \(\square \)

Proposition 3.7

For every \(i=1,...,N\), the following conditions hold true:

  1. \( Y ^{i}_t + c_t^{i,+} \ge 0\) and \(- Y ^{i}_t + c_t^{i,-} \ge 0\), for any \(t \in [0,T], \ \mathbb P\)-a.s.;

  2. \(\int _{[0,T]} ( Y ^{i}_t + c_t^{i,+}) d \xi ^i_t = 0\) and \(\int _{[0,T]} (- Y ^{i}_t + c_t^{i,-}) d \zeta ^i_t =0\), \(\mathbb P \)-a.s.

Proof

We prove each claim separately.

Proof of 1. By Lemma 2.1, we have

$$\begin{aligned} J^i( 0,0 ; \eta ^{-i,n}) - J^i(\xi ^{i,n}&, \zeta ^{i,n}; \eta ^{-i,n} ) \\ \nonumber&\ge - \mathbb E \bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+}) d\xi ^{i,n}_t +\int _0^T (- Y_t^{i,n} + c_t^{i,-}) d\zeta ^{i,n}_t \bigg ], \end{aligned}$$
(23)

where, in the last inequality, we have used the integrability of \(\xi ^{i,n}\) and \(\zeta ^{i,n}\). Next, for \(y \in \mathbb R\), set \(y^+:= \max \{y,0\}\) and \(y^-:= \max \{ - y, 0 \}\). By using the necessary conditions in (17), we obtain

$$\begin{aligned} n \mathbb E \bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+})^- dt&+ \int _0^T (- Y_t^{i,n} + c_t^{i,-})^- dt \bigg ] \\&\le J^i( 0,0 ; \eta ^{-i,n}) - J^i(\xi ^{i,n}, \zeta ^{i,n}; \eta ^{-i,n} ), \end{aligned}$$

so that, thanks to the boundedness of \(Q^i\), we have

$$\begin{aligned} n \mathbb E \bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+})^- dt&+ \int _0^T (- Y_t^{i,n} + c_t^{i,-})^- dt \bigg ] \\&\le C\bigg ( 1+ \mathbb E \bigg [ \int _0^T |X_t^{n}|^2 dt + | X_T^{n}|^2 \bigg ] \bigg ). \end{aligned}$$

Hence, by Lemma 3.4 we deduce that

$$\begin{aligned} \lim _n \mathbb E \bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+})^- dt \bigg ] = \lim _n \mathbb E \bigg [ \int _0^T (- Y_t^{i,n} + c_t^{i,-})^- dt \bigg ] = 0. \end{aligned}$$
(24)

From the latter equality, using that \(Y^{i,n}\) converges weakly to \(Y^i\) as \(n\rightarrow \infty \) (cf. Proposition 3.5), we deduce that

$$\begin{aligned} 0 \le {\mathbb {E}}\bigg [ \int _0^T (Y_t^{i} + c_t^{i,+})^- dt \bigg ]&= - \lim _n {\mathbb {E}}\bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+}) \mathbbm {1} _{\{ Y_t^{i} + c_t^{i,+} \le 0\} } dt \bigg ] \\&\le \lim _n {\mathbb {E}}\bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+})^- \mathbbm {1} _{\{ Y_t^{i} + c_t^{i,+} \le 0\} } dt \bigg ] \\&\le \lim _n {\mathbb {E}}\bigg [ \int _0^T (Y_t^{i,n} + c_t^{i,+})^- dt \bigg ] = 0. \end{aligned}$$

Similarly, we find

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T ( - Y_t^{i} + c_t^{i,-})^- dt \bigg ] = 0, \end{aligned}$$

completing the proof of Claim 1.

Proof of 2. Using (6) (see the proof of Lemma 2.1) with \(\eta = \eta ^{-i,n}\) and \((\bar{\xi }, \bar{\zeta }) = (0,0)\), denoting by \(\tilde{X}^{i,n}\) the solution to (18), we obtain

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i,n}_t d (\xi ^{i,n} - \zeta ^{i,n}) _t \bigg ]&= 2 {\mathbb {E}}\bigg [ \int _0^T \Big ( \sum _{j=1}^N q_t^{i,j;i} X_t^{j,n}\Big )(X^{i,n}_t -\tilde{X} ^{i,n}_t) dt \bigg ] \\&\quad + 2 {\mathbb {E}}\bigg [ \Big ( \sum _{j=1}^N q_T^{i,j;i} X_T^{j,n}\Big )(X^{i,n}_T -\tilde{X} ^{i,n}_T)\bigg ].\nonumber \end{aligned}$$

Thus, summing over \(i=1,...,N\), for \(\hat{Q}\) as in (9) and \(\tilde{X} ^n:=(\tilde{X}^{1,n},...,\tilde{X} ^{N,n})\) we find

$$\begin{aligned} \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i,n}_t d (\xi ^{i,n} - \zeta ^{i,n}) _t \bigg ]&= {\mathbb {E}}\bigg [ \int _0^T ( X^n_t \hat{Q} _t X^n_t -\tilde{X}^n_t \hat{Q} _t X _t^n) dt \bigg ] \\ \nonumber&\quad + {\mathbb {E}}\big [ X^n_T \hat{Q} _T X^n_T -\tilde{X}^n_T \hat{Q} _T X _T^n \big ]. \nonumber \end{aligned}$$
(25)

Similarly, for \(\tilde{X} = (\tilde{X} ^1,..., \tilde{X} ^N )\), with \(\tilde{X} ^j\) solution to the SDE (1) for \(j=1,...,N\), we obtain the identity

$$\begin{aligned} \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i}_t d (\xi ^{i} - \zeta ^{i}) _t \bigg ]&= {\mathbb {E}}\bigg [ \int _0^T ( X_t \hat{Q} _t X_t - \tilde{X}_t \hat{Q} _t X _t) dt \bigg ] \\ \nonumber&\quad + {\mathbb {E}}\big [ X_T \hat{Q} _T X_T - \tilde{X}_T \hat{Q} _T X _T \big ].\nonumber \end{aligned}$$
(26)

From (11), we notice that

$$\begin{aligned} \lim _n \mathbb E \bigg [ \int _0^T |\tilde{X}^n_t -\tilde{X} _t|^2 dt \bigg ] = 0. \end{aligned}$$

Therefore, since \(X^n \rightarrow X\) weakly (see Proposition 3.5), we have

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T \tilde{X}_t \hat{Q} _t X _t dt + \tilde{X}_T \hat{Q} _T X _T \bigg ] = \lim _n {\mathbb {E}}\bigg [ \int _0^T \tilde{X}_t^n \hat{Q} _t X_t^n dt +\tilde{X}_T^n \hat{Q} _T X _T^n \bigg ] \end{aligned}$$

and, by convexity of the map \(x \mapsto x \hat{Q} x\) (cf. Condition 4 in Assumption 3.1), we find

$$\begin{aligned} {\mathbb {E}}\bigg [ \int _0^T X_t \hat{Q} _t X_t dt + X_T \hat{Q} _T X _T \bigg ] \le \liminf _n {\mathbb {E}}\bigg [ \int _0^T X_t^n \hat{Q} _t X_t^n dt + X_T^n \hat{Q} _T X _T^n\bigg ]. \end{aligned}$$

Hence, by using the latter limits in (25) and (26), we obtain

$$\begin{aligned} \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i}_t d ( \xi ^{i} - \zeta ^{i} )_t \bigg ] \le \liminf _n\sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i,n}_t d( \xi ^{i,n} - \zeta ^{i,n}) _t \bigg ]. \end{aligned}$$
(27)

Next, for a suitable subsequence of indexes \((n_k)_k\), since the functions \(c^{i,+}\) and \(c^{i,-}\) are bounded and continuous, the limits at Point 2 in Proposition 3.5 give

$$\begin{aligned} \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} \Big ( c_t^{i,+} d \xi ^{i}_t + c_t^{i,-} d \zeta ^{i}_t \Big ) \bigg ] = \lim _m \frac{1}{m} \sum _{k=1}^m \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} \Big ( c_t^{i,+} d \xi ^{i, n_k}_t + c_t^{i,-} d \zeta ^{i,n_k}_t \Big ) \bigg ].\nonumber \\ \end{aligned}$$
(28)

Moreover, since the \(\liminf \) of the Cesàro means dominates the \(\liminf \) of the original sequence, (27) also yields

$$\begin{aligned} \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i}_t d ( \xi ^{i} - \zeta ^{i} )_t \bigg ] \le \liminf _m \frac{1}{m} \sum _{k=1}^m \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} Y^{i,n_k}_t d( \xi ^{i,n_k} - \zeta ^{i,n_k}) _t \bigg ]. \end{aligned}$$
(29)

Finally, by Claim 1, the integrals \(\int _{[0,T]} (Y^{i}_t + c_t^{i,+}) d \xi ^{i}_t \) and \( \int _{[0,T]} ( -Y^{i}_t + c_t^{i,-}) d \zeta ^{i}_t\) are well defined (possibly equal to \(+\infty \)) and, from (28) and (29), we conclude that

$$\begin{aligned} \sum _{i=1}^N&{\mathbb {E}}\bigg [ \int _{[0,T]} (Y^{i}_t + c_t^{i,+}) d \xi ^{i}_t + \int _{[0,T]} ( -Y^{i}_t + c_t^{i,-}) d \zeta ^{i}_t \bigg ] \\&\le \liminf _m \frac{1}{m} \sum _{k=1}^m \sum _{i=1}^N {\mathbb {E}}\bigg [ \int _{[0,T]} (Y^{i,n_k}_t + c_t^{i,+}) d \xi ^{i,n_k}_t + \int _{[0,T]} ( -Y^{i,n_k}_t + c_t^{i,-}) d \zeta ^{i,n_k}_t \bigg ] \\&= \liminf _m \frac{1}{m} \sum _{k=1}^m \sum _{i=1}^N \bigg (- n_k {\mathbb {E}}\bigg [ \int _0^T \Big ( (Y^{i,n_k}_t + c_t^{i,+})^- + ( -Y^{i,n_k}_t + c_t^{i,-})^- \Big ) dt \bigg ] \bigg ) \le 0, \end{aligned}$$

where we have used the necessary conditions in (17). The latter inequality, combined with Claim 1, in turn implies that

$$\begin{aligned} \int _{[0,T]} (Y^{i}_t + c_t^{i,+}) d \xi ^{i}_t = \int _{[0,T]} ( -Y^{i}_t + c_t^{i,-}) d \zeta ^{i}_t= 0, \end{aligned}$$

thus completing the proof of the proposition. \(\square \)

In order to conclude the proof of Theorem 3.2, it only remains to observe that, by Propositions 3.6 and 3.7, the constructed \(\eta \) satisfies the conditions of Theorem 2.2 and is therefore a Nash equilibrium.

4 An application to oligopoly investment games

We consider N firms competing in a market by producing and selling a certain good. The stochastic demand of such a good is modeled by the one-dimensional diffusion process

$$\begin{aligned} d X^0_t = \mu ^0(t,X^0_t) dt + \sigma ^0(t,X^0_t) dW^0_t, \quad X_0^0 =x^0_0 \in \mathbb {R}, \end{aligned}$$

which is driven by a one-dimensional Brownian motion \(W^0\) on some complete probability space \((\Omega , \mathcal F, \mathbb P)\). The functions \(\mu ^0, \sigma ^0:[0,T] \times \mathbb {R}\rightarrow \mathbb {R}\) are assumed to be Lipschitz continuous and the diffusive term \(\sigma ^0\) is assumed to satisfy the nondegeneracy condition

$$\begin{aligned} 0 < \underline{\sigma }\le \sigma ^0(t,x) \le \bar{\sigma }, \quad \text {for any }t \in [0,T], x\in \mathbb {R},\text { for some }\underline{\sigma }, \bar{\sigma }\in \mathbb {R}. \end{aligned}$$
(30)

Following the fluctuations of the demand \(X^0\), each company i can expand its capital stock \(X^i\) through an irreversible investment strategy \(\xi ^i\). Since \(\sigma ^0\) is nondegenerate, the filtration generated by \(X^0\) coincides with the filtration generated by \(W^0\). Let \(\mathbb F ^0 = ( {\mathcal F}_t^0)_t\) be the right-continuous extension of the filtration generated by \(W^0\), augmented by the \(\mathbb {P}\)-null sets. Strategies \(\xi ^i\) are thus \(\mathbb F ^0\)-adapted, nonnegative, nondecreasing, càdlàg processes with \({\mathbb {E}}[\xi _T^i ]< \infty \), and the capital stock of firm i evolves as

$$\begin{aligned} dX_t^i = - \delta ^i X_t^i dt + d \xi _t^i, \quad X_{0-}^i = x^i_0 \ge 0, \end{aligned}$$

where the parameter \(\delta ^i>0\) measures the natural deterioration of the capital. The production output of firm i is given by the multiple \(\alpha ^i X_t^i\), for some \(\alpha ^i >0\).
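The linear dynamics above admit the explicit solution \(X_t^i = e^{-\delta ^i t} \big ( x_0^i + \int _{[0,t]} e^{\delta ^i s} d \xi ^i_s \big )\), which is also exploited later in the proof of Theorem 4.1. The following minimal Python sketch (all parameter values and the lump-sum strategy are hypothetical, chosen only for illustration) compares a forward Euler discretization of the capital dynamics with this closed form:

```python
import numpy as np

# Forward Euler vs. the explicit solution X_t = e^{-delta t}(x0 + int e^{delta s} dxi_s)
# for the capital dynamics, under a hypothetical lump-sum investment strategy.
delta, x0, T, n_steps = 0.1, 1.0, 1.0, 10_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
jump_times, jumps = [0.25, 0.6], [0.5, 0.3]   # xi jumps by 0.5 at t=0.25, by 0.3 at t=0.6

# Forward Euler for dX = -delta*X dt + d xi
X = np.empty(n_steps + 1)
X[0] = x0
xi_prev = 0.0
for k in range(n_steps):
    xi_now = sum(j for s, j in zip(jump_times, jumps) if s <= t[k + 1])
    X[k + 1] = X[k] - delta * X[k] * dt + (xi_now - xi_prev)
    xi_prev = xi_now

# Closed form: each lump-sum j invested at time s contributes e^{-delta (t-s)} j
X_exact = np.exp(-delta * t) * (
    x0 + np.array([sum(np.exp(delta * s) * j
                       for s, j in zip(jump_times, jumps) if s <= u)
                   for u in t]))
err = np.max(np.abs(X - X_exact))
print(f"max discretization error: {err:.2e}")
```

The state path is càdlàg, jumping exactly by the invested lump sums, and the Euler scheme matches the closed form up to an \(O(\Delta t)\) error.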

Assuming a linear demand (as in Back and Paulsen 2009, at the end of Section 1), the price at time t of the good is given by \( X^0_t - \gamma \sum _{j=1}^N \alpha ^j X_t^j, \) for a parameter \(\gamma >0\). Hence, the profit from sales of company i is given by \( X_t^{i} \big ( X_t^0 - \gamma \sum _{j=1}^N \alpha ^j X_t^{j} \big ). \) Moreover, we assume that the cost faced by company i per unit of investment is \(c^i>0\), and that the company’s discount factor is \(\rho > 0\). Summarizing, each company aims at maximizing the net quadratic-singular profit functional

$$\begin{aligned} P^i(\xi ^i,\xi ^{-i}):= \mathbb {E} \bigg [ \int _0^T e^{-\rho t} \alpha ^i X_t^{i} \Big ( X_t^0 - \gamma \sum _{j=1}^N \alpha ^j X_t^{j} \Big ) dt - c^i \int _{[0,T]} e^{-\rho t} d\xi ^{i}_{t} \bigg ]. \end{aligned}$$
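To make the profit functional concrete, \(P^i\) can be estimated by Monte Carlo for fixed strategies. The sketch below (all parameter values are hypothetical; the demand is taken as an arithmetic Brownian motion, a special case of the model with constant \(\mu^0, \sigma^0\), and the strategies are constant-rate, hence absolutely continuous, investments) estimates \(P^1\) for two firms with \(\alpha ^i = 1\) and \(\rho = 0\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Monte Carlo estimate of P^1 (all parameter values and the
# constant-rate strategies below are hypothetical, chosen only for the demo).
N, T, n_steps, n_paths = 2, 1.0, 200, 20_000
dt = T / n_steps
gamma = 0.5
delta = np.array([0.1, 0.2])          # capital deterioration rates
c1 = 1.0                              # investment cost of firm 1
x0_cap = np.array([1.0, 1.0])         # initial capital stocks
mu0, sig0, x0_dem = 0.1, 0.3, 2.0     # demand: dX0 = mu0 dt + sig0 dW0
rate = np.array([0.2, 0.1])           # xi^i_t = rate_i * t (absolutely continuous)

# Demand paths (arithmetic Brownian motion)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X0 = np.concatenate(
    [np.full((n_paths, 1), x0_dem),
     x0_dem + np.cumsum(mu0 * dt + sig0 * dW, axis=1)], axis=1)

# Capital stocks are deterministic here: X^i solves x' = -delta_i x + rate_i
t = np.linspace(0.0, T, n_steps + 1)
Xcap = np.array([np.exp(-d * t) * x + r / d * (1.0 - np.exp(-d * t))
                 for d, x, r in zip(delta, x0_cap, rate)])

price = X0 - gamma * Xcap.sum(axis=0)            # broadcasts over paths
profit_flow = Xcap[0] * price                    # firm 1, alpha^1 = 1, rho = 0
P1 = np.mean(profit_flow[:, :-1].sum(axis=1) * dt) - c1 * rate[0] * T
print(f"estimated P^1: {P1:.3f}")
```

Since the capital paths are deterministic for these simple strategies, the only Monte Carlo noise comes from the demand; best responses, of course, require solving the singular control problem rather than fixing a rate.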

Before discussing the existence of Nash equilibria, a few observations are in order.

Remark 3

We point out that, in comparison to the setup in Sect. 2, the model described in this section is more general from two points of view:

  1.

    The uncertainty in the decisions of the players is driven by the presence of the extra noise \(W^0\). As a consequence, since the idiosyncratic noises \(W^i\) are set to zero, the equilibrium investment strategies are expected to be adapted only to the noise \(W^0\) (this fact will be proved in Step 2 of the proof of Theorem 4.1 below). Moreover, \(X^0\) enters the optimization problem of the players as a random unaffected parameter. Hence, even if the SDE for \(X^0\) is not linear, the optimization problems of the players remain convex, and the necessary and sufficient conditions for Nash equilibria as in Theorem 2.2 remain valid up to minimal adjustments: In particular, any \(\mathbb F^0\)-adapted Nash equilibrium solves the system (34) with the optimality conditions (35) and (36) below (notice that the conditional expectation of the adjoint process in (34) is taken with respect to \(\mathbb F ^0\)).

  2.

    The symmetry of \(Q^i\) and the non-degeneracy of \(\bar{Q}\) in Assumption 3.1 are not satisfied. Nevertheless, a tailor-made a priori estimate (as in Lemma 3.4) can be shown (see Step 1 in the proof of Theorem 4.1 below), which allows one to recover the convergence of the approximating Nash equilibria (as in Proposition 3.5).

Remark 4

The nondegeneracy condition (30) is in place merely for technical reasons: it is used to employ the results in Hamadene and Mannucci (2019) in order to construct Nash equilibria of the related Lipschitz games. A key example in which the nondegeneracy condition is satisfied, and which has already appeared in the literature (see, e.g., Aid et al. 2015; Bar-Ilan et al. 2002), is when the demand \(X^0\) is a mean-reverting process, following the SDE

$$\begin{aligned} d X^0_t = \mu ^0 (\theta ^0 - X^0_t) dt + \sigma ^0 dW^0_t, \quad X_0^0 =x^0_0 >0, \end{aligned}$$

for parameters \(\theta ^0\in \mathbb {R}, \, \mu ^0, \sigma ^0 > 0\). Mean-reverting dynamics find important applications in the energy and commodity markets (see, e.g., Benth et al. 2014 or Chapter 2 in Lutz 2009).
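This mean-reverting (Ornstein-Uhlenbeck) demand is straightforward to simulate; the sketch below (hypothetical parameter values) runs an Euler-Maruyama discretization and checks the simulated mean and variance at time \(T\) against the known closed-form moments of the OU process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of the mean-reverting demand
# dX0 = mu0*(theta0 - X0) dt + sig0 dW0  (parameter values are illustrative).
mu0, theta0, sig0, x0 = 2.0, 1.0, 0.5, 1.5
T, n_steps, n_paths = 1.0, 1_000, 50_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += mu0 * (theta0 - X) * dt + sig0 * np.sqrt(dt) * rng.normal(size=n_paths)

# Exact moments of the OU process at time T, for comparison
mean_T = theta0 + (x0 - theta0) * np.exp(-mu0 * T)
var_T = sig0**2 / (2.0 * mu0) * (1.0 - np.exp(-2.0 * mu0 * T))
print(f"simulated mean {X.mean():.3f} vs exact {mean_T:.3f}")
print(f"simulated var  {X.var():.3f} vs exact {var_T:.3f}")
```

Note that the constant \(\sigma^0 > 0\) trivially satisfies the nondegeneracy condition (30).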

4.1 Existence of Nash equilibria

Slightly adapting Theorem 3.2, we can show the existence of equilibrium investment strategies.

Theorem 4.1

There exists an \(\mathbb F ^0\)-adapted Nash equilibrium.

Proof

In order to simplify the notation, we take \(\alpha ^1=...=\alpha ^N=1\) and \(\rho =0\) (the proof in the general case is analogous). The rest of the proof is divided into two steps.

Step 1. We first give a sketch of how to construct a Nash equilibrium as in Theorem 3.2. Without loss of generality, we can assume the probability space \((\Omega , \mathcal F, \mathbb P)\) to be large enough to accommodate N independent Brownian motions \(W^i\), \(i=1,...,N\), which are independent of \(W^0\). Let \(\mathbb F = ( {\mathcal F}_t)_t\) be the right-continuous extension of the filtration generated by \((W^0,W^1,...,W^N)\), augmented by the \(\mathbb {P}\)-null sets.

We observe that the symmetry of \(Q^i\) and the non-degeneracy of \(\bar{Q}\) in Assumption 3.1 are not satisfied. However, despite the presence of the extra uncontrolled dynamics \(X^0\), Theorem 3.2 applies (with minimal adjustment) and provides the existence of a Nash equilibrium \(\xi = (\xi ^1,..., \xi ^N)\) which is \(\mathbb F\)-adapted, with \({\mathbb {E}}[ |\xi _T|^2 ] < \infty \). In particular, the main difference is in the estimates of Lemma 3.4, which can be recovered as follows. For \(n\in \mathbb N\), let \(\xi ^n\) be a Nash equilibrium of the related n-Lipschitz game,

and denote by \(X^n= (X^{1,n},...,X^{N,n})\) and \(\tilde{X}^n =(\tilde{X} ^{1,n},..., \tilde{X} ^{N,n}) \) the solutions to the controlled and uncontrolled equations

$$\begin{aligned} d X _t^{i,n}&= - \delta ^i X _t^{i,n} dt + \frac{1}{n} dW^i_t +d \xi ^{i,n}_t, \quad X _{0 -}^{i,n} = x^i_0, \\ d \tilde{X} _t^{i,n}&= - \delta ^i \tilde{X} _t^{i,n} dt + \frac{1}{n} dW^i_t , \quad \tilde{X} _{0}^{i,n} = x^i_0, \end{aligned}$$

respectively, for \( i =1,...,N\). By optimality, for \(i=1,...,N\) we have \( P^i(\xi ^{i,n},\xi ^{-i,n}) \ge P^i(0,\xi ^{-i,n})\), which implies that

$$\begin{aligned} \mathbb {E} \bigg [ \int _0^T X_t^{i,n} \Big ( \gamma \sum _{j=1}^N X_t^{j,n} - X_t^{0} \Big ) dt \bigg ] \le \mathbb {E} \bigg [ \int _0^T \tilde{X}_t^{i,n} \Big ( \gamma \tilde{X}_t^{i,n}+ \gamma \sum _{j\ne i} X_t^{{j,n}} - X_t^{0} \Big ) dt \bigg ]. \end{aligned}$$

Thus, summing over i, we obtain

$$\begin{aligned} \mathbb {E} \bigg [ \int _0^T \Big ( \sum _{j=1}^N X_t^{j,n} \Big ) ^2 dt \bigg ] \le C \bigg ( 1+\mathbb {E} \bigg [ \int _0^T \sum _{i=1}^N \Big ( X_t^{0} X_t^{i,n} + \gamma \tilde{X}_t^{i,n} \sum _{j\ne i} X_t^{{j,n}} \Big ) dt \bigg ] \bigg ).\nonumber \\ \end{aligned}$$
(31)

Moreover, solving the equation for \(X^{j,n}\) explicitly, we have \(X^{j,n}_t = e^{-\delta ^j t} \big ( x_0^j + \int _0^t e^{\delta ^j s} d \xi ^{j,n}_s + \frac{1}{n} \int _0^t e^{\delta ^j s} dW^j_s \big )\), so that

$$\begin{aligned} X^{j,n}_t - M_t^{j,n}:= X^{j,n}_t - \frac{1}{n} \int _0^t e^{\delta ^j (s-t)} dW^j_s = e^{-\delta ^j t} \Big ( x_0^j + \int _0^t e^{\delta ^j s} d \xi ^{j,n}_s \Big ) \ge 0. \end{aligned}$$

Thus, we have \( 0 \le X^{i,n}_t - M_t^{i,n} \le \sum _{j=1}^N (X^{j,n}_t - M_t^{j,n})\). This implies that

$$\begin{aligned} (X^{i,n}_t)^2 + (M_t^{i,n} )^2 -2 X^{i,n}_t M_t^{i,n}&= (X^{i,n}_t - M_t^{i,n} )^2 \\ {}&\le \Big ( \sum _{j=1}^N (X^{j,n}_t - M_t^{j,n}) \Big )^2 \\&\le 2 \Big ( \Big ( \sum _{j=1}^N X^{j,n}_t \Big )^2 + \Big ( \sum _{j=1}^N M_t^{j,n} \Big )^2 \Big ), \end{aligned}$$

which, summing over i, gives

$$\begin{aligned} \mathbb {E} \bigg [ \int _0^T |X_t^{n}|^2 dt \bigg ] \le C \bigg (1 + \mathbb {E} \bigg [ \int _0^T \bigg ( \Big ( \sum _{j=1}^N X_t^{j,n} \Big ) ^2 + 2 \sum _{i=1}^N X^{i,n}_t M_t^{i,n} \bigg ) dt \bigg ] \bigg ). \end{aligned}$$

Combining the latter estimate with (31), thanks to Hölder's inequality we find

$$\begin{aligned} \mathbb {E} \bigg [ \int _0^T |X_t^{n}|^2 dt \bigg ] \le C \bigg ( 1+ \bigg ( \mathbb {E} \bigg [ \int _0^T |X_t^{n}|^2 dt \bigg ]\bigg )^{1/2} \bigg ). \end{aligned}$$

Hence, we conclude that \(\sup _n \mathbb {E} [ \int _0^T |X_t^{n}|^2 dt ] < \infty \) and (as in Lemma 3.4) that \(\sup _n \mathbb {E} [ |\xi _T^n| ] < \infty \).
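The last conclusion rests on the elementary fact that an inequality of the form \(y \le C(1+\sqrt{y})\) forces a uniform bound on \(y\); for completeness, one way to spell this out is:

```latex
% Setting y_n := E[ \int_0^T |X^n_t|^2 dt ], the displayed inequality reads
% y_n \le C (1 + y_n^{1/2}); completing the square in y_n^{1/2} gives
\begin{aligned}
y_n - C\, y_n^{1/2} - C \le 0
\quad \Longrightarrow \quad
\Big( y_n^{1/2} - \frac{C}{2} \Big)^2 \le C + \frac{C^2}{4}
\quad \Longrightarrow \quad
y_n \le \bigg( \frac{C}{2} + \sqrt{C + \frac{C^2}{4}} \bigg)^{2},
\end{aligned}
```

so that the bound depends only on \(C\) and not on \(n\).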

We also underline that there is a difference in the optimality conditions of Theorem 2.2. Indeed, if the process \((X,Y) = (X^1,...,X^N,Y^1,...,Y^N) \in \mathbb H^{2,2N }\) is associated to a Nash equilibrium \(\xi \), then it solves the FBSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} X_t^i = x_0^i -\delta ^i \int _0^t X_s^ids + \xi ^i_t, \quad i=1,...,N, \\ Y_t^{i} = {\mathbb {E}}\big [ \int _t^T e^{-\delta ^i (s-t)} (\gamma \sum _{j=1}^N X_s^j +\gamma X_s^i - X_s^0 ) ds \big | \mathcal F _t \big ], \quad i=1,...,N, \end{array}\right. } \end{aligned}$$

and, by the analogue of Theorem 2.2 in the current setting, the equilibrium \(\xi \) satisfies the conditions:

$$\begin{aligned}&Y ^{i}_t + c^{i} \ge 0,\text { for any }t \in [0,T], \ \mathbb P\text {-a.s.;} \end{aligned}$$
(32)
$$\begin{aligned}&\int _{[0,T]} ( Y ^{i}_t + c^{i}) d \xi ^i_t = 0, \mathbb P \text {-a.s.} \end{aligned}$$
(33)

Step 2. We now construct an \(\mathbb F ^0\)-adapted equilibrium. Set \(\bar{\xi }: = (\bar{\xi }^1,..., \bar{\xi }^N)\), where

$$\begin{aligned} \bar{\xi }^i: = ({\mathbb {E}}[ \xi _t^i |{\mathcal F}_t^0 ])_t, \quad \text {for}\quad i=1,...,N. \end{aligned}$$

Clearly, the processes \(\bar{\xi }^i\) are \(\mathbb F ^0\)-adapted and, since \({\mathbb {E}}[ \xi _t^i |{\mathcal F}_t^0 ] = {\mathbb {E}}[ \xi _t^i |{\mathcal F}_T^0 ] \ \mathbb P\)-a.s., we see that \(\bar{\xi }^i\) are nondecreasing and càdlàg. Next, set

$$\begin{aligned} \bar{X} ^i: = ({\mathbb {E}}[ X _t^i |{\mathcal F}_t^0 ])_t \quad \text {and} \quad \bar{Y} ^i: = ({\mathbb {E}}[ Y _t^i |{\mathcal F}_t^0 ])_t, \quad \text {for} \quad i=1,...,N. \end{aligned}$$

With elementary arguments, we find

$$\begin{aligned} {\left\{ \begin{array}{ll} \bar{X} _t^i = x_0^i -\delta ^i \int _0^t \bar{X} _s^ids + \bar{\xi }^i_t, \quad i=1,...,N, \\ \bar{Y} _t^{i} = {\mathbb {E}}\big [ \int _t^T e^{-\delta ^i (s-t)} (\gamma \sum _{j=1}^N \bar{X}_s^j +\gamma \bar{X}_s^i - X_s^0 ) ds \big | \mathcal F _t^0 \big ], \quad i=1,...,N. \end{array}\right. } \end{aligned}$$
(34)

Next, we want to show that \(\bar{\xi }\) is a Nash equilibrium by checking the sufficient conditions of Theorem 2.2. By taking the conditional expectation in (32), we find

$$\begin{aligned} \bar{Y} ^{i}_t + c^{i} \ge 0,\text { for any }t \in [0,T], \ \mathbb P\text {-a.s.} \end{aligned}$$
(35)

Also, similarly to (6), denoting by \(\tilde{X}^i \) the solution to the uncontrolled equation \(d \tilde{X} _t^i = - \delta ^i \tilde{X} _t^i dt, \ \tilde{X} _{0}^i = x^i_0\), we have

$$\begin{aligned} \mathbb E \bigg [ \int _{[0,T]} ( \bar{Y} ^{i}_t + c^{i}) d \bar{\xi }^i_t \bigg ]&= {\mathbb {E}}\bigg [ \int _0^T \Big (\gamma \sum _{j=1}^N \bar{X}_t^j +\gamma \bar{X}_t^i - X_t^0 \Big ) ( \bar{X} ^i_t - \tilde{X}^i_t) dt + c^{i} \bar{\xi }^i_T \bigg ]. \end{aligned}$$

Moreover, noticing that \(X^0\) is \(\mathbb F^0\)-adapted and that \(\tilde{X}^i\) is deterministic, summing over i and using Jensen's inequality for conditional expectations we obtain

$$\begin{aligned} \sum _{i=1}^N \mathbb E \bigg [ \int _{[0,T]} ( \bar{Y} ^{i}_t + c^{i}) d \bar{\xi }^i_t \bigg ]&= {\mathbb {E}}\bigg [ \int _0^T \bigg ( \gamma \Big (\sum _{i=1}^N \bar{X}_t^i \Big ) ^2 + \gamma \sum _{i=1}^N \big ( \bar{X}_t^i \big )^2 - X_t^0 \sum _{i=1}^N \bar{X}_t^i \bigg ) dt \\&\quad - \sum _{i=1}^N \int _0^T \Big (\gamma \sum _{j=1}^N \bar{X}_t^j + \gamma \bar{X}_t^i - X_t^0 \Big ) \tilde{X}^i_t dt + \sum _{i=1}^N c^{i} \bar{\xi }^i_T \bigg ] \\&\le {\mathbb {E}}\bigg [ \int _0^T \bigg ( \gamma \Big (\sum _{i=1}^N X_t^i \Big ) ^2 + \gamma \sum _{i=1}^N \big ( X_t^i \big )^2 - X_t^0 \sum _{i=1}^N X_t^i \bigg ) dt \\&\quad - \sum _{i=1}^N \int _0^T \Big (\gamma \sum _{j=1}^N X_t^j + \gamma X_t^i - X_t^0 \Big ) \tilde{X}^i_t dt + \sum _{i=1}^N c^{i} \xi ^i_T \bigg ] \\&= \sum _{i=1}^N \mathbb E \bigg [ \int _{[0,T]} ( Y ^{i}_t + c^{i}) d \xi ^i_t \bigg ]. \end{aligned}$$

Thus, using (6) and (33), we get

$$\begin{aligned} \sum _{i=1}^N \mathbb E \bigg [ \int _{[0,T]} ( \bar{Y} ^{i}_t + c^{i}) d \bar{\xi }^i_t \bigg ] \le \sum _{i=1}^N \mathbb E \bigg [ \int _{[0,T]} ( Y ^{i}_t + c^{i}) d \xi ^i_t \bigg ] =0, \end{aligned}$$

which, together with (35), in turn implies that

$$\begin{aligned} \int _{[0,T]} ( \bar{Y} ^{i}_t + c^{i}) d \bar{\xi }^i_t = 0, \mathbb P \text {-a.s.} \end{aligned}$$
(36)

Finally, we can invoke Theorem 2.2 to conclude that \(\bar{\xi }\) is a Nash equilibrium. \(\square \)

4.2 A comparison between linear costs and quadratic costs

In this subsection we compare the equilibrium investment strategies found in Theorem 4.1 (hence for an LQS game) with the equilibrium strategies arising when the linear cost is replaced by a quadratic cost (i.e., for the LQ game). In order to simplify the presentation, we focus on the case \(\alpha ^1=...=\alpha ^N=1\) and \(\rho =0\).

With the same notation as in the beginning of Sect. 4, consider the game in which each player is allowed to choose a square integrable \(\mathbb F ^0\)-adapted process \(u^i: \Omega \times [0,T] \rightarrow [0,\infty )\) to expand its capital stock

$$\begin{aligned} dX_t^i = ( - \delta ^i X_t^i + u^i_t )dt, \quad X_{0-}^i = x^i_0 \ge 0, \end{aligned}$$

in order to maximize, given strategies \(u^{-i}:=(u^j)_{j\ne i}\) of its opponents, the net quadratic profit functional

$$\begin{aligned} P_2^i(u^i,u^{-i}):= \mathbb {E} \bigg [ \int _0^T \Big ( X_t^{i} \Big ( X_t^0 - \gamma \sum _{j=1}^N X_t^{j} \Big ) - c^i |u^i_t|^2 \Big ) dt \bigg ]. \end{aligned}$$

Notice that the cost term in \(P_2^i\) is quadratic in the size of the investment rate \(u^i\), so that this model is an LQ game.

We next take a closer look at the related FBSDEs. On the one hand, using the stochastic maximum principle (see Chapter 5 in Carmona (2016)), the Nash equilibria \(\hat{u} = (\hat{u}^1,...,\hat{u} ^N )\) of the LQ game are given in terms of the solutions to the FBSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \hat{X} _t^i = x_0^i -\delta ^i \int _0^t \hat{X} _s^ids + \int _0^t \hat{u}^i_s ds, \quad i=1,...,N,\\ \hat{Y} _t^{i} = {\mathbb {E}}\big [ \int _t^T e^{-\delta ^i (s-t)} (\gamma \sum _{j=1}^N \hat{X}_s^j +\gamma \hat{X}_s^i - X_s^0 ) ds \big | \mathcal F _t^0 \big ], \quad i=1,...,N,\\ \hat{u} ^i_t = \big ( - \frac{\hat{Y} ^i_t}{2c^i} \big ) \vee 0, \quad i=1,...,N, \end{array}\right. } \end{aligned}$$
(37)

where the last equation represents the optimality condition for the equilibrium strategies \((\hat{u} ^1,..., \hat{u} ^N )\). On the other hand, from the proof of Theorem 4.1, we know that the Nash equilibrium \((\bar{\xi }^1,...,\bar{\xi }^N)\) of the LQS model is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \bar{X} _t^i = x_0^i -\delta ^i \int _0^t \bar{X} _s^ids + \bar{\xi }^i_t,\quad i=1,...,N, \\ \bar{Y} _t^{i} = {\mathbb {E}}\big [ \int _t^T e^{-\delta ^i (s-t)} (\gamma \sum _{j=1}^N \bar{X}_s^j +\gamma \bar{X}_s^i - X_s^0 ) ds \big | \mathcal F _t^0 \big ], \quad i=1,...,N,\\ \bar{Y} ^{i}_t + c^{i} \ge 0,\text { for any }t \in [0,T], \int _{[0,T]} ( \bar{Y} ^{i}_t + c^{i}) d \bar{\xi }^i_t = 0, \quad i=1,...,N. \end{array}\right. } \end{aligned}$$
(38)

We are now ready to discuss the differences and similarities between these two models. While the forward and the backward components of the systems (37) and (38) are essentially the same, the nature of the equilibria differs due to the optimality conditions. In particular:

  1.

    In both models, investing is never optimal as long as the adjoint process is nonnegative. However, while in the LQ game players start investing at a rate proportional to \(-\hat{Y} ^i\) as soon as \(\hat{Y} ^i <0\), in the LQS game players invest only when \(\bar{Y} ^i = -c^i\).

  2.

    In the LQ model, players invest at finite rates \(\hat{u} ^i\). Instead, the equilibrium investment strategy \(\bar{\xi }^i\) in the LQS game is typically singular with respect to the Lebesgue measure, having support in the set \(\{ s \in [0,T] | \bar{Y} _s^i = -c^i \}\).
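These two optimality conditions can also be contrasted numerically. The sketch below (hypothetical parameters; \(N=1\) and a constant deterministic demand \(X^0 \equiv 2\), so that the conditional expectations disappear) solves the LQ system (37) by Picard iteration and checks that the equilibrium rate \(\hat u_t = \max(-\hat Y_t/(2c), 0)\) is positive exactly on the set where \(\hat Y_t < 0\), in line with point 1 above:

```python
import numpy as np

# Picard iteration for the LQ system (37) with N = 1 and deterministic demand
# X^0 = 2 (all parameter values are illustrative; chosen so the map contracts).
delta, gamma, c, x0_dem, T, n = 0.5, 0.5, 1.0, 2.0, 1.0, 2_000
dt = T / n

u = np.zeros(n + 1)
for _ in range(200):
    # forward: X' = -delta*X + u, X_0 = 0  (explicit Euler)
    X = np.zeros(n + 1)
    for k in range(n):
        X[k + 1] = X[k] + (-delta * X[k] + u[k]) * dt
    # backward: Y_t = int_t^T e^{-delta (s-t)} (2*gamma*X_s - x0_dem) ds,
    # i.e. Y' = delta*Y - (2*gamma*X - x0_dem), with Y_T = 0
    Y = np.zeros(n + 1)
    for k in range(n - 1, -1, -1):
        Y[k] = Y[k + 1] - (delta * Y[k + 1] - (2 * gamma * X[k + 1] - x0_dem)) * dt
    u_new = np.maximum(-Y / (2 * c), 0.0)   # optimality condition in (37)
    resid = np.max(np.abs(u_new - u))
    u = u_new

print(f"Picard residual: {resid:.2e}, max investment rate: {u.max():.3f}")
```

By construction the rate vanishes wherever \(\hat Y \ge 0\); reproducing the singular equilibrium of (38), by contrast, would require tracking the reflection of \(\bar Y\) at the level \(-c\), which is beyond this simple fixed-point scheme.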