1 Introduction

The theory of mean field games (MFGs for short) is concerned with the study of asymptotic Nash equilibria for stochastic differential games with an infinite number of players subject to a mean field interaction (i.e. each player is affected by the other players only through the empirical distribution of the system).

A Nash equilibrium constitutes a consensus (or compromise) between all the players from which no player has a unilateral incentive to deviate.

As the number of players (N) of the stochastic differential game increases, finding Nash equilibria becomes an increasingly complex problem, as it typically involves a system of N PDEs set on a space of dimension N. The motivation for studying the asymptotic regime is to reduce the underlying complexity. At least in the case where the players are driven by independent noises, the hope is to benefit from the theory of propagation of chaos for mean field interacting systems (see for example [12]) in order to reduce the analysis of the whole system to the analysis of a single representative player.

In MFGs, the representative player aims at minimizing a cost functional while interacting with an environment described by a flow of distributions. Finding Nash equilibria thus consists in finding optimal states whose flow of marginal distributions matches exactly the flow of distributions describing the environment. This is a constraint of McKean–Vlasov type.

MFGs were introduced independently and simultaneously by Lasry and Lions [8] and by Caines, Huang and Malhamé [7] (who used the name Nash Certainty Equivalence). We refer to the notes written by Cardaliaguet for a very good introduction to the subject [1]. Carmona and Delarue studied MFGs with a probabilistic approach [2, 3]. Many other authors have contributed to the rapid development of the theory. Under convexity conditions on the cost functional, existence of Nash equilibria has been proved in the above works. Further monotonicity conditions introduced by Lasry and Lions provide uniqueness of the Nash equilibrium.

In this note, we investigate a class of linear-quadratic mean field games (LQ-MFGs) in which the representative player at equilibrium interacts with the mean of its distribution.

Inspired by earlier works in that direction, we suppose further that, in addition to the independent noises, the N players in the finite game are also subject to a common (or systemic) noise, a modelling choice motivated by practical applications. For example, financial market models often include a common market noise affecting all agents. Mean field games with common noise are also related to mean field games with a major agent, as introduced by Huang et al. [6, 10]. Carmona and Zhu provided a probabilistic approach to MFGs with a major agent [16].

We proceed to find Nash equilibria through Carmona and Delarue’s scheme, based on the theory of forward–backward stochastic differential equations (FBSDE for short) of the McKean–Vlasov type. The major change is that, due to the presence of common noise, the representative player at equilibrium feels the mean field interaction through its conditional expectation given the common noise. The environment is thus described by a stochastic process whose randomness comes only from the common noise.

The strategy is to characterize the environment as the forward component of an auxiliary FBSDE driven by the common noise only. Thanks to the common noise, this FBSDE is non-degenerate and thus satisfies an existence and uniqueness theorem proved by Delarue in [4]. This establishes the existence and uniqueness of a Nash equilibrium for this class of LQ-MFGs.

Afterwards, we present a counter-example to uniqueness of Nash equilibria for a game in this class of LQ-MFGs in the absence of common noise. This provides a concrete example in which common noise restores uniqueness. Several situations outside of the MFGs framework in which noise restores uniqueness are presented in the monograph by Flandoli (see [5]). To the best of the author's knowledge, the example proposed in this paper is the first one in the literature for MFGs.

For expository purposes the work presented here involves one-dimensional equations with prescribed coefficients, but the results remain valid for higher dimensions.

2 A Class of Linear-Quadratic N-Player Games

We consider stochastic differential games with fixed terminal time \(T>0\) and \(N \in \mathbb {N}\) players. Let \(B = (B_t)_{t\in [0,T]}, (W^i_t)_{t \in [0,T]}, i =1,\ldots ,N\) be \(N+1\) independent one-dimensional Brownian motions defined on a complete filtered probability space \((\Omega , { \hat{\mathcal {F}}, (\hat{\mathcal {F}_t})_{t\in [0,T]} }, \mathbb {P})\) satisfying the usual conditions. Let \(\psi ^i, i=1,\ldots ,N \) be independent and identically distributed \(\hat{\mathcal {F}_0}\)-measurable random variables taking values in \(\mathbb {R}\) and independent of all Brownian motions.

Let \(\sigma ,\sigma _0 \) be non-negative constants, \(c \in \mathbb {R}\), and \(b, f , g:\mathbb {R} \rightarrow \mathbb {R}\) be given Lipschitz continuous and bounded functions.

We consider the following linear-quadratic stochastic differential game with N players:

For all \(i=1,\ldots ,N\), the ith player’s state process during the game is given by \((X^i_t)_{t \in [0,T]}\) and takes values in \(\mathbb {R}\). We consider a mean field interaction through the average of all players’ states, namely the empirical mean:

$$\begin{aligned} \mu ^N_t = \frac{1}{N}\sum _{j=1}^{N} X^j_t, \quad \forall t \in [0,T]. \end{aligned}$$

Each player has the cost function and stochastic dynamics below.

$$\begin{aligned} J(\alpha ^1,\ldots ,\alpha ^i,\ldots ,\alpha ^N)&:= \mathbb {E} \Bigg [ \int _0^T \frac{1}{2} \bigg [( \alpha ^i_t)^2 + \bigg ( f ( \mu ^N_t ) + X^i_t \bigg )^2 \bigg ] \hbox {d}t \nonumber \\&\quad + \frac{1}{2} \bigg ( X^i_T + g ( \mu ^N_T ) \bigg )^2 \Bigg ]. \end{aligned}$$
(1)
$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X^i_t = [ {c} X^i_t + \alpha ^i_t + b( \mu ^N_t ) ] \hbox {d}t + \sigma \hbox {d}W^i_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ X^i_0 = \psi ^i. \end{array}\right. } \end{aligned}$$
(2)

The cost function of each player depends on the strategies of the other players through the mean field process \((\mu ^N_t)_{t\in [0,T]}\). Each player controls its state process by choosing a control process \(\alpha ^i = (\alpha ^i_t)_{t \in [0,T]} \in {\hat{\mathcal {H}^2}}\), the set of \({(\hat{\mathcal {F}_t})_{t \in [0,T]}}\)-progressively measurable processes satisfying

$$\begin{aligned} \mathbb {E} \bigg [ \int _0^T |\alpha _s|^2 \hbox {d}s \bigg ] < \infty . \end{aligned}$$

When \(\sigma _0 > 0\), B enters the state dynamics of all the players, whose states are therefore dependent. We say that we are in the presence of common noise. When \(\sigma _0 = 0\), B does not enter any player’s state dynamics and the players are independent. We say that we are in the absence of common noise. We call B the common noise and \(\sigma _0\) its intensity. In both cases, the players are exchangeable and we can study the asymptotic regime of this game.

Finding Nash equilibria consists in finding sets of (consensual) controls between the players that minimize the cost functional of each player when all the other players use the consensual controls. This is a complex problem for large N. The strategy proposed by MFGs theory to reduce the complexity is to find Nash equilibria for the asymptotic regime of the game (‘\(N = \infty \)’) and use them as approximate Nash equilibria for the N-player LQ-Games.

2.1 The Asymptotic Regime: A Class of LQ-MFGs

Taking the limit as N tends to infinity in the above class of LQ-Games with players at equilibrium yields a class of LQ-MFGs for which the representative player’s state process denoted by \((X_t)_{t \in [0,T]}\) (taking values in \(\mathbb {R}\)) interacts with its expectation (in the absence of common noise) or with its conditional expectation given B (in the presence of common noise). This is possible thanks to a propagation of chaos property.

We now consider two Brownian motions \( B = (B_t)_{t \in [0,T]}, (W_t)_{t \in [0,T]}\) defined on a complete filtered probability space \((\Omega , \mathcal {F},(\mathcal {F}_t)_{t\in [0,T]}, \mathbb {P})\) satisfying the usual conditions. Let the representative player’s initial state be given by \(\psi \in \mathcal {L}^2_{\mathcal {F}_0}\), the set of square integrable, \(\mathcal {F}_0\)-measurable random variables. We suppose that the filtration \((\mathcal {F}_t)_{t\in [0,T]}\) corresponds to the natural filtration generated by \({\psi , W, B}\) augmented with \(\mathbb {P}\)-null sets. Let \((\mathcal {F}^B_t)_{t\in [0,T]}\) be the filtration generated by B only and augmented with \(\mathbb {P}\)-null sets. Also, let \(\mathcal {H}^2\) be the set of \((\mathcal {F}_t)_{t \in [0,T]}\)-progressively measurable processes satisfying \( \mathbb {E} \bigg [ \int _0^T|\alpha _s|^2 \hbox {d}s \bigg ] < \infty .\)

Finding Nash equilibrium for this class of LQ-MFGs is possible through the scheme proposed by Carmona and Delarue [2]. In our situation, the scheme reads as follows:

Scheme 1

(MFGs-solution scheme)

  1.

    (Mean field Input) Consider a continuous \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) taking values in \(\mathbb {R}\). This process is meant to be the representative player’s flow of conditional expectations given B at equilibrium.

  2.

    (Cost Minimization) Solve the following stochastic optimal control problem for the representative player: find \(\alpha ^{*} \in \mathcal {H}^2\) satisfying

    $$\begin{aligned} J(\alpha ^*) = \min _{\alpha \in \mathcal {H}^2} J(\alpha )&:= \min _{\alpha \in \mathcal {H}^2} \mathbb {E} \Bigg [ \int _0^T \frac{1}{2} [\alpha _t^2 + (f(\mu _t) + X_t)^2 ] \hbox {d}t \nonumber \\&\quad + \frac{1}{2}(X_T + g(\mu _T))^2 \Bigg ] \end{aligned}$$
    (3)

    under the stochastic dynamics:

    $$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [ {c}X_t + \alpha _t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \, \forall t \in [0,T] \\ X_0 = \psi . \end{array}\right. } \end{aligned}$$
    (4)
  3.

    (McKean–Vlasov constraint) Find \((\mu _t)_{t \in [0,T]}\) such that,

    $$\begin{aligned} \forall t \in [0,T], \quad \mu _t = \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_t]. \end{aligned}$$

Remark 1

It is possible to show that for all \(t \in [0,T]\), \(\mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_t] = \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_T ]\). Indeed, let \(\mathcal {F}^B_{t,T}\) denote the \(\sigma \)-algebra generated by the increments of B on \((t,T]\), augmented with \(\mathbb {P}\)-null sets. Then, for all \(t \in [0,T]\), \(\mathcal {F}^B_T = \mathcal {F}^B_t \vee \mathcal {F}^B_{t,T}\), and \(\mathcal {F}^B_t\) and \(\mathcal {F}^B_{t,T}\) are independent. Since, for all \(t \in [0,T]\), \(X^{\alpha ^*}_t\) is independent of \(\mathcal {F}^B_{t,T}\), it follows that \( \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_t] = \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_t \vee \mathcal {F}^B_{t,T} ] = \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_T].\)

With this observation, the McKean–Vlasov constraint now reads: Find \((\mu _t)_{t \in [0,T]}\) such that

$$\begin{aligned} \forall t \in [0,T], \, \mu _t = \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_T]. \end{aligned}$$
(5)

Moreover, since the map \(t \mapsto X^{\alpha ^*}_t\) is continuous, one can show that there exists a continuous version of the map \( t \mapsto \mathbb {E}[X^{\alpha ^*}_t| \mathcal {F}^B_T ].\)

3 Solvability of Scheme 1

3.1 Stochastic Maximum Principle

We solve the Problem (3)–(4) using Pontryagin’s Stochastic Maximum Principle which yields a stochastic Hamiltonian system. For a review of this principle, see, for example, [11] and [15].

Definition 1

The Problem (3)–(4) admits the following Hamiltonian,

$$\begin{aligned} H(t,a,x,y,u)= y[{c}x + a + b(u)] + \frac{1}{2} a^2 + \frac{1}{2}(x + f(u))^2 \end{aligned}$$
(6)

for all \( t \in [0,T], a , x, y, u \in \mathbb {R}\).

Proposition 1

Suppose that we are given a continuous \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) taking values in \(\mathbb {R}\). Let H be the Hamiltonian for the Problem (3)–(4) and \(\alpha ^* = (\alpha ^*_t)_{t \in [0,T]} \in \mathcal {H}^2\). Then \((\alpha ^*_t)_{t \in [0,T]}\) is a solution to the Problem (3)–(4) if and only if there exists \((X_t, Y_t, Z_t, Z_t^0)_{t \in [0,T]}\) an adapted solution to the adjoint FBSDE:

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = \partial _y H(t, \alpha ^*_t , X_t, Y_t, \mu _t) \hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}Y_t = - \partial _x H(t, \alpha ^*_t , X_t, Y_t, \mu _t) \hbox {d}t + Z_t \hbox {d}W_t + Z_t^0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ X_0 = \psi , \quad Y_T = X_T + g(\mu _T). \end{array}\right. } \end{aligned}$$
(7)

subject to,

$$\begin{aligned} H(t, \alpha ^*_t, X_t, Y_t, \mu _t) = \min _{a \in \mathbb {R}} H(t, a, X_t, Y_t, \mu _t), \, \forall t \in [0,T], \quad \mathbb {P}-a.s. \end{aligned}$$

Proof

The proposition follows from the Pontryagin stochastic maximum principle together with the fact that, for all \((t,y,u) \in [0,T]\times \mathbb {R} \times \mathbb {R}\), the map \((a,x) \mapsto H(t,a,x, y, u)\) is convex (see Theorem 5.4.6 in [11]). \(\square \)

Remark 2

  1.

    We say that \((X_t, Y_t, Z_t, Z_t^0)_{t \in [0,T]}\) is an adapted solution to the adjoint FBSDE if X, Y are \((\mathcal {F}_t)_{t\in [0,T]} \)-adapted processes and \(Z, Z^0\) are \((\mathcal {F}_t)_{t\in [0,T]} \)-progressively measurable processes satisfying \(\mathbb {E}[\sup _{t \in [0,T]}[|X_t|^2 + |Y_t|^2] + \int _0^T [|Z_t|^2+|Z^0_t|^2]\hbox {d}t] < \infty \) and satisfying the system (7) \(\mathbb {P}\)-almost surely.

  2.

    Observe that for all \((t,y,u) \in [0,T]\times \mathbb {R} \times \mathbb {R}\), the map \((a,x) \mapsto H(t,a,x, y, u)\) is strictly convex. Hence, for all \((t,x,y,u) \in [0,T]\times \mathbb {R} \times \mathbb {R} \times \mathbb {R}\) there is a unique \(a^* = a^*(t,x, y, u) \in \mathbb {R}\) such that

    $$\begin{aligned} H(t, a^*(t,x, y, u) , x, y, u) = \min _{a \in \mathbb {R}} H(t,a,x,y,u). \nonumber \end{aligned}$$

    Thanks to the strict convexity of H, we know that the zeros of \(\partial _a H\) are the minimizers of H. In our situation, the unique minimizer is given by \( a^*(t,x, y, u) = -y.\)

    Therefore, for all \(t \in [0,T], \alpha ^*_t\) in the previous proposition is uniquely defined as a function of \((t, X_t, Y_t, \mu _t)\), precisely:

    $$\begin{aligned} \forall t \in [0,T], \quad \alpha ^*_t = -Y_t . \end{aligned}$$
    (8)
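
For later use, note that the partial derivatives of the Hamiltonian (6) are given explicitly by

$$\begin{aligned} \partial _a H(t,a,x,y,u) = a + y, \quad \partial _x H(t,a,x,y,u) = x + f(u) + {c} y, \quad \partial _y H(t,a,x,y,u) = {c} x + a + b(u). \end{aligned}$$

Substituting the minimizer (8) into the adjoint FBSDE (7) therefore leads to the FBSDE stated in the next proposition.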

Proposition 2

Suppose that we are given a continuous \((\mathcal {F}^B_t)_{t\in [0,T]}\)-adapted process \((\mu _t)_{t \in [0,T]}\) taking values in \(\mathbb {R}\). The process \(\alpha ^* = - Y \in \mathcal {H}^2\) is a solution to the stochastic optimal control Problem (3)–(4) if and only if \((X_t,Y_t,Z_t,Z^0_t)_{t \in [0,T]}\) is an adapted solution to the FBSDE:

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t =[{c}X_t - Y_t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}Y_t = [ -X_t -{c} Y_t -f(\mu _t)]\hbox {d}t + Z_t \hbox {d}W_t + Z_t^0 \hbox {d}B_t, \quad \forall t \in [0,T]\\ X_0 = \psi , \quad Y_T = X_T +g(\mu _T). \end{array}\right. } \end{aligned}$$
(9)

Proof

The proposition follows immediately from Proposition 1 and Remark 2. \(\square \)

In order to solve Scheme 1, we have to find solutions to FBSDE (9), subject to the constraint that the given process \(\mu \) satisfies:

$$\begin{aligned} \forall t \in [0,T], \quad \mu _t = \mathbb {E}[X_t |\mathcal {F}^B_T]. \end{aligned}$$
(10)

In this probabilistic approach, solutions to the MFG problem can be used to construct solutions to the conditional McKean–Vlasov FBSDE below (and vice versa)

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [{c}X_t - Y_t + b(\mathbb {E}[X_t |\mathcal {F}^B_T])]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T]\\ \hbox {d}Y_t = [ -X_t -{c} Y_t -f (\mathbb {E}[X_t |\mathcal {F}^B_T])]\hbox {d}t + Z_t \hbox {d}W_t + Z_t^0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ X_0 = \psi , \quad Y_T = X_T +g(\mathbb {E}[X_T |\mathcal {F}^B_T]). \end{array}\right. } \end{aligned}$$
(11)

3.2 Solvability of (9)

In this subsection, we show that given a continuous \((\mathcal {F}^B_t)_{t\in [0,T]}\)-adapted process \((\mu _t)_{t \in [0,T]}\), the Problem (9) is uniquely solvable. Then we derive a characterization of the solution using an appropriate Ansatz.

Proposition 3

Suppose that we are given a continuous \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) taking values in \(\mathbb {R}\). Then, there exists a unique adapted solution \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) to the FBSDE (9)

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [{c}X_t - Y_t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}Y_t = [ -X_t -{c} Y_t -f(\mu _t)]\hbox {d}t + Z_t \hbox {d}W_t + Z_t^0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ X_0 = \psi , \quad Y_T = X_T +g(\mu _T). \end{array}\right. } \end{aligned}$$

Proof

Using the change of variables \(\bar{X}_t = X_t - \psi , \bar{Y}_t = Y_t - X_t, \bar{Z}_t = Z_t - \sigma ,\bar{Z}_t^0 = Z_t^0 - \sigma _0, \, \forall t \in [0,T]\), we see that solutions to the Problem (9) can be used to construct solutions to the FBSDE below (and vice versa)

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\bar{X}_t= [{(-1+c)} \bar{X}_t -\bar{Y}_t +{(-1+c)} \psi + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}\bar{Y}_t= [{(1-c)} \bar{Y}_t {-2c} \bar{X}_t {-2c} \psi - b(\mu _t) -f(\mu _t)]\hbox {d}t + \bar{Z}_t \hbox {d}W_t+\bar{Z}_t^0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \bar{X}_0 = 0 , \quad \bar{Y}_T = g(\mu _T). \end{array}\right. } \end{aligned}$$

Following the article of Yong (see [13]) on linear FBSDEs, solutions to the FBSDE above can be used to construct solutions to the reduced FBSDE (12) below (and vice versa)

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\tilde{X}_t = [{(-1+c)} \tilde{X}_t -\tilde{Y}_t ]\hbox {d}t,\quad \forall t \in [0,T] \\ \hbox {d}\tilde{Y}_t = [{ -2c} \tilde{X}_t+{(1-c)} \tilde{Y}_t ]\hbox {d}t + \tilde{Z}_t \hbox {d}W_t + \tilde{Z}_t^0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \tilde{X}_0 = 0, \quad \tilde{Y}_T = m, \end{array}\right. } \end{aligned}$$
(12)

where m is an \(\mathcal {F}_T\)-measurable random variable.

Now, using Theorem 6.1 in [13] in this situation, we conclude that the reduced FBSDE (12) has a unique solution if and only if

$$\begin{aligned} (0,1) \exp (\mathcal {A}t) (0,1)' > 0, \quad \forall t \in [0,T].\end{aligned}$$

where \((0,1)'\) denotes the transpose of (0, 1) and

$$\begin{aligned} \mathcal {A}= \begin{bmatrix} A&B \\ \hat{A}&\hat{B} \end{bmatrix} = \begin{bmatrix} (-1+c)&-1 \\ -2c&(1-c) \end{bmatrix}. \end{aligned}$$

After some computations, we obtain

$$\begin{aligned}(0,1) \exp (\mathcal {A}t) (0,1)' = \frac{\left( 1-c+\sqrt{1+c^2}\right) \exp \left( 2t\sqrt{1+c^2}\right) - \left( 1-c- \sqrt{1+c^2}\right) }{2\sqrt{1+c^2}\exp \left( t \sqrt{1+c^2}\right) } \end{aligned}$$
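
Indeed, since \(\mathcal {A}^2 = (1+c^2) I\), writing \(\lambda = \sqrt{1+c^2}\) we have \(\exp (\mathcal {A}t) = \cosh (\lambda t) I + \lambda ^{-1} \sinh (\lambda t) \mathcal {A}\), so that

$$\begin{aligned} (0,1) \exp (\mathcal {A}t) (0,1)' = \cosh (\lambda t) + \frac{1-c}{\lambda } \sinh (\lambda t), \end{aligned}$$

which coincides with the expression above.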

It is enough to check the sign of

$$\begin{aligned} \left( 1-c+\sqrt{1+c^2}\right) \exp \left( t2\sqrt{1+c^2}\right) - \left( 1-c-\sqrt{1+c^2}\right) . \end{aligned}$$

Since \(1-c+\sqrt{1+c^2} > 0\), the expression above is increasing in t; its minimum over [0, T] is therefore attained at \(t =0\) and equals \((1-c+\sqrt{1+c^2}) - (1-c- \sqrt{1+c^2}) = 2 \sqrt{1+c^2} > 0.\)

So, \((1-c+\sqrt{1+c^2}) \exp (t2\sqrt{1+c^2}) - (1-c- \sqrt{1+c^2}) > 0, \quad \forall t \in [0,T]\). This implies \( (0,1) \exp (\mathcal {A}t) (0,1)' > 0 \quad \forall t \in [0,T],\) and FBSDE (12) has a unique solution. \(\square \)

Theorem 6.1 in [13] gives more than an existence and uniqueness result for the reduced FBSDE (12). It also provides a uniqueness result for the Riccati ODE

$$\begin{aligned} \frac{\mathrm{d}P_t}{\mathrm{d}t} = P_t^2 + 2(1-c) P_t -2c, \quad P_T = 0. \end{aligned}$$

Moreover, it states that the unique adapted solution to the reduced FBSDE (12) satisfies \( \tilde{Y}_t = P_t \tilde{X}_t + p_t, \, \forall t \in [0,T],\) where \((p_t)_{t \in [0,T]}\) solves an associated BSDE.

Now, we want to characterize the solution of FBSDE (9). Inspired by previous studies on linear FBSDE (see for example [14]), we make the following Ansatz.

Ansatz: We want the solution of (9) to satisfy the condition that Y has the following form,

$$\begin{aligned} Y_t = \eta _t X_t + h_t, \, \forall t \in [0,T], \end{aligned}$$
(13)

where \( (\eta _t)_{t \in [0,T]} \in \mathcal {C}^1\) is the unique solution to the Riccati ODE

$$\begin{aligned} \frac{\mathrm{d}\eta _t}{\mathrm{d}t} = \eta _t^2 -{2 c }\eta _t - 1, \quad \eta _T = 1. \end{aligned}$$
(14)

The uniqueness of \((\eta _t)_{t \in [0,T]}\) follows easily from the uniqueness of \((P_t)_{t \in [0,T]}\), since \( \eta _t = P_t + 1, \, \forall t \in [0,T],\) defines a solution to (14).
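
Indeed, substituting \(\eta _t = P_t + 1\) into the right-hand side of (14) gives

$$\begin{aligned} (P_t+1)^2 - {2c}(P_t+1) - 1 = P_t^2 + 2(1-c)P_t - 2c = \frac{\mathrm{d}P_t}{\mathrm{d}t}, \qquad P_T + 1 = 1, \end{aligned}$$

so that \(P + 1\) solves (14); conversely, if \(\eta \) solves (14), then \(\eta - 1\) solves the Riccati ODE satisfied by P.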

In addition, \(h= (h_t)_{t \in [0,T]}\) is an \((\mathcal {F}^B_t)_{t \in [0,T]}\)-adapted process, whose randomness comes only from the common noise, satisfying the BSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}h_t=[{(-c+\eta _t)}h_t -f(\mu _t) - \eta _t b(\mu _t)]\hbox {d}t + Z^1_t \hbox {d}B_t, \quad \forall t \in [0,T] \\ h_T=g(\mu _T). \end{array}\right. } \end{aligned}$$
(15)

Proposition 4

Suppose that we are given a continuous \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) taking values in \(\mathbb {R}\). Then the solution, \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\), to Problem (9) satisfies (13) with \(h= (h_t)_{t \in [0,T]}\) satisfying BSDE (15).

Proof

Let \((\eta _t)_{t \in [0,T]}\) be the solution to (14) and \(( h_t, Z^1_t)_{t \in [0,T]}\) be a solution to the Problem (15). We want to show that the unique solution \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) to the Problem (9) satisfies (13). We do this by construction.

Let \((X_t)_{t \in [0,T]}\) be the solution of the forward SDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [{-(-c+\eta _t)}X_t - h_t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t , \quad \forall t \in [0,T]\\ X_0 = \psi . \end{array}\right. } \end{aligned}$$
(16)

Let \(Y_t =\eta _t X_t + h_t, \forall t \in [0,T]\). By Itô’s formula

$$\begin{aligned} \hbox {d}Y_t = \frac{\mathrm{d}\eta _t}{\mathrm{d}t} X_t \mathrm{d}t + \eta _t \mathrm{d}X_t + \mathrm{d}h_t. \end{aligned}$$

Then, substituting (14), (15) and (16) in the above expression and putting \(Z^0_t = Z^1_t + \eta _t \sigma _0, \, Z_t = \eta _t \sigma, \, \forall t \in [0,T]\), we see that \((Y_t, Z_t, Z^0_t )_{t \in [0,T]}\) solves the following backward SDE:

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}Y_t = [-X_t {-c} \eta _t X_t {-c} h_t - f(\mu _t)]\hbox {d}t + Z_t \hbox {d}W_t + Z^0_t \hbox {d}B_t, \quad \forall t \in [0,T] \\ Y_T = X_T+ g(\mu _T). \end{array}\right. } \end{aligned}$$
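
Indeed, by (14), the drift obtained after the substitution is

$$\begin{aligned} \frac{\mathrm{d}\eta _t}{\mathrm{d}t} X_t + \eta _t [ ({c}-\eta _t)X_t - h_t + b(\mu _t) ] + [{(-c+\eta _t)}h_t -f(\mu _t) - \eta _t b(\mu _t)] = -(1+{c}\eta _t)X_t {-c} h_t - f(\mu _t), \end{aligned}$$

which equals \(-X_t {-c} Y_t - f(\mu _t)\) since \(Y_t = \eta _t X_t + h_t\), while the martingale part is \(\eta _t \sigma \hbox {d}W_t + (Z^1_t + \eta _t \sigma _0) \hbox {d}B_t = Z_t \hbox {d}W_t + Z^0_t \hbox {d}B_t\).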

Therefore, \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) solves the following FBSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [{c}X_t - Y_t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}Y_t = [-X_t {-c}Y_t - f(\mu _t)]\hbox {d}t + Z_t \hbox {d}W_t + Z^0_t \hbox {d}B_t, \quad \forall t \in [0,T] \\ X_0 = \psi ,\quad Y_T = X_T+g(\mu _T) \end{array}\right. } \end{aligned}$$
(17)

and satisfies (13), since \(Y_t = \eta _t X_t + h_t, \forall t \in [0,T]\) with \((\eta _t)_{t \in [0,T]}\) the solution to (14) and \(( h_t, Z^1_t)_{t \in [0,T]}\) solution to (15). This concludes the proof. \(\square \)

3.3 Solvability of (5)

Proposition 5

Let \((\mu _t)_{t \in [0,T]}\) be an \((\mathcal {F}^B_t)_{t\in [0,T]}\)-adapted process with values in \(\mathbb {R}\) and \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) be the unique solution to Problem (9). Then, \(\mu _t = \mathbb {E}[X_t|\mathcal {F}^B_T]\), for all \(t \in [0,T]\) if and only if

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu _t = [{-(-c +\eta _t)}\mu _t - h_t + b(\mu _t)]\hbox {d}t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T]\\ \mu _0=\mathbb {E}[\psi ]. \end{array}\right. } \end{aligned}$$
(18)

Proof

Step 1 Consider an \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) with values in \(\mathbb {R}\) and let \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) be the unique solution to the Problem (9). By the previous proposition, \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) satisfies the ansatz (13).

For all \(t \in [0,T]\), taking the conditional expectation of \(X_t\) given \(\mathcal {F}^B_T\) yields,

$$\begin{aligned} \mathbb {E}[X_t|\mathcal {F}^B_T]&= \mathbb {E} \left[ \psi + \int _0^t ( {c}X_s -Y_s + b(\mu _s)) \hbox {d}s + \int _0^t \sigma \hbox {d}W_s + \int _0^t \sigma _0 \hbox {d}B_s | \mathcal {F}^B_T \right] . \end{aligned}$$

Since \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) satisfies (13) and \((h_t)_{t \in [0,T]}\) is \(\mathcal {F}^B_T\)-measurable,

$$\begin{aligned} \mathbb {E}[X_t|\mathcal {F}^B_T]&= \mathbb {E} [\psi ] + \int _0^t \mathbb {E} [ -{(-c+\eta _s)}X_s-h_s + b(\mu _s) | \mathcal {F}^B_T ] \hbox {d}s+\int _0^t \sigma _0 \hbox {d}B_s \\&=\mathbb {E} [\psi ] + \int _0^t ( -{(-c+\eta _s)}\mathbb {E} [X_s| \mathcal {F}^B_T] - h_s + b(\mu _s) ) \hbox {d}s + \int _0^t \sigma _0 \hbox {d}B_s. \end{aligned}$$

Suppose that \( \mu _t = \mathbb {E}[X_t|\mathcal {F}^B_T], \forall t \in [0,T].\) Then,

$$\begin{aligned} \mu _t&= \mathbb {E} [\psi ] + \int _0^t ( -{(-c+\eta _s)}\mu _s - h_s + b(\mu _s) ) \hbox {d}s + \int _0^t \sigma _0 \hbox {d}B_s, \quad \forall t \in [0,T]. \end{aligned}$$

Hence,

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu _t = [{-(-c+\eta _t)}\mu _t - h_t + b(\mu _t)]\hbox {d}t +\sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \mu _0 = \mathbb {E}[\psi ]. \end{array}\right. } \end{aligned}$$

Step 2 Consider an \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted process \((\mu _t)_{t \in [0,T]}\) with values in \(\mathbb {R}\) and let \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) be the unique solution to the Problem (9). By the previous proposition, \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) satisfies the ansatz (13).

Taking the conditional expectation of \(X_t\) given \(\mathcal {F}^B_T\) yields,

$$\begin{aligned} \mathbb {E} [X_t|\mathcal {F}^B_T] = \mathbb {E} [\psi ] + \int _0^t [{-(-c+\eta _s)} \mathbb {E} [X_s|\mathcal {F}^B_T] - h_s + b(\mu _s)]\hbox {d}s+ \int _0^t \sigma _0 \hbox {d}B_s. \end{aligned}$$
(19)

Suppose that \(\mu \) is a solution to (18). Then,

$$\begin{aligned} \mu _t = \mathbb {E}[\psi ] + \int _0^t ( {-(-c+\eta _s)} \mu _s - h_s + b(\mu _s)) \hbox {d}s + \int _0^t \sigma _0 \hbox {d}B_s, \quad \forall t\in [0,T]. \end{aligned}$$
(20)

Subtracting (19) from (20) gives:

$$\begin{aligned} (\mu _t - \mathbb {E} [X_t|\mathcal {F}^B_T]) + \int _0^t {(-c+\eta _s)}(\mu _s - \mathbb {E} [X_s|\mathcal {F}^B_T])\hbox {d}s = 0, \quad \forall t \in [0,T]. \end{aligned}$$
(21)

We thus have, \(\mathbb {P}\)-almost surely, a linear ordinary differential equation with initial value zero. By Gronwall’s lemma, it follows that

$$\begin{aligned} (\mu _t - \mathbb {E}[X_t|\mathcal {F}^B_T]) = 0, \quad \forall t \in [0,T]. \end{aligned}$$

The proof is complete. \(\square \)

Proposition 6

There exists \((\alpha ^*_t,\mu _t)_{t \in [0,T]} \) an MFGs-solution if and only if there exists \((\mu _t, h_t, Z^1_t)_{t \in [0,T]}\) an adapted solution to the FBSDE:

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu _t = [{-(-c+\eta _t)} \mu _t - h_t + b(\mu _t)]\hbox {d}t + \sigma _0 \hbox {d}B_t, \quad \forall t \in [0,T] \\ \hbox {d}h_t=[{(-c+\eta _t)}h_t -f(\mu _t) - \eta _t b(\mu _t)]\hbox {d}t + Z^1_t \hbox {d}B_t,\quad \forall t \in [0,T] \\ h_T = g(\mu _T),\quad \mu _0 = \mathbb {E}[\psi ]. \end{array}\right. } \end{aligned}$$
(22)

Moreover, combining (8) with the Ansatz (13), the optimal feedback is given by:

$$\begin{aligned} \alpha ^*_t = -\eta _t X_t - h_t, \forall t \in [0,T] . \end{aligned}$$

Proof

Step 1: Suppose that \((\alpha ^*_t,\mu _t)_{t \in [0,T]} \) is an MFGs-solution.

Since \((\mu _t)_{t \in [0,T]}\) is \((\mathcal {F}^B_t)_{t\in [0,T]}\)-adapted with values in \(\mathbb {R}\), by Proposition 2, there exists a solution \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) to (9) and \((\alpha ^*_t = - Y_t)_{t \in [0,T]}\) solves the stochastic optimal control Problem (3)–(4).

By Proposition 4, the solution \((X_t, Y_t, Z_t, Z^0_t)_{t \in [0,T]}\) to (9) satisfies the ansatz (13). Therefore, we have \(Y_t = \eta _t X_t + h_t, \, \forall t \in [0,T]\), with \((\eta _t)_{t \in [0,T]}\) the solution to (14) and \(( h_t, Z^1_t)_{t \in [0,T]}\) a solution to (15).

Also, since \((\alpha ^*_t,\mu _t)_{t \in [0,T]} \) is an MFGs-solution, \((\mu _t)_{t \in [0,T]}\) must satisfy the condition \(\mu _t = \mathbb {E} [X_t|\mathcal {F}^B_T], \forall t \in [0,T].\) By Proposition 5, it follows that \((\mu _t)_{t \in [0,T]}\) is a solution to (18).

Hence, \((\mu _t, h_t, Z^1_t)_{t \in [0,T]}\) is a solution to (22).

Step 2: Suppose that we are given \((\mu _t, h_t, Z^1_t)_{t \in [0,T]}\) solution to (22). Clearly, \((\mu _t)_{t \in [0,T]}\) is \((\mathcal {F}^B_t)_{t\in [0,T]} \)-adapted with values in \(\mathbb {R}\).

Let \((X_t)_{t \in [0,T]}\) be the solution of the forward SDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}X_t = [{-(-c+\eta _t)}X_t - h_t + b(\mu _t)]\hbox {d}t + \sigma \hbox {d}W_t + \sigma _0 \hbox {d}B_t , \quad \forall t \in [0,T]\\ X_0 = \psi . \end{array}\right. } \end{aligned}$$
(23)

Let also, \(Z^0_t = Z^1_t + \eta _t \sigma _0, \, Z_t = \eta _t \sigma \) and \(Y_t = \eta _t X_t + h_t, \, \forall t \in [0,T]\).

By Propositions 2, 3 and 4, \((X_t, Y_t,Z_t,Z^0_t)_{t \in [0,T]}\) solves the Problem (9) and \((\alpha ^*_t = -Y_t)_{t \in [0,T]}\) solves the stochastic optimal control Problem (3)–(4).

Finally, it remains to check that the given \((\mu _t)_{t \in [0,T]}\) satisfies the McKean–Vlasov constraint, \(\mu _t = \mathbb {E} [X_t|\mathcal {F}^B_T], \forall t \in [0,T].\) This follows from Proposition 5.

Hence, \((\alpha ^*_t = -Y_t, \mu _t)_{t \in [0,T]} \) is an MFGs-solution. \(\square \)

4 Unique Solvability and Common Noise

The next proposition shows that in the presence of the common noise we have a unique Nash equilibrium for this class of LQ-MFGs.

This is possible thanks to the previous proposition, which establishes the equivalence between the solvability of the class of LQ-MFGs considered and the solvability of the auxiliary FBSDE (22). In the presence of common noise, the system (22) is said to be non-degenerate. For insight into the solvability of such FBSDEs, see for example [9].

4.1 Unique Solvability (\(\sigma _0 > 0\))

Proposition 7

Suppose that \(\sigma _0 > 0\), then there is a unique Nash equilibrium for the class of LQ-MFGs under study.

Proof

To prove this proposition, it is enough to show that there exists a unique adapted solution \((\mu _t, h_t, Z^1_t)_{t \in [0,T]}\) to the Problem (22).

Let us define the following smooth, invertible and bounded function

$$\begin{aligned} w_t = \exp \Big ( \int _t^T {(-c + \eta _s)}\hbox {d}s \Big ), \quad \forall t \in [0,T]. \end{aligned}$$

We now consider the transformations

$$\begin{aligned} \mu ^*_t&= w_t^{-1} \mu _t, \quad \forall t \in [0,T] \end{aligned}$$
(24)
$$\begin{aligned} h^*_t&= w_t h_t, \quad \forall t \in [0,T] \end{aligned}$$
(25)

Using these transformations, it follows immediately that \((\mu _t, h_t, Z^1_t)_{t \in [0,T]}\) is an adapted solution to the Problem (22) if and only if \((\mu ^*_t, h^*_t, Z^2_t)_{t \in [0,T]}\) is an adapted solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu ^*_t = [ -w^{-2}_t h^*_t + w^{-1}_t b(w_t \mu ^*_t)]\hbox {d}t + w^{-1}_t \sigma _0\hbox {d}B_t, \, \forall t \in [0,T] \\ \hbox {d}h^*_t =[ -w_t f(w_t \mu ^*_t)-w_t \eta _t b(w_t \mu ^*_t)]\hbox {d}t + Z^2_t \hbox {d}B_t, \quad \forall t \in [0,T] \\ \mu ^*_0=\mathbb {E} [\psi ]w^{-1}_0, \, h^*_T =g(\mu ^*_T). \end{array}\right. } \end{aligned}$$
(26)
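
For the reader’s convenience, we note that these computations only use the identity \(\frac{\mathrm{d}w_t}{\mathrm{d}t} = -{(-c+\eta _t)} w_t\): by Itô’s formula,

$$\begin{aligned} \hbox {d}\mu ^*_t&= w^{-1}_t \hbox {d}\mu _t + {(-c+\eta _t)} w^{-1}_t \mu _t \hbox {d}t = [ -w^{-2}_t h^*_t + w^{-1}_t b(w_t \mu ^*_t)]\hbox {d}t + w^{-1}_t \sigma _0 \hbox {d}B_t, \\ \hbox {d}h^*_t&= w_t \hbox {d}h_t - {(-c+\eta _t)} w_t h_t \hbox {d}t = [ -w_t f(w_t \mu ^*_t) - w_t \eta _t b(w_t \mu ^*_t)]\hbox {d}t + w_t Z^1_t \hbox {d}B_t, \end{aligned}$$

with \(Z^2_t = w_t Z^1_t\), and the boundary conditions follow from \(\mu _0 = \mathbb {E}[\psi ]\) and \(w_T = 1\).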

Finally, since \(f, b, g\) are given bounded and Lipschitz continuous functions, \(w^{-1}_t > 0, \, \forall t \in [0,T]\), and \(\sigma _0 > 0\), the Problem (26) satisfies the hypotheses of the existence and uniqueness theorem of Delarue (Theorem 2.6 in [4]).

Therefore, the system of FBSDE (22) admits a unique adapted solution and the proof is complete. \(\square \)

4.2 Non-uniqueness (\(\sigma _0 = 0\))

Finding Nash equilibria for LQ-MFGs under study in the absence of common noise is equivalent to solving the FBSDE (22) with \(\sigma _0 = 0\).

Thanks to the transformations (24)–(25) in the previous proof, the solvability of the system of FBSDEs (22) with \(\sigma _0 = 0\) is equivalent to the solvability of the following problem: Find \((\mu ^*_t, h^*_t, Z^2_t)_{t \in [0,T]}\) an adapted solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu ^*_t = [ -w^{-2}_t h^*_t + w^{-1}_t b(w_t \mu ^*_t)]\hbox {d}t ,\, \forall t \in [0,T] \\ \hbox {d}h^*_t = [ -w_t f(w_t\mu ^*_t) - w_t \eta _t b(w_t \mu ^*_t)]\hbox {d}t + Z^2_t \hbox {d}B_t, \, \forall t\in [0,T] \\ \mu ^*_0=\mathbb {E} [\psi ]w^{-1}_0, \, h^*_T = g(\mu ^*_T). \end{array}\right. } \end{aligned}$$
(27)

Counter-example to uniqueness: To construct a counter-example to uniqueness, we choose \(f = b =\psi = 0.\) We set \(K_t = \int _0^t w^{-2}_s \hbox {d}s, \forall t \in [0,T],\) so that \(K_T > 0.\) Since the terminal time \(T>0 \) is fixed, \(R = K_T > 0\) is a constant.

Now, let us define \(g : \mathbb {R} \rightarrow \mathbb {R}\) as follows:

$$\begin{aligned} g(x)= {\left\{ \begin{array}{ll} 1 \qquad \quad \,\,\,\, {\text {if}} \quad x < - R\\ -x/R\quad {\text { if }} \quad |x| \le R \\ -1 \qquad \quad \,{\text {if}} \quad x > R \end{array}\right. } \end{aligned}$$

For the specified LQ-MFG above, the Problem (27) reads as follows: Find \((\mu ^*_t, h^*_t, Z^2_t)_{t \in [0,T]}\), an adapted solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} \hbox {d}\mu ^*_t = - w^{-2}_t h^*_t \hbox {d}t, \, \forall t \in [0,T] \\ \hbox {d}h^*_t = Z^2_t \hbox {d}B_t, \, \forall t \in [0,T] \\ \mu ^*_0= 0, \, h^*_T = g(\mu ^*_T). \end{array}\right. } \end{aligned}$$
(28)

For every \(A \in \mathbb {R}\) with \(|A| \le 1\), the process \((\mu ^*_t, h^*_t, Z^2_t)_{t \in [0,T]} = ( -AK_t, A, 0)_{t \in [0,T]}\) is an adapted solution to (28).
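
Indeed, for such A, \(\mu ^*_t = -AK_t\) satisfies \(\mu ^*_0 = 0\) and \(\hbox {d}\mu ^*_t = -A w^{-2}_t \hbox {d}t = -w^{-2}_t h^*_t \hbox {d}t\); the constant process \(h^*_t = A\) satisfies \(\hbox {d}h^*_t = 0 = Z^2_t \hbox {d}B_t\) with \(Z^2_t = 0\); and the terminal condition holds because \(|\mu ^*_T| = |A| K_T \le R\), so that \(g(\mu ^*_T) = -\mu ^*_T / R = A K_T / R = A = h^*_T.\)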

Hence, we have found infinitely many Nash equilibria for this LQ-MFG without common noise. The corresponding optimal feedbacks are given by

$$\begin{aligned} \alpha ^*_t = - \eta _t X_t - A w_t^{-1}, \forall t \in [0,T], \end{aligned}$$

for all \(A \in \mathbb {R}\), such that \(|A| \le 1.\)
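
As a purely illustrative complement, the multiplicity can also be observed numerically. The following minimal Python sketch (not part of the analysis above; it assumes NumPy and SciPy are available and uses illustrative values for c and T) solves the Riccati ODE (14) backwards in time, builds \(w_t\) and \(K_t\), and checks the terminal condition \(h^*_T = g(\mu ^*_T)\) for several admissible values of A:

import numpy as np
from scipy.integrate import solve_ivp

c, T = 0.5, 1.0                      # illustrative values; any c and T > 0 would do
ts = np.linspace(0.0, T, 2001)

# Riccati ODE (14): d(eta)/dt = eta^2 - 2*c*eta - 1 with eta_T = 1, solved backwards in time.
sol = solve_ivp(lambda t, e: e**2 - 2.0*c*e - 1.0, (T, 0.0), [1.0],
                t_eval=ts[::-1], rtol=1e-10, atol=1e-12)
eta = sol.y[0][::-1]                 # eta on the increasing time grid ts

# w_t = exp( int_t^T (-c + eta_s) ds ) and K_t = int_0^t w_s^{-2} ds (trapezoidal rule).
I = np.concatenate(([0.0], np.cumsum(0.5*(eta[1:] + eta[:-1] - 2.0*c)*np.diff(ts))))
w = np.exp(I[-1] - I)                # int_t^T = int_0^T - int_0^t
K = np.concatenate(([0.0], np.cumsum(0.5*(w[1:]**(-2) + w[:-1]**(-2))*np.diff(ts))))
R = K[-1]                            # R = K_T > 0

def g(x):                            # terminal coupling of the counter-example
    return float(np.clip(-x / R, -1.0, 1.0))

for A in (-1.0, -0.3, 0.0, 0.7, 1.0):
    mu_star_T = -A * R               # mu*_T for the candidate solution (-A K_t, A, 0)
    print(A, g(mu_star_T))           # prints A twice: the terminal condition h*_T = g(mu*_T) holds

Each printed pair confirms that \(h^*_T = A\) matches \(g(\mu ^*_T)\), in line with the family of equilibria exhibited above.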

5 Summary

The results exposed in this note illustrate the power of adding common noise as a hypothesis in the study of MFGs from a mathematical perspective. For the class of LQ-MFGs studied here, the uniqueness of the Nash equilibrium is obtained from the common noise hypothesis. No monotonicity hypothesis is required. These results are in line with the idea that adding noise to a problem can help to achieve uniqueness.

An interesting question is that of the zero-noise limit of the Nash equilibria of the LQ-MFGs with common noise when uniqueness fails in the situation without common noise. Does this limit exist? Does it select one or several Nash equilibria of the problem without common noise?

Future work will consider these questions and cases where the mean field interaction is not just an average.