
1 Introduction

The systems of partial differential equations associated to finite-horizon Mean Field Games (briefly, MFGs) with N populations of agents have the form

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll} - \partial_t v_k - \Delta v_k + H_k(x,Dv_k) = F_k(x, m(t,\cdot)) , & \mathit{in}\quad (0,T)\times\Omega , \\ \\ \partial_t m_k - \Delta m_k - \text{div}(D_p H_k(x,Dv_k)m_k) = 0 & \mathit{in}\quad (0,T)\times\Omega, \\ \\ v_k(T,x) = G_k(x, m(T,\cdot)) , \; m_k(0,x) = m_{0,k}(x) & \mathit{in}\quad \Omega , \quad k=1,\dots,N , \end{array} \right. \end{aligned} $$
(1.1)

where the unknown m is a vector of probability densities on Ω, F k and G k are functions of this vector and represent the running and terminal costs of a representative agent of the k-th population, and v k is the value function of this agent. The first N equations are parabolic of Hamilton-Jacobi-Bellman type and backward in time with a terminal condition, the second N equations are parabolic of Kolmogorov-Fokker-Planck type and forward in time with an initial condition. If the state space \(\Omega \subseteq {\mathbb R}^d\) is not all of \({\mathbb R}^d\), boundary conditions must also be imposed. In most of the theory of MFGs these are periodic, which are the easiest to handle; here we consider instead Neumann conditions, i.e.,

$$\displaystyle \begin{aligned} \partial_n v_k = 0, \quad \partial_n m_k + m_k D_p H_k(x, Dv_k) \cdot n = 0 \quad \mbox{on } (0,T)\times\partial \Omega . \end{aligned} $$
(1.2)

There is a large literature on the existence of solutions for these equations, especially in the case of a single population N = 1, beginning with the pioneering papers of Lasry and Lions [27,28,29] and Huang et al. [23,24,25] , see the lecture notes [9, 21, 22], the books [11, 18, 19], the survey [16], and the references therein. Systems with several populations, N > 1, were treated with Neumann conditions in [12, 15] for the stationary case and in [1] in the evolutive case, with periodic conditions in [4, 10].

Uniqueness of solutions is a much more delicate issue. For one population Lasry and Lions [27,28,29] discovered a monotonicity condition on the costs F and G that together with the convexity in p of the Hamiltonian H(x, p) implies the uniqueness of classical solutions. It reads

$$\displaystyle \begin{aligned} \int_{\Omega} (F(x,\mu) - F(x,\nu))\, d(\mu -\nu)(x) > 0 , \quad \mbox{if } \; \mu\ne \nu \end{aligned} $$
(1.3)

and it means that a representative agent prefers the regions of the state space that are less crowded. This is a restrictive condition that is satisfied in some models and not in others. When it fails, non-uniqueness may arise: this was first observed in the stationary case by Lasry and Lions [29], and other counterexamples were shown by Guéant [21], Bardi [3], Bardi and Priuli [6], and Gomes et al. [20]. The need for a condition such as (1.3) to have uniqueness for finite-horizon MFGs was discussed at length in [31]; some explicit examples of non-uniqueness appeared very recently in [8, 14], and in [5], which also presents a probabilistic proof and references to other examples obtained by the probabilistic approach.

For multi-population problems, N > 1, there are extensions of the monotonicity condition (1.3) in [5, 12] and they are even more restrictive: they impose not only aversion to crowding within each population, but also that the costs due to this effect dominate the costs due to the interactions with the other populations. This is not the case in the multi-population models of segregation in urban settlements proposed in [1] following the ideas of the Nobel laureate Thomas Schelling [34]. There the interactions between two different populations are the main cause of the dynamics, and in fact examples of multiple solutions were shown in [1] and [15] for the stationary case and in [5] for the evolutive one. Therefore a different criterion giving uniqueness in some cases is particularly desirable when N > 1.

A second regime for uniqueness was introduced in a lecture of P.L. Lions on January 9th, 2009 [31]: it occurs if the length T of the time horizon is short enough. To our knowledge Lions’ original argument did not appear in print. For finite state MFGs, uniqueness for short time was proved by Gomes et al. [17] as part of their study of the large population limit. For continuous state, an existence and uniqueness result under a “small data” condition was given in [25] for Linear-Quadratic-Gaussian MFGs using a contraction mapping argument to solve the associated system of Riccati differential equations, and similar arguments were used for different classes of linear-quadratic problems in [32, 36]. The well-posedness when H(x, Dv) − F(x, m) is replaced by \(\varepsilon \mathcal {H}(x, Dv, m)\) with ε small is studied in [2], and another result for small Hamiltonian is in [35] for nonconvex H.

Very recently the first author and Fischer [5] revived Lions’ argument to show that the smoothness of the Hamiltonian is the crucial property to have small-time uniqueness without monotonicity of the costs and convexity of H, and gave an example of non-uniqueness for all T > 0 and H(x, p) = |p|. The uniqueness theorem for small data in [5] holds for N = 1 and \(\Omega ={\mathbb R}^d\) with conditions on the behaviour of the solutions at infinity.

In the present paper we focus instead on N ≥ 1 and Neumann boundary conditions, which is the setting of the MFG models of segregation in [1]. The new difficulties arise from the boundary conditions, which require different methods for some estimates, especially on the L ∞ norm of the densities m k. Our first uniqueness result assumes a suitable smoothness of the Hamiltonians H k, but neither convexity nor growth conditions, and that the costs F k, G k are Lipschitz in L 2 with respect to the measure m, with no monotonicity. The smallness condition on the data depends on the range of the spatial gradient of the solutions v k, unless D p H k are bounded and globally Lipschitz for all k. We then complement this result with some a priori gradient estimates on v k, under an additional quadratic growth condition on H k and some more regularity of the costs, and get a \(\bar T>0\) depending only on the data such that there is uniqueness for all horizons \(T\leq \bar T\). Finally, we give sufficient conditions ensuring both existence and uniqueness for the system (1.1) with the boundary conditions (1.2), as well as for some robust MFGs considered in [7, 32], which are interesting examples with nonconvex Hamiltonian.

We mention that in the stationary case, uniqueness up to (space) translations may hold without (1.3) being in force. A special class of MFGs on \({\mathbb R}^d\) enjoying this feature has been identified in [13].

The paper is organised as follows. Section 2 contains the main result about uniqueness for small data, possibly depending on gradient bounds on the solutions. Section 3 gives further sufficient conditions depending only on the data for uniqueness and existence of solutions. The Appendix recalls a comparison principle for HJB equations with Neumann conditions.

2 The Uniqueness Theorem

Consider the MFG system for N populations

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll} - \partial_t v_k - \Delta v_k + H_k(x,Dv_k) = F_k(x, m(t,\cdot)) , & \mathit{in} \quad (0,T)\times\Omega , \\ \\ \partial_t m_k - \Delta m_k - \text{div}(D_p H_k(x,Dv_k)m_k) = 0 & \mathit{in} \quad (0,T)\times\Omega, \\ \\ \partial_n v_k = 0, \; \partial_n m_k + m_k D_p H_k(x, Dv_k) \cdot n = 0 & \mathit{on} \quad (0,T)\times\partial \Omega , \\ \\ v_k(T,x) = G_k(x, m(T,\cdot)) , \; m_k(0,x) = m_{0,k}(x) & \mathit{in} \quad \Omega \end{array} \right. \end{aligned} $$
(2.1)

where k = 1, …, N, Dv k denotes the gradient of the k-th component v k of the unknown v with respect to the space variables, Δ is the Laplacian with respect to the space variables x, D p H k is the gradient of the Hamiltonian of the k-th population with respect to the momentum variable p, \(\Omega \subseteq {\mathbb R}^d\) is a bounded open set with boundary ∂Ω of class C 2, β for some β > 0, and n(x) is its exterior normal at x. The components m k of the unknown vector m are bounded densities of probability measures on Ω, i.e., m lives in

$$\displaystyle \begin{aligned} { \mathcal{P}_N}(\Omega ) := \left\{\mu=(\mu_1,\dots,\mu_N)\in L^\infty(\Omega )^N : \mu_k\geq 0,\; \int_\Omega \mu_k(x) dx = 1 \right\}. \end{aligned}$$

F and G represent, respectively, the running and terminal cost of the MFG

$$\displaystyle \begin{aligned}F : \overline\Omega\times \mathcal{P}_N(\Omega)\to {\mathbb R}^N , \quad G : \overline\Omega\times \mathcal{P}_N(\Omega)\to {\mathbb R}^N. \end{aligned}$$

By classical solutions we will mean functions of (t, x) of class C 1 in t and C 2 in x in \([0,T]\times \overline \Omega \).

2.1 The Main Result

Our main assumptions are the smoothness of the Hamiltonians and a Lipschitz continuity of the costs in the norm ∥⋅∥2 of L 2( Ω)N that we state next. We consider \(H_k : \overline \Omega \times {\mathbb R}^d \to {\mathbb R}\) continuous and satisfying

$$\displaystyle \begin{aligned} D_pH_k(x,p) \mbox{ is continuous and locally Lipschitz in }p\mbox{ uniformly in } x\in\overline\Omega .\end{aligned} $$
(2.2)

We will assume that F, G satisfy, for some constants L F, L G ≥ 0 and for all \(\mu , \nu \in \mathcal {P}_N(\Omega )\),

$$\displaystyle \begin{aligned} \|F(\cdot,\mu)-F(\cdot,\nu)\|{}_2^2 \leq L_F \|\mu-\nu\|{}_2^2 ,\end{aligned} $$
(2.3)
$$\displaystyle \begin{aligned} \|DG(\cdot,\mu)-DG(\cdot,\nu)\|{}_2^2 \leq L_{ G} \|\mu-\nu\|{}_2^2 . \end{aligned} $$
(2.4)

Theorem 2.1

Assume (2.2)–(2.4), \(m_0\in { \mathcal {P}_N} (\Omega )\) , and \((\tilde v, \tilde m), (\overline v, \overline m)\) are two classical solutions of (2.1). Denote

$$\displaystyle \begin{aligned} \mathcal{C}:= \mathit{\text{co}}\{ D\tilde v(t,x), D\overline v(t,x) : (t,x)\in (0,T)\times\Omega \},\end{aligned} $$
$$\displaystyle \begin{aligned} C_H:= \max_{k= 1,\dots, N} \sup_{ x\in\Omega,\, p\in \mathcal{C}} |D_pH_k(x,p)|, \end{aligned} $$
(2.5)
$$\displaystyle \begin{aligned} \bar C_H := \max_{k= 1,\dots, N} \sup_{x\in\Omega,\, p, q\in \mathcal{C}} \frac{|D_pH_k(x,p)-D_pH_k(x,q)|}{|p-q|}. \end{aligned} $$
(2.6)

Then there exists a function Ψ of \(T, L_F, L_G, C_H, \bar C_H, N \) and \(\max_k \|m_{0,k}\|_\infty\) (depending also on Ω), such that the inequality Ψ < 1 implies \(\tilde v(t,\cdot ) = \overline v(t,\cdot )\) and \(\tilde m(t,\cdot ) = \overline m(t,\cdot )\) for all t ∈ [0, T], and Ψ < 1 holds if either T, or \(\bar C_H\) , or the pair L F, L G is small enough.

For the proof we need two auxiliary results.

Proposition 2.2

There are constants r > 1 and C > 0 depending only on d and Ω such that

$$\displaystyle \begin{aligned} \|m_k\|{}_{L^{\infty}((0,T)\times\Omega)} \leq C [1+ \| m_{0,k}\|{}_\infty + (1+T)\|D_p H_k(\cdot,Dv_k) \|{}_{L^\infty((0,T)\times \Omega)}]^r, \quad k=1, \ldots, N.\end{aligned} $$
(2.7)

Proof

Step 1. We aim at proving that for any q ∈ [1, (d + 2)∕(d + 1)) there exists a constant \(\overline C\) depending only on d, q and Ω such that any positive classical solution φ of the backward heat equation

$$\displaystyle \begin{aligned} \begin{cases} -\partial_t \varphi - \Delta \varphi = 0 & \mbox{on }(0,t) \times \Omega \\ \partial_n \varphi = 0 & \mbox{on }(0,t) \times \partial \Omega \\ \int_\Omega \varphi(t,x) dx = 1, \end{cases} \end{aligned}$$

satisfies

$$\displaystyle \begin{aligned} \| \nabla \varphi \|{}_{L^q((0,t)\times \Omega)} \le \overline C (1+t)^{1/q}. \end{aligned}$$

We follow the strategy presented in [19, Section 5]. Note first that ∫Ω φ(s, x)dx = 1 for all s ∈ (0, t), by integrating by parts the equation and using the boundary conditions. We proceed in the case d ≥ 3; if d = 1 or d = 2, one argues in a similar way (see the discussion below). Let α ∈ (0, 1) to be chosen later; multiplying the equation by αφ α−1 and integrating by parts yield for all s ∈ (0, t)

$$\displaystyle \begin{aligned} \int_\Omega |\nabla \varphi^{\alpha/2}(s,x)|{}^2 dx = \frac{\alpha}{4(\alpha-1)}\partial_t \int_\Omega \varphi^\alpha(s,x) dx. \end{aligned}$$

Integrating in time and using the fact that ∫Ω φ(s, x)dx = 1 give

$$\displaystyle \begin{aligned} \int_0^t \int_\Omega |\nabla \varphi^{\alpha/2}|{}^2 dx ds = \frac{\alpha}{4(1 - \alpha)} \int_\Omega \varphi^\alpha(0,x) dx - \frac{\alpha}{4(1 - \alpha)} \int_\Omega \varphi^\alpha(t,x) dx \le c_1, \end{aligned} $$
(2.8)

where c 1 depends on d and Ω (the positive constants c 2, c 3, … used in the sequel will have the same dependence).

We now exploit the continuous embedding of W 1, 2( Ω) into \(L^{\frac {2d}{d-2}}(\Omega )\); the adaptation of this proof to the cases d = 1, 2 is straightforward, as the injection of W 1, 2( Ω) is into L p( Ω) for all p ≥ 1. Hence, for all s ∈ (0, t), by Hölder and Sobolev inequalities

$$\displaystyle \begin{aligned}\displaystyle \int_\Omega \varphi^{\alpha + \frac{2}{d}}(s,x) dx \le \left(\int_\Omega \varphi(s,x) dx\right)^{\frac{2}{d}} \left(\int_\Omega \varphi^{\frac{\alpha}{2}\frac{2d}{d-2}}(s,x) dx\right)^{\frac{d-2}{d}} \\\displaystyle \le c_2\left(\int_\Omega |\nabla \varphi^{\alpha/2}|{}^2 dx + \int_\Omega \varphi^\alpha dx \right) \le c_2\left(\int_\Omega |\nabla \varphi^{\alpha/2}|{}^2 dx + 1 +|\Omega| \right), \end{aligned} $$

so

$$\displaystyle \begin{aligned} \int_0^t \int_\Omega \varphi^{\alpha + \frac{2}{d}} dx ds \le c_3 \left(\int_0^t \int_\Omega |\nabla \varphi^{\alpha/2}|{}^2 dx ds + t\right). \end{aligned} $$
(2.9)

Finally, since q < (d + 2)∕(d + 1), we may choose α ∈ (0, 1) such that

$$\displaystyle \begin{aligned} q\frac{2-\alpha}{2-q} = \alpha + \frac{2}{d}, \end{aligned}$$

and therefore, by the identity \(\nabla \varphi ^{\alpha /2} = \frac {\alpha }{2} \varphi ^{\frac {\alpha -2}{2}}\nabla \varphi \) and Young’s inequality

$$\displaystyle \begin{aligned}\displaystyle \int_0^t \int_\Omega |\nabla \varphi|{}^q dx ds = \left(\frac{2}{\alpha}\right)^q \int_0^t \int_\Omega |\nabla \varphi^{\alpha/2}|{}^q \, \varphi^{q\frac{2-\alpha}{2}} dx ds \le \\\displaystyle c_4 \left(\int_0^t \int_\Omega |\nabla \varphi^{\alpha/2}|{}^2 dx ds + \int_0^t \int_\Omega \varphi^{q\frac{2-\alpha}{2-q}} dx ds \right) \le c_4(c_1 + c_3(c_1 + t)), \end{aligned} $$

in view of (2.8) and (2.9), and the desired estimate follows.

Step 2. Fix t ∈ (0, T) and 1 < q < (d + 2)∕(d + 1). Let φ 0 be any non-negative smooth function on Ω such that ∂ n φ 0 = 0 on ∂Ω and ∫Ω φ 0(x)dx = 1. Let φ be the solution of the backward heat equation

$$\displaystyle \begin{aligned} \begin{cases} -\partial_t \varphi - \Delta \varphi = 0 & \mbox{on }(0,t) \times \Omega \\ \partial_n \varphi = 0 & \mbox{on }(0,t) \times \partial \Omega \\ \varphi(t,x) = \varphi_0(x) & \mbox{on }\Omega. \end{cases} \end{aligned}$$

Note that φ is positive on (0, t) × Ω by the strong maximum principle. Multiply the KFP equation in (2.1) by φ, integrate by parts in space and use the boundary conditions for m k to get

$$\displaystyle \begin{aligned} \int_0^t \int_\Omega \partial_t m_k \, \varphi + \nabla m_k \cdot \nabla \varphi +D_p H_k(x,Dv_k) \cdot \nabla \varphi \, m_k \, dxds = 0. \end{aligned}$$

Integrating again by parts (in space-time) yields

$$\displaystyle \begin{aligned} \int_\Omega m_k(t,x) \varphi_0(x) = \int_\Omega m_k(0,x) \varphi(0,x) - \int_0^t \int_\Omega D_p H_k(x,Dv_k) \cdot \nabla \varphi \, m_k \, dxds, \end{aligned}$$

using the equation and the boundary condition for φ. Hence,

$$\displaystyle \begin{aligned}\displaystyle \int_\Omega m_k(t,x) \varphi_0(x) \le \| m_{k,0} \|{}_\infty + \|D_p H_k(\cdot,Dv_k)\|{}_{L^\infty((0,t) \times \Omega)} \int_0^t \int_\Omega |\nabla \varphi| \, |m_k| \, dxds,\\\displaystyle \le \| m_{k,0} \|{}_\infty + \overline C (1+t)^{1/q} \|D_p H_k(\cdot,Dv_k)\|{}_{L^\infty((0,t) \times \Omega)} \|m_k\|{}_{L^{q'}((0,t) \times \Omega)} \end{aligned} $$

by Step 1. By the arbitrariness of φ 0, one obtains

$$\displaystyle \begin{aligned} \| m_{k}(t, \cdot) \|{}_\infty \le \| m_{k,0} \|{}_\infty + \overline C (1+t)^{1/q} \|D_p H_k(\cdot,Dv_k)\|{}_{L^\infty((0,t) \times \Omega)} \|m_k\|{}_{L^{q'}((0,t) \times \Omega)} , \end{aligned}$$

and since

$$\displaystyle \begin{aligned} \|m_k\|{}_{L^{q'}((0,t)\times\Omega)} \le \left( \int_0^t \| m_{k}(s, \cdot) \|{}_\infty^{q'-1} \int_\Omega m_k(s,x) dx \, ds\right)^{1/q'} \le \|m_k\|{}_{L^{\infty}((0,t)\times\Omega)}^{1/q}\, t^{1/q'}, \end{aligned}$$

we have

$$\displaystyle \begin{aligned} \| m_{k}(t, \cdot) \|{}_\infty \le \| m_{k,0} \|{}_\infty + \overline C (1+t) \|D_p H_k(\cdot,Dv_k)\|{}_{L^\infty((0,t) \times \Omega)} \|m_k\|{}_{L^{\infty}((0,t)\times\Omega)}^{1/q}. \end{aligned}$$

Passing to the supremum over t ∈ (0, T) and absorbing the factor \(\|m_k\|_{L^{\infty}((0,T)\times\Omega)}^{1/q}\) by Young's inequality, we obtain (2.7), with r in the statement chosen equal to q′. □

Lemma 2.3 (A Mean-Value Theorem)

Let \(\mathcal {K}\subseteq {\mathbb R}^d\) be convex, and let \(f : \overline \Omega \times \mathcal {K} \to {\mathbb R}^d\) be continuous and Lipschitz continuous in the second entry with constant L, uniformly in the first. Then there exists a measurable matrix-valued function M(⋅, ⋅, ⋅) such that

$$\displaystyle \begin{aligned} f(x,p) - f(x,q) = M(x,p,q)(p-q) , \quad |M(x,p,q)| \leq L , \quad \forall \, x\in \overline\Omega, \,p,q\in \mathcal{K}. \end{aligned} $$
(2.10)

Proof

Mollify f in the variables p and get a sequence f n converging to f locally uniformly and with Jacobian matrix satisfying ∥D p f n≤ L. Since f n is C 1 in p the standard mean-value theorem gives

$$\displaystyle \begin{aligned} f_n(x,p) - f_n(x,q) = \int_0^1 D_pf_n(x, q+s(p-q)) (p-q) \, ds =: M_n(x,p,q)(p-q) , \end{aligned} $$
(2.11)

and |M n|≤ L. We define the matrix M componentwise by setting

$$\displaystyle \begin{aligned} M(x,p,q)_{ij} := \liminf_n M_n(x,p,q)_{ij} , \quad i, j= 1,\dots, d, \end{aligned}$$

so that it is measurable in (x, p, q) and satisfies |M(x, p, q)|≤ L. Now we take the liminfn in the i-th component of the identity (2.11) and get the i-th component of the desired identity (2.10). □

Proof of Theorem 2.1

Step 1. First observe that, by the regularity of the solutions, C H < +∞ and \(\bar C_H<+\infty \). We set

$$\displaystyle \begin{aligned} v:=\tilde v - \overline v , \quad m:=\tilde m - \overline m, \quad B_k(t,x):=\int_0^1D_pH_k(x, D\overline v(t,x)+s(D\tilde v-D\overline v)(t,x)) ds \end{aligned}$$

and observe that |B k|≤ C H for all k and v k satisfies

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} -\partial_tv_k+B_k(t,x)\cdot Dv_k=\Delta v_k+F_k(x,\tilde m(t))-F_k(x,\overline m(t)) \quad \mbox{ in }(0, T)\times \Omega \\ \\ \partial_n v_k = 0 \; \mbox{ on } (0,T)\times\partial \Omega , \quad v_k(T,x)=G_k(x, \tilde m(T)) - G_k(x, \overline m(T)) . \end{array} \right.\, \end{aligned} $$
(2.12)

Step 2. By the divergence theorem and the boundary conditions we compute

$$\displaystyle \begin{aligned}\displaystyle - \int_t^T\int_{\Omega}\partial_t v_k \Delta v_k \, dx ds = \int_t^T\frac d{ds}\int_{\Omega}\frac{|Dv_k|{}^2}{2} dx ds - \int_t^T\int_{\partial \Omega} \partial_tv_k Dv_k\cdot n\, d\sigma ds \\\displaystyle = \frac 12 \|Dv_k(T,\cdot)\|{}_2^2 - \frac 12\|Dv_k(t,\cdot)\|{}_2^2 . \end{aligned} $$

Now we set

$$\displaystyle \begin{aligned}\bar F(t,x):=F(x,\overline m)-F(x,\tilde m) , \quad \bar G(t,x):=G(x,\overline m)-G(x,\tilde m) , \end{aligned}$$

multiply the PDE in (2.12) by Δv k, integrate, use the terminal condition in (2.12) and estimate

$$\displaystyle \begin{aligned}\displaystyle \frac 12\|Dv_k(t,\cdot)\|{}_2^2 + \int_t^T \|\Delta v_k(s,\cdot)\|{}_2^2 \leq \frac 12 \|D\bar G(T,\cdot)\|{}_2^2 + \\ \|B_k\|{}_\infty\int_t^T \left(\frac 1{2\varepsilon}\|Dv_k(s,\cdot)\|{}_2^2 + \frac \varepsilon 2 \|\Delta v_k(s,\cdot)\|{}_2^2\right) ds + \int_t^T \left(\frac 1{2\varepsilon} \|\bar F(s,\cdot)\|{}_2^2 + \frac \varepsilon 2 \left\|\Delta v_k(s,\cdot)\right\|{}_2^2\right) ds . \end{aligned} $$

Next we choose ε such that \((\|B_k\|_\infty + 1)\,\varepsilon/2 = 1\) and use the assumptions (2.4) and (2.3) to get, with \(c_o := (\|B_k\|_\infty + 1)/2 = 1/\varepsilon\),

$$\displaystyle \begin{aligned} \|Dv_k(t,\cdot)\|{}_2^2 \leq L_{G} \|m(T,\cdot)\|{}_2^2 + c_o \|B_k\|{}_\infty\int_t^T \|Dv_k(s,\cdot)\|{}_2^2\, ds + c_o L_F\int_t^T \|m(s,\cdot)\|{}_2^2\, ds . \end{aligned} $$

Then Gronwall inequality gives, for all 0 ≤ t ≤ T,

$$\displaystyle \begin{aligned} \|Dv_k(t,\cdot)\|{}_2^2 \leq \left( L_{G} \|m(T,\cdot)\|{}_2^2 + c_o L_F\int_t^T \|m(s,\cdot)\|{}_2^2 ds\right)e^{ c_o \|B_k\|{}_\infty T} . \end{aligned} $$
(2.13)

Step 3. In order to write a PDE solved by m we apply Lemma 2.3 to \(D_pH_k : \overline \Omega \times \mathcal {C} \to {\mathbb R}^d\), which is Lipschitz in p on \(\mathcal{C}\) with constant \(\bar C_H\) given by (2.6), and get a matrix M k such that

$$\displaystyle \begin{aligned} D_pH_k(x,D\overline v_k)-D_pH_k(x,D\tilde v_k) = M_k(x, D\overline v_k, D\tilde v_k)( D\overline v_k-D\tilde v_k), \end{aligned}$$

with \(|M_k|\leq \bar C_H\). Now define

$$\displaystyle \begin{aligned} &\tilde B_k(t,x):=D_pH_k(x,D\overline v_k) ,\quad A_k(t,x):= \tilde m_k M_k(x, D\overline v_k , D\tilde v_k ),\\ & \tilde F_k(t,x):=A_k(t,x)(D\overline v_k-D\tilde v_k). \end{aligned} $$

Then m k satisfies

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} \partial_t m_k - \text{div}\left( \tilde B_k m_k\right) = \Delta m_k + \text{div}\tilde F_k \quad \mbox{ in }(0, T)\times \Omega, \\ \\ \partial _n m_k + (m_k \tilde B_k + \tilde F_k)\cdot n = 0 \; \mbox{ on } (0,T)\times\partial \Omega,\quad m_k(0,x)= 0 . \end{array} \right.\,\end{aligned} $$
(2.14)

with \(|\tilde B_k|\leq C_H\) by (2.5) and \(|A_k|\leq \mathcal {M} \bar C_H\) by (2.6), where

$$\displaystyle \begin{aligned} \mathcal{M}:=\max_k C [1+ \| m_{0,k}\|{}_\infty + (1+T)\|D_p H_k(\cdot,Dv_k) \|{}_{L^\infty((0,T)\times \Omega)}]^r\end{aligned} $$

is the upper bound on m k given by Proposition 2.2 (where C depends only on d and Ω).

Step 4. We multiply the PDE in (2.14) by m k and integrate by parts to get

$$\displaystyle \begin{aligned}\displaystyle 0 = \int_0^t\frac d{ds}\int_{\Omega}\frac{m_k^2}{2} \,dx ds + \int_0^t\int_{\Omega} |Dm_k|{}^2 \,dx ds - \int_0^t\int_{\partial \Omega} m_k Dm_k\cdot n \,d\sigma ds \\\displaystyle + \int_0^t\int_{\Omega} m_k\tilde B_k\cdot Dm_k \,dx ds - \int_0^t\int_{\partial \Omega} m_k^2 \tilde B_k\cdot n \,d\sigma ds\\ + \int_0^t\int_{\Omega} \tilde F_k \cdot Dm_k \,dx ds - \int_0^t\int_{\partial \Omega} m_k \tilde F_k\cdot n \,d\sigma ds.\end{aligned} $$

By the initial and boundary conditions in (2.14) we obtain

$$\displaystyle \begin{aligned}\displaystyle \frac 12\|m_k(t,\cdot)\|{}_2^2 + \int_0^t\|Dm_k(s,\cdot)\|{}^2_2 \,ds = -\int_0^t\int_{\Omega} \left(m_k\tilde B_k + \tilde F_k\right)\cdot Dm_k \,dx ds \leq \\\displaystyle \frac 1{2\varepsilon} \int_0^t \|\tilde F_k(s,\cdot)\|{}_2^2 ds + \frac {\|\tilde B_k\|{}_\infty}{2\varepsilon}\int_0^t\|m_k(s,\cdot)\|{}_2^2 ds + \varepsilon\frac {\|\tilde B_k\|{}_\infty+1}{2} \int_0^t\|Dm_k(s,\cdot)\|{}^2_2 \,ds ,\end{aligned} $$

and with the choice \(\varepsilon =2/(\|\tilde B_k\|{ }_\infty +1)=:1/c_1\)

$$\displaystyle \begin{aligned} \|m_k(t,\cdot)\|{}_2^2 \leq c_1 \int_0^t \|\tilde F_k(s,\cdot)\|{}_2^2 ds + c_1 \|\tilde B_k\|{}_\infty \int_0^t\|m_k(s,\cdot)\|{}_2^2 ds .\end{aligned} $$

Then Gronwall inequality and the definition of \(\tilde F_k\) give, for all 0 ≤ t ≤ T,

$$\displaystyle \begin{aligned} \|m_k(t,\cdot)\|{}_2^2 \leq c_1 e^{c_1\|\tilde B_k\|{}_\infty T} \|A_k\|{}^2_\infty \int_0^t \| Dv_k(s,\cdot)\|{}_2^2 ds.\end{aligned} $$
(2.15)

Step 5. Now we set

$$\displaystyle \begin{aligned}\phi(t):= \|Dv(t,\cdot)\|{}_2^2=\sum_{k=1}^N\|Dv_k(t,\cdot)\|{}_2^2\end{aligned}$$

and assume w.l.o.g. C H ≥ 1, so that c o, c 1 ≤ C H. By combining (2.13) and (2.15), and using \(\|B_k\|_\infty, \|\tilde B_k\|_\infty\leq C_H\) and \(\|A_k\|_\infty\leq \mathcal{M}\bar C_H\), we get for all 0 ≤ t ≤ T

$$\displaystyle \begin{aligned} \phi(t) \leq C \bar C_H^2\left( L_G \int_0^T \phi(s)\, ds + L_F C_H \int_t^T\!\!\int_0^s \phi(\tau)\, d\tau\, ds\right) , \quad C:= N C_H \mathcal{M}^2 e^{2 C_H^2 T} . \end{aligned} $$

Then Φ :=sup0≤t≤T ϕ(t) satisfies

$$\displaystyle \begin{aligned} \Phi\leq \Phi \Psi, \quad \Psi:= T\bar C_H^2 C(L_G+L_FC_HT/2), \end{aligned}$$

which implies Φ = 0 if Ψ < 1. Therefore, under this condition, we conclude that \(D\tilde v_k(t,x)=D\overline v_k(t,x)\) for all k, x and 0 ≤ t ≤ T. By the uniqueness of solutions for the KFP equation (e.g., Thm. I.2.2, p. 15 of [26]) we deduce \(\tilde m= \overline m\) and then, by the Comparison Principle for the HJB equation in the Appendix, \(\tilde v= \overline v\).

Finally, it is clear that Ψ can be made less than 1 by choosing either T, or \(\bar C_H\), or both L G and L F small enough. □
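As a purely illustrative complement (not part of the proof), the following minimal Python sketch evaluates \(\Psi = T\bar C_H^2 C(L_G+L_FC_HT/2)\) and the largest horizon for which Ψ < 1, treating the constant C as a fixed number; in the proof C itself depends on T and on \(\mathcal{M}\), so the computed threshold is only indicative, and all numerical values below are hypothetical.

```python
import math

def psi(T, L_F, L_G, C_H, Cbar_H, C):
    # smallness quantity from the proof of Theorem 2.1:
    # Psi = T * Cbar_H^2 * C * (L_G + L_F * C_H * T / 2)
    return T * Cbar_H ** 2 * C * (L_G + L_F * C_H * T / 2.0)

def max_horizon(L_F, L_G, C_H, Cbar_H, C):
    # largest T with psi(T) < 1: positive root of
    # (L_F * C_H * C * Cbar_H^2 / 2) * T^2 + (L_G * C * Cbar_H^2) * T - 1 = 0,
    # valid only when C is treated as independent of T
    a = L_F * C_H * C * Cbar_H ** 2 / 2.0
    b = L_G * C * Cbar_H ** 2
    if a == 0.0:
        return math.inf if b == 0.0 else 1.0 / b
    return (-b + math.sqrt(b ** 2 + 4.0 * a)) / (2.0 * a)

# hypothetical constants, only to illustrate how the threshold shrinks
# when L_F, L_G or Cbar_H grow
T_bar = max_horizon(L_F=1.0, L_G=0.5, C_H=2.0, Cbar_H=1.0, C=3.0)
print(T_bar, psi(T_bar, 1.0, 0.5, 2.0, 1.0, 3.0))  # psi equals 1 at T_bar
```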

2.2 Examples and Remarks

Example 2.1

Integral costs. Consider F k and G k of the form

$$\displaystyle \begin{aligned} F_k(x, \mu)= F_o\left(x, \int_{\Omega} K(x,y) \mu(y) dy\right) , \quad G_k(x, \mu)= g_1(x)\int_{\Omega} \bar K(x,y) \cdot \mu(y) dy +g_2(x) \end{aligned}$$

with \(F_o : \overline \Omega \times {\mathbb R}^N\to {\mathbb R}\) measurable and Lipschitz in the second variable uniformly in the first, whereas K is an N × N matrix with components in L 2( Ω × Ω). Then F k satisfies (2.3). As for G k, we assume g 1, g 2 ∈ C 1( Ω), Dg 1 bounded, and the vector \(\bar K\) and its Jacobian \(D_x\bar K\) with components in L 2( Ω × Ω). Then G k satisfies (2.4). Of course all the data \(F_o, K, \bar K, g_i\) are allowed to change with the index k = 1, …, N.
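To see why (2.3) holds here, denote by \(L_o\) the Lipschitz constant of \(F_o\) in its second variable (notation introduced only for this computation); the Cauchy–Schwarz inequality, applied componentwise to the kernel term, gives

$$\displaystyle \begin{aligned} \|F_k(\cdot,\mu)-F_k(\cdot,\nu)\|{}_2^2 \leq L_o^2 \int_\Omega \left| \int_\Omega K(x,y)(\mu-\nu)(y)\, dy \right|{}^2 dx \leq L_o^2\, \|K\|{}_{L^2(\Omega\times\Omega)}^2\, \|\mu-\nu\|{}_2^2 , \end{aligned}$$

so (2.3) holds with \(L_F = L_o^2 \|K\|_{L^2(\Omega\times\Omega)}^2\). The verification of (2.4) for G k is analogous, with \(\bar K\) and \(D_x\bar K\) in place of K and using the bounds on g 1 and Dg 1.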

Example 2.2

Local costs. Take G k = G k(x) independent of m(T) and F k of the form \( F_k(x,\mu )=F_k^l(x,\mu (x)) \) with \(F_k^l: \overline \Omega \times [0,+\infty )^N\to {\mathbb R}\) measurable and Lipschitz in the second variable uniformly in the first. Then F k satisfies (2.3).

Example 2.3

Costs depending on the moments. The mean value of the density μ, \(M(\mu) =\int_\Omega y\,\mu(y)\, dy\), and all its moments \(\int_\Omega y^j \mu(y)\, dy\), j = 2, 3, …, are Lipschitz in L 2 by Example 2.1. Then any F k (resp., G k) depending on μ only via these quantities satisfies (2.3) (resp., (2.4)) if it is Lipschitz with respect to them uniformly with respect to x.

Example 2.4

Convex Hamiltonians. The usual Hamiltonians in MFGs are those arising from classical Calculus of Variations, e.g., \(H _k(x,p)= b_k(x)(c_k + |p|{ }^2)^{\beta _k/2}\), which satisfies the assumption (2.2) if \(b_k\in C(\overline \Omega )\) and either c k > 0 or c k = 0 and β k ≥ 2.

A related class of Hamiltonians are those of Bellman type associated to nonlinear systems, affine in the control \(\alpha \in {\mathbb R}^d\),

$$\displaystyle \begin{aligned} H _k(x,p):=\sup_\alpha\{ -(f_k(x)+g_k(x)\alpha)\cdot p -L_k(x,\alpha)\} = -f_k(x)\cdot p + L_k^*\left(x, -g_k(x)^Tp\right) , \end{aligned} $$
(2.16)

where f k is a Lipschitz vector field, g k a Lipschitz square matrix, L k(x, α) is the running cost of using the control α (adding to F k(x, m) in the cost functional of a representative player), and \(L_k^*(x, \cdot )\) is its convex conjugate with respect to α. In this case one can check the assumption (2.2) on an explicit expression of \(L_k^*\). For instance, if L k(x, α) = |α|γγ then

$$\displaystyle \begin{aligned} H _k(x,p)= -f_k(x)\cdot p + \frac{\gamma -1}{\gamma}|g_k(x)^Tp|{}^{\gamma/(\gamma -1)} , \end{aligned}$$

which satisfies (2.2) if γ ≤ 2.
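For completeness, the last expression comes from an explicit computation of the convex conjugate: for every \(q\in {\mathbb R}^d\) and γ > 1,

$$\displaystyle \begin{aligned} L_k^*(x, q) = \sup_{\alpha\in{\mathbb R}^d}\left\{ q\cdot\alpha - \frac{|\alpha|{}^\gamma}{\gamma}\right\} = \frac{\gamma -1}{\gamma}\,|q|{}^{\gamma/(\gamma -1)} , \end{aligned}$$

the supremum being attained at \(\alpha = |q|^{(2-\gamma )/(\gamma -1)}\, q\); taking \(q=-g_k(x)^Tp\) in (2.16) yields the displayed Hamiltonian.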

Example 2.5

Nonconvex Hamiltonians. Two-person 0-sum differential games give rise to the Isaacs Hamiltonians, which are defined in a way similar to (2.16) but as the inf-sup over two sets of controls. A motivation for considering these Hamiltonians in MFGs is proposed in [35]. A relevant example is the case of robust control, or nonlinear H control, studied in connection with MFGs by Bauso et al. [7] and Moon and Başar [32] (see also the references therein). In this class of problems a deterministic disturbance σ(x)β affects the control system (σ is a Lipschitz square matrix) and a worst case analysis is performed by assuming that \(\beta \in {\mathbb R}^d\) is the control of an adversary who wishes to maximise the cost functional of the representative agent; a term − δ|β|2∕2, with δ > 0 is added to the running cost to penalise the energy of the disturbance. The Hamiltonian for robust control then becomes

$$\displaystyle \begin{aligned} H_k^{(r)}(x,p) := H _k(x,p) + \inf_{\beta}\left\{-\sigma(x) \beta \cdot p + \delta |\beta|{}^2/2\right\} = H _k(x,p) - \frac{|\sigma(x)^Tp|{}^2}{2\delta}, \end{aligned} $$
(2.17)

which is the sum of the convex H k of the previous example and a concave function of p. Clearly it satisfies the condition (2.2) if and only if H k does.
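The second equality in (2.17) is an elementary unconstrained minimisation: the infimum is attained at \(\beta = \sigma (x)^Tp/\delta \), so that

$$\displaystyle \begin{aligned} \inf_{\beta\in{\mathbb R}^d}\left\{-\sigma(x) \beta \cdot p + \delta \frac{|\beta|{}^2}{2}\right\} = -\frac{|\sigma(x)^Tp|{}^2}{\delta} + \frac{|\sigma(x)^Tp|{}^2}{2\delta} = -\frac{|\sigma(x)^Tp|{}^2}{2\delta} . \end{aligned}$$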

Remark 2.1

Continuous dependence on data. Our proof of uniqueness can be adapted to show the Lipschitz dependence of solutions on some data. For instance, in Theorem 2.1 we may assume that \(\bar m(0,x)=\bar m_0(x)\) and \(\tilde m(0,x)=\tilde m_0(x)\), with \(\bar m_0, \tilde m_0 \in { \mathcal {P}_N}(\Omega )\). Then a simple variant of the proof allows us to estimate

$$\displaystyle \begin{aligned}\|\tilde m(t,\cdot) - \bar m(t,\cdot)\|{}_2^2 \leq \frac C\delta \|\tilde m_0 - \bar m_0\|{}_2^2\end{aligned} $$

where 0 < δ ≤ 1 − Ψ and C depends on the same quantities as Ψ. A similar estimate holds for \(\|D\tilde v(t,\cdot ) - D\bar v(t,\cdot )\|{ }_2^2\). Under some further assumptions on the costs F and G one can also use results on the HJB equation to obtain the continuous dependence of v itself upon the initial data m 0. More precise results on continuous dependence of solutions with respect to data will be given elsewhere.

Remark 2.2

The statement of Theorem 2.1 holds with the same proof for solutions \({\mathbb Z}^d\)-periodic in the space variable x, in the case that F k and G k are \({\mathbb Z}^d\)-periodic in x, without Neumann boundary conditions. In this case of periodic boundary conditions a uniqueness result for short T was presented by Lions in [31] for N = 1, a regularizing running cost F, and a terminal cost G independent of m(T). He used estimates in the L 1 norm for m and in the L ∞ norm for Dv, instead of the L 2 norms we used here in (2.13) and (2.15). See also [5] for the case of a single population.

Remark 2.3

The constants C H and \(\bar C_H\) in the theorem depend only on the data of (2.1) if H k and D p H k are globally Lipschitz in p, uniformly in x, for all k. In this case the smallness condition Ψ < 1 does not depend on the solutions \(\tilde v, \overline v\). In the next section we reach the same conclusion for much more general Hamiltonians H k under some mild additional conditions on the costs F k, G k.

Remark 2.4

If the volatility is different among the populations the terms Δv k,  Δm k in (2.1) are replaced, respectively, by ν k Δv k and ν k Δm k. If the constants ν k are all positive, the theorem remains true with the function Ψ now depending also on ν 1, …, ν N and minor changes in the proof. The case of volatility depending on x leads to operators of the form \(\text{trace} (\sigma _k(x)\sigma ^T_k(x)D^2v_k)\) in the HJB equations and their adjoints in the KFP equations. This can also be treated, with some additional work in the proof, if such operators are uniformly elliptic, i.e., the minimal eigenvalue of the matrix \(\sigma _k(x)\sigma ^T_k(x)\) is bounded away from 0 for \(x\in \overline \Omega \).

Remark 2.5

The C 2, β regularity of ∂Ω can be weakened in Theorem 2.1. Here we used, e.g., Theorem IV.5.3 of [26] to produce a smooth test function φ in the proof of Proposition 2.2. However, we could work instead with a weak solution of the backward heat equation, which exists, for instance, if ∂Ω ∈ C 1, β by Theorem 6.49 of [30], or if it is “piecewise smooth” by Theorem III.5.1 in [26].

3 Special Cases and Applications

The function Ψ of Theorem 2.1 may depend on the solutions \(\tilde v, \overline v\) if the Hamiltonians H k are not globally Lipschitz or they have unbounded second derivatives, because the constants \(C_H, \bar C_H\) may depend on the range of \(D\tilde v\) and \(D \overline v\). Under some further assumptions we can estimate these quantities and therefore get a uniqueness result where the function Ψ depends only on the data of the problem (2.1). The additional assumptions are

$$\displaystyle \begin{aligned} |F_k(x,\mu)| \leq C_F , \quad |G_k(x,\mu)| \leq C_G , \quad \forall \, x\in\overline\Omega, \mu\in{ \mathcal{P}_N}(\Omega) , k=1,\dots,N, \end{aligned} $$
(3.1)
$$\displaystyle \begin{aligned} |H_k(x,p)| \leq \alpha (1+|p|{}^2) , \quad |D_pH_k(x,p)|(1+|p|)\leq \alpha (1+|p|{}^2) ,\quad \forall \, x , p , k, \end{aligned} $$
(3.2)

and x → G(x, μ) of class C 2, for all \(\mu \in \mathcal {P}_N(\Omega )\), with

$$\displaystyle \begin{aligned} \|DG_k(\cdot,\mu)\|{}_\infty + \|D^2 G_k(\cdot,\mu)\|{}_\infty \leq C^{\prime}_G , \forall \, k . \end{aligned} $$
(3.3)

Corollary 3.1

Assume (2.2), (2.3), (2.4), \(m_0\in { \mathcal {P}_N}(\Omega )\) , (3.1), (3.2), and (3.3). Then there exists \(\overline T>0\) such that for all \(T\in (0, \bar T]\) there can be at most one classical solution of (2.1).

Proof

By Assumptions (3.1) and (3.2) the functions ± (C G + (T − t)(C F + α)) are, respectively, a super- and a subsolution of the HJB equation in (2.1) with homogeneous Neumann condition and terminal condition G k, for any k and m. Then the Comparison Principle in the Appendix gives for any solution of (2.1) the estimate

$$\displaystyle \begin{aligned} |v_k(t,x)| \leq C_G+T(C_F+\alpha) ,\quad \forall\, (t,x)\in [0,T]\times \overline\Omega , k=1,\dots,N . \end{aligned}$$

Now we can use an estimate of Theorem V.7.2, p. 486 of [26], stating that there is a constant K, depending only on \(\max |v_k|, \alpha , C^{\prime }_G\), and  Ω, such that

$$\displaystyle \begin{aligned} |Dv_k(t,x)| \leq K ,\quad \forall\, (t,x)\in [0,T]\times \overline\Omega , k=1,\dots,N . \end{aligned}$$

Then the constant C H in (2.5) is bounded by \(C^{\prime }_H:=\alpha (1+K^2)/(1+K)\), and \(\bar C_H\) defined by (2.6) can be estimated by

$$\displaystyle \begin{aligned} \bar C^{\prime}_H := \max_{k= 1,\dots, N} \sup_{x\in\Omega,\, |p|, |q|\leq K} \frac{|D_pH_k(x,p)-D_pH_k(x,q)|}{|p-q|} . \end{aligned}$$

Now Theorem 2.1 gives the conclusion. □

Remark 3.1

The constant \(\bar T\) in the Corollary depends only on L F, L G, N, α, C F, C G, \(C^{\prime }_G\), \(\max_k \|m_{0,k}\|_\infty\), Ω and the constants \(C^{\prime }_H, \bar C^{\prime }_H\) built in the proof. A similar result holds if, instead of T small, we assume L F and L G suitably small.

Example 3.1

Costs satisfying the assumptions. The nonlocal costs F k and G k of Example 2.1 satisfy Assumption (3.1) if, for instance, \(K, \bar K\), and g i are bounded and F o is continuous.

The Assumption (3.3) is verified if \(g_1, g_2\in C^2(\overline \Omega )\) and \(|D_x\bar K(x,y)|+|D^2_x\bar K(x,y)|\leq C\) for all x, y.

For the local cost F k of Example 2.2, (3.1) holds if \(F^l_k\) is bounded.

3.1 Well-Posedness of Segregation Models

Next we combine this uniqueness result with an existence theorem for models of urban settlements and residential choice proposed in [1]. We take for simplicity

$$\displaystyle \begin{aligned} N=2 , \quad G_k\equiv 0 ,\quad H_k(x,p)=h_k(x, |p|) .\end{aligned} $$
(3.4)

We endow \(\mathcal {P}_2(\Omega )\) with the Kantorovitch-Rubinstein distance and strengthen condition (3.1) to

$$\displaystyle \begin{aligned} (F_1, F_2) : \overline\Omega\times \mathcal{P}_2(\Omega)\to {\mathbb R}^2 \; \mbox{ continuous and with bounded range in } C^{1,\beta}(\overline\Omega),\end{aligned} $$
(3.5)

for some β > 0. We also assume a compatibility condition and further regularity on m 0:

$$\displaystyle \begin{aligned} \partial _n m_{0,k} = 0 \; \mbox{ on } \partial\Omega , \quad m_{0,k}\in C^{2,\beta}(\overline\Omega) , \quad k=1, 2 .\end{aligned} $$
(3.6)

Corollary 3.2

Assume (2.2), (2.3), (3.2), (3.4), (3.5), (3.6), and \(H_k\in C^1(\overline \Omega \times {\mathbb R}^d)\) . Then there exists \(\overline T>0\) such that for all \(T\in (0, \bar T]\) there exists a unique classical solution of (2.1).

Proof

The existence of a solution (for any T) follows from Theorem 12 of [1]. Let us only note that, by (3.4), \(D_p H_k(x, p) = \partial_{|p|} h_k(x, |p|)\, p/|p|\), and then the compatibility condition in (3.6) and the Neumann condition for v k imply also the compatibility condition

$$\displaystyle \begin{aligned} \partial_n m_{0,k} + m_{0,k} D_p H_k(x, Dv_k(0,x)) \cdot n = 0 \quad \forall \, x\in\partial \Omega .\end{aligned} $$
(3.7)

The uniqueness of the solution for small T follows from Corollary 3.1. □

Remark 3.2

Here the constant \(\bar T\) depends on L F, α, C F, \(\max_k \|m_{0,k}\|_\infty\), Ω, and the constants \(C^{\prime }_H, \bar C^{\prime }_H\) built in the proof of Corollary 3.1. The solution m and Dv depend in a Lipschitz way on the initial condition m 0, as explained in Remark 2.1.

Example 3.2

Costs of Schelling type. Let \(K_k : \overline \Omega \times \overline \Omega \to {\mathbb R}\) be Lipschitz and such that, for some neighborhood U(x) of x, K k(x, y) = 1 for y ∈ U(x) and K k(x, y) = 0 for y outside a small neighborhood of U(x). Then

$$\displaystyle \begin{aligned} N_k(x,\mu_k ) := {\int_{\Omega} K_k(x,y) \mu_k(y) dy} \end{aligned}$$

represents the amount of population k around x. The cost functional for the k-th population introduced in [1] and inspired by the studies on segregation of Schelling [34] is of the form

$$\displaystyle \begin{aligned} F_k(x,\mu_1,\mu_2 ) :=\left(\frac{N_k(x,\mu_k)}{N_k(x,\mu_k) + N_{3-k}(x,\mu_{3-k}) + \eta} - a_k\right)^- ,\end{aligned} $$

where (⋅) − denotes the negative part and η > 0 is very small. It means that if the ratio of the k-th population with respect to the total population in the neighborhood of x is above the threshold a k, then a representative agent of this population is happy because his cost is 0, whereas below the threshold the agent incurs a cost and therefore wants to move away from the neighborhood. These costs fall within Example 3.1 and satisfy (2.3) and (3.1). Moreover \(F_k : \Omega \times \mathcal {P}_2(\Omega )\to {\mathbb R}\) is Lipschitz.

To meet the assumptions of Corollary 3.2 we assume the kernels K k are of class C 2 in x and we approximate the negative part (⋅) − with a smooth function, e.g.,

$$\displaystyle \begin{aligned} \varphi_\varepsilon (r) := \frac{\sqrt{r^2 + \varepsilon^2} - r}2 ,\end{aligned} $$

for a small ε > 0. Then the cost functionals

$$\displaystyle \begin{aligned} F_k^\varepsilon(x,\mu_1,\mu_2 ) :=\varphi_\varepsilon \left(\frac{N_k(x,\mu_k)}{N_k(x,\mu_k) + N_{3-k}(x,\mu_{3-k}) + \eta} - a_k\right)\end{aligned} $$

satisfy also (3.5).
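To make the construction concrete, here is a minimal numerical sketch (not taken from [1]) of the smoothed cost \(F_k^\varepsilon \) on a uniform grid of Ω = [0, 1]; the kernel, the threshold a k, η, ε and the two densities are hypothetical choices made only for illustration, and the indicator-type kernel used below is the non-Lipschitz limit of the kernels K k described above.

```python
import numpy as np

# uniform grid of Omega = [0, 1]
x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]

def N_k(mu, radius=0.1):
    # N_k(x, mu_k) = int_Omega K_k(x, y) mu_k(y) dy at every grid point,
    # with the (non-Lipschitz) kernel K_k(x, y) = 1 for |y - x| <= radius, 0 otherwise
    return np.array([np.sum((np.abs(x - xi) <= radius) * mu) * dx for xi in x])

def phi_eps(r, eps=1e-2):
    # smooth approximation of the negative part (r)^-
    return (np.sqrt(r ** 2 + eps ** 2) - r) / 2.0

def F_eps(mu_k, mu_other, a_k=0.4, eta=1e-3, radius=0.1, eps=1e-2):
    # smoothed Schelling-type cost of population k given the other population's density
    n_k, n_other = N_k(mu_k, radius), N_k(mu_other, radius)
    return phi_eps(n_k / (n_k + n_other + eta) - a_k, eps)

# two hypothetical population densities concentrated at different locations
mu1 = np.exp(-((x - 0.3) ** 2) / 0.02); mu1 /= mu1.sum() * dx
mu2 = np.exp(-((x - 0.7) ** 2) / 0.02); mu2 /= mu2.sum() * dx
print(F_eps(mu1, mu2).max(), F_eps(mu2, mu1).max())
```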

Example 3.3

Hamiltonians. Typical examples are either H k(x, p) = b k(x)|p|2, with \(b_k\in C(\overline \Omega )\), or

$$\displaystyle \begin{aligned} H _k(x,p)=b_k(x)(1+|p|{}^2)^{\beta_k/2} ,\quad 0<\beta_k\leq 2 .\end{aligned} $$

They satisfy (2.2) and (3.2), moreover they are in \(C^1(\overline \Omega \times {\mathbb R}^d)\) if \(b_k\in C^1(\overline \Omega )\).
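For instance, for the second family of Hamiltonians the growth conditions (3.2) can be checked directly: \(D_pH_k(x,p)= \beta_k b_k(x)(1+|p|^2)^{\beta_k/2-1}p\) and \(|p|(1+|p|)\leq 2(1+|p|^2)\), so

$$\displaystyle \begin{aligned} |D_pH_k(x,p)|\,(1+|p|) \leq 2\beta_k \|b_k\|{}_\infty (1+|p|{}^2)^{\beta_k/2} \leq 2\beta_k \|b_k\|{}_\infty (1+|p|{}^2) \quad \mbox{if } \beta_k\leq 2 , \end{aligned}$$

while \(|H_k(x,p)|\leq \|b_k\|_\infty (1+|p|^2)\); hence (3.2) holds with \(\alpha = \max_k \max\{\|b_k\|_\infty , 2\beta_k \|b_k\|_\infty\}\).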

Remark 3.3

In the last Corollary 3.2 the simplifying assumption G k ≡ 0 can be dropped and replaced with \(G_k : \overline \Omega \times \mathcal {P}_2(\Omega )\to {\mathbb R}\) continuous, with bounded range in \(C^{2,\beta }(\overline \Omega )\), and satisfying (2.4). Then (3.3) holds and the constant \(\bar T\) depends also on L G, C G, and \(C^{\prime }_G\). Examples of such terminal costs can be given along the lines of Examples 2.1, 3.1, and 3.2.

3.2 Well-Posedness of Robust Mean Field Games

For simplicity we limit ourselves to a single population of agents, so N = 1 and we drop the subscripts k. The representative agent has the dynamics in \({\mathbb R}^d\)

$$\displaystyle \begin{aligned} dX_s = \left(f(X_s)+g(X_s)\alpha_s + \sigma(X_s) \beta_s\right) ds + dW_s , \end{aligned}$$

where f is a C 1 vector field in \(\overline \Omega \), g and σ are C 1 scalar functions in \(\overline \Omega \), W s is a d-dimensional Brownian motion, α s, β s take values in \({\mathbb R}^d\) and are, respectively, the control of the agent and a disturbance affecting the system. The cost functional is (for δ > 0)

$$\displaystyle \begin{aligned} \mathbb E\left[ \int_0^T \left(F(X_s, m(s, \cdot)) + \frac{|\alpha_s|{}^2}2 -\delta \frac{|\beta_s|{}^2}2 \right) \, ds +G(X_T, m(T,\cdot)) \right] \end{aligned}$$

that the agent wants to minimise whereas the disturbance, modeled as a second player in a 2-person 0-sum game, wants to maximise. This leads to the Hamiltonian

$$\displaystyle \begin{aligned} H (x,p)= -f(x)\cdot p + g^2(x)\frac{|p|{}^2}2 - \sigma^2(x)\frac{|p|{}^2}{2\delta} . \end{aligned} $$
(3.8)
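Indeed, since the expression below is additive in α and β, the sup and the inf decouple and can be computed separately, the minimisation in β being the same as in (2.17):

$$\displaystyle \begin{aligned} H(x,p)=\sup_{\alpha\in{\mathbb R}^d}\,\inf_{\beta\in{\mathbb R}^d}\left\{ -\left(f(x)+g(x)\alpha+\sigma(x)\beta\right)\cdot p -\frac{|\alpha|{}^2}{2} + \delta\frac{|\beta|{}^2}{2}\right\} , \end{aligned}$$

with the supremum attained at α = −g(x)p and the infimum at β = σ(x)p∕δ, which gives the three terms in (3.8).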

Note that here g(x) and σ(x) are scalars, different from Examples 2.4 and 2.5. On the costs we assume

$$\displaystyle \begin{aligned} F, \, G : \overline\Omega\times \mathcal{P}_1(\Omega)\to {\mathbb R} \; \mbox{ continuous with bounded range, resp., in } C^{1,\beta}(\overline\Omega) \mbox{ and } C^{2,\beta}(\overline\Omega) \end{aligned} $$
(3.9)

for some β > 0. The compatibility condition and regularity on m 0 now are

$$\displaystyle \begin{aligned} \partial _n m_{0} - m_0 f\cdot n= 0 \; \mbox{ on } \partial\Omega , \qquad m_{0}\in C^{2,\beta}(\overline\Omega) . \end{aligned} $$
(3.10)

Corollary 3.3

Assume N = 1 with the Hamiltonian defined by (3.8), (2.3), (2.4), (3.9), and (3.10). Then for all T > 0 there is a classical solution of (2.1), and there exists \(\overline T>0\) such that for all \(T\in (0, \bar T]\) the solution is unique.

Proof

The existence of a solution follows from Theorem 12 of [1]. In fact, \(H\in C^1(\overline \Omega \times {\mathbb R}^d)\) and it has quadratic growth. Moreover

$$\displaystyle \begin{aligned}D_pH(x,p)=-f(x) + g^2(x)p - \frac{\sigma^2(x)}{\delta}p , \end{aligned}$$

and then the compatibility condition in (3.10) and the Neumann condition for v imply again the compatibility condition (3.7).

The uniqueness of the solution for small T follows from Corollary 3.1, since H satisfies also (2.2). □

Remark 3.4

Also here the solution m and Dv depend in a Lipschitz way on the initial condition m 0, as explained in Remark 2.1.

Remark 3.5

Our example of robust MFG is different from the one in [7]. In that paper the state space is \(\Omega ={\mathbb R}\), one-dimensional without boundary, the control system is linear in the state X s, and the volatility is σX s instead of 1, for some positive constant σ, so the parabolic operators in the HJB and KFP equations of (2.1) are degenerate at the origin. The well-posedness of the MFG system of PDEs in [7] is an open problem.