Abstract
A standard assumption in mean-field game (MFG) theory is that the coupling between the Hamilton–Jacobi equation and the transport equation is monotonically non-decreasing in the density of the population. In many cases, this assumption implies the existence and uniqueness of solutions. Here, we drop that assumption and construct explicit solutions for one-dimensional MFGs. These solutions exhibit phenomena not present in monotonically increasing MFGs: low regularity, non-uniqueness, and the formation of regions with no agents.
1 Introduction
Mean-field game (MFG) theory [3, 24, 28, 39] describes noncooperative differential games with infinitely many identical players. These games were introduced by Lasry and Lions [36,37,38] and, independently around the same time, by Huang et al. [34, 35]. Often, MFGs are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. A standard example is the stationary, one-dimensional, first-order MFG:
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}+V(x)=g(m)+{\overline{H}},\\ \left( m\left( u_x+p\right) \right) _x=0, \end{array}\right. } \end{aligned}$$(1.1)
with its elliptic regularization,
$$\begin{aligned} {\left\{ \begin{array}{ll} -\epsilon u_{xx}+\frac{(u_x+p)^2}{2}+V(x)=g(m)+{\overline{H}},\\ -\epsilon m_{xx}+\left( m\left( u_x+p\right) \right) _x=0, \end{array}\right. } \end{aligned}$$(1.2)
where \(\epsilon >0\). Here, p is a fixed real number and the unknowns are the constant \({\overline{H}}\) and the functions u and m. The function g is \(C^\infty \) on \({\mathbb {R}}^+\). To simplify the presentation, we consider the periodic case and work in the one-dimensional torus, \({\mathbb {T}}\). Accordingly, \(V : {\mathbb {T}}\rightarrow {\mathbb {R}}\) is a \(C^\infty \) potential. We search for periodic solutions, \(u,m :{\mathbb {T}}\rightarrow {\mathbb {R}}\). Here, we examine this problem and attempt to understand its features in terms of the monotonicity properties of g.
A standard assumption in MFGs is that g is increasing. Heuristically, this assumption means that agents prefer sparsely populated areas. In this case, the existence and uniqueness of smooth solutions to (1.1) is well understood for stationary problems [25,26,27, 41], weakly coupled MFG systems [19], the obstacle MFG problem [18], and extended MFGs [20]. In the time-dependent setting, similar results are obtained in [21, 23, 31] for standard MFGs and in [22] for forward-forward problems. The theory of weak solutions is also well developed for first-order and second-order problems (see [4, 5, 7] and [6, 14, 42, 43], respectively). Congestion problems (see [12, 15, 30, 32]) are also of interest, and our results extend to them straightforwardly [16]. With suitable modifications, our methods also extend to non-local [40] and radially symmetric MFGs [13]. Extensions to multi-population problems were considered in [1, 9, 11].
The case of a g that is not monotonically increasing is relevant: if g is decreasing, agents prefer clustering in high-density areas. The case where g first decreases and then increases is also natural; here, agents have a preferred density given by the minimum of g. However, little is known about the properties of (1.1) when g is not increasing. One of the few known cases is a second-order MFG with \(g(m)=-\ln m\) and a quadratic cost. In this case, due to the particular structure of the equations, there are explicit solutions, see [33, 39]. The elliptic case was studied in [10].
A triplet, \((u,m,{\overline{H}}),\) solves (1.1) if
-
(i)
u is a Lipschitz viscosity solution of the first equation in (1.1);
-
(ii)
m is a probability density; that is,
$$\begin{aligned} m\geqslant 0,\quad \int \limits _{{\mathbb {T}}} m =1; \end{aligned}$$
-
(iii)
m is a weak (distributional) solution of the second equation in (1.1).
Because (1.1) is invariant under addition of constants to u, we assume that \(u(0)=0\). Here, u is Lipschitz continuous. However, m can be discontinuous. In this case, viscosity solutions of the first equation in (1.1) are interpreted as discontinuous viscosity solutions; see, for example, [2] and the discussion in Sect. 6.
Our problem is one-dimensional, and the Hamiltonian is convex. If u is a piecewise \(C^1\) function and m is continuous, then u is a viscosity solution if the following conditions hold:
-
(a)
u solves the equation at the points where it is \(C^1\) and m is continuous;
-
(b)
\(\lim \limits _{x\rightarrow x_0^-}u_x(x) \geqslant \lim \limits _{x\rightarrow x_0^+}u_x(x)\) at points of discontinuity of \(u_x\).
When g is not increasing, (1.1) may not admit a continuous m. Solutions must, therefore, be considered in the framework of discontinuous viscosity solutions. In this case, the above characterization of one-dimensional viscosity solutions is not valid, and (1.1) admits a large family of discontinuous viscosity solutions (see Sect. 6). On the other hand, solutions that satisfy the above conditions [(a) and (b)] have nice structural properties that we discuss in this paper. Furthermore, in their analysis, we see the appearance of discontinuities in m, which in turn motivates the study of discontinuous viscosity solutions. Overall, these conditions seem to be good selection criteria for discontinuous solutions of (1.1).
We call solutions that satisfy conditions (a) and (b) semiconcave. In this paper, we always consider semiconcave solutions except in Sect. 6, where we discuss general discontinuous viscosity solutions.
Our goal is to solve (1.1) explicitly and to understand the qualitative behavior of solutions. There are few MFGs for which explicit or semi-explicit solutions can be computed. For stationary problems, a number of explicit examples can be found in [24] and an interesting Hopf–Cole-type formula was derived in [8]. For the time-dependent case, the analysis of one-dimensional MFGs was performed in [17, 29]. To this end, in Sect. 2, we reformulate (1.1) in terms of the current,
$$\begin{aligned} j:=m\left( u_x+p\right) . \end{aligned}$$(1.3)
From the second equation in (1.1), j is constant. Thus, the current becomes the main parameter in our analysis, and we consider p as a part of the solution for a given current. Furthermore, in Sects. 9 and 11 we analyze the dependence of p on j and rephrase our results in terms of p, see Proposition 11.2. This change of viewpoint is motivated by the fact that (1.1) and (1.2) are substantially simpler after the reduction (1.3).
While we focus on non-increasing MFGs, our methods are also valid for increasing MFGs. To illustrate and contrast these two cases, we begin our analysis in Sect. 3 by addressing the latter. For \(j>0\), we show the existence of a unique smooth solution. However, for \(j=0\), we establish the existence of non-smooth solutions and present examples where uniqueness does not hold. While the non-uniqueness of solutions of the Hamilton–Jacobi equation for a fixed m is well known, our example is, we believe, the first one in the context of mean-field games. The non-uniqueness is due to the existence of regions where m vanishes; otherwise, the solution is unique.
In Sect. 4, we consider the elliptic regularization of monotone MFGs. We establish a new variational principle that gives the existence and uniqueness of smooth solutions. Moreover, we address the vanishing viscosity problem using \(\Gamma \)-convergence.
In Sect. 5, we study semiconcave solutions of (1.1) for non-increasing g. In this case, if \(j\ne 0\), then \(m>0\). However, for certain values of j, (1.1) does not have continuous solutions. In contrast, if j is large enough, (1.1) has a unique smooth solution. Moreover, if V has a single maximum point, there exists a unique solution of (1.1) for each \(j>0\). If V has multiple maxima, there are multiple solutions. If \(j=0\), the behavior of (1.1) is more complex, and m can be discontinuous or vanish. Additionally, we uncover an interesting phenomenon that we call an unhappiness trap. It turns out that, for solutions with a low current (low mobility), agents prefer to be at a worse place but with more agents; that is, the density of the population is larger where the potential, V, is small. This means that the focusing effect modeled by a non-increasing g prevails over the spatial preferences of the agents. Furthermore, for solutions with a high current (high mobility), agents end up accumulating at better places; that is, the density of the population is high where the potential is large. This means that the high mobility of the agents allows them to choose a better location without giving up a large density of agents around them. For solutions with an intermediate current level, the situation is mixed. See Propositions 5.1 and 5.4 and the discussion afterward for the details.
Results somewhat similar to our existence and nonexistence results for smooth solutions are obtained in [10]. There, the author considers elliptic MFGs with a possibly decreasing g that has power-like growth with exponent \(\alpha \). The author proves that (i) if \(\alpha \) is sufficiently small, the system admits smooth solutions; (ii) for intermediate values of \(\alpha \), smooth solutions exist provided an additional smallness assumption on g holds; and (iii) for \(\alpha \) large enough, smooth solutions generically do not exist. As we mentioned earlier, when g is non-increasing, agents tend to accumulate. Hence, smooth solutions are obtained when there is a competing, smoothing mechanism that prevents agents from accumulating too strongly. In the model in [10], the accumulation strength is the exponent, \(\alpha \), and the smoothing mechanism is the diffusion given by the Brownian motion. Thus, in [10], the author finds the precise balance between the two competing mechanisms. In our case, we fix the accumulation strength by taking \(g(m)=-m\). Here, a high current provides a smoothing mechanism by spreading agents. In Sect. 8, we find the precise balance between the aggregation and spreading effects.
Next, in Sect. 6, we consider MFGs with a decreasing nonlinearity, g, and discuss the properties of discontinuous viscosity solutions.
Subsequently, in Sect. 7, we study the elliptic regularization of anti-monotone MFGs. There, we use calculus of variations methods to prove the existence of a solution.
In Sect. 8, we examine the regularity of solutions as a function of the current and, in Sect. 9, we study the asymptotic behavior of solutions of (1.1) as j converges to 0 and \(\infty \). Finally, in Sects. 10 and 11, we analyze the regularity of \({\overline{H}}\) in terms of j and p.
2 The Current Formulation and Regularization
Here, we discuss the current formulation of (1.1) and (1.2). After some elementary computations, we show that the current formulation of (1.2) is the Euler–Lagrange equation of a suitable functional.
2.1 Current Formulation
Let j be given by (1.3). From the second equation in (1.1), j is constant. We split our analysis into the cases, \(j\ne 0\) and \(j=0\).
If \(j\ne 0\), \(m(x) \ne 0\) for all \(x \in {\mathbb {T}}\) and \(u_x+p=j/m\). Thus, (1.1) can be written as
$$\begin{aligned} {\left\{ \begin{array}{ll} F_j(m)={\overline{H}}-V(x),\\ u_x+p=\frac{j}{m}, \end{array}\right. } \end{aligned}$$(2.1)
where \(F_j(m)= \frac{j^2}{2m^2}-g(m)\). For each x, the first equation in (2.1) is an algebraic equation for m. If g is increasing and \(g(+\infty )=+\infty \), for each \(x \in {\mathbb {T}}\) and \({\overline{H}}\in {\mathbb {R}},\) there exists a unique solution. In contrast, if g is not increasing, there may exist multiple solutions, as we discuss later.
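For an increasing g with \(g(+\infty )=+\infty \), the function \(F_j\) decreases strictly from \(+\infty \) to \(-\infty \) on \((0,+\infty )\), so the algebraic equation for m can be solved pointwise by bisection. A minimal numerical sketch (the choices \(g(m)=m\), \(j=1\), \({\overline{H}}=2\), and the potential are illustrative, not prescribed by the text):

```python
import math

def F(t, j):
    # F_j(t) = j^2/(2 t^2) - g(t) with the monotone choice g(t) = t;
    # strictly decreasing on (0, +oo), from +oo down to -oo
    return j**2 / (2 * t**2) - t

def solve_m(x, j, Hbar, V, lo=1e-12, hi=1e6):
    # bisection for the unique root m of F_j(m) = Hbar - V(x)
    target = Hbar - V(x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid, j) > target:
            lo = mid      # F(mid) too large: the root lies at larger m
        else:
            hi = mid
    return 0.5 * (lo + hi)

V = lambda x: 0.5 * math.sin(2 * math.pi * (x + 0.25))
m = solve_m(0.3, j=1.0, Hbar=2.0, V=V)
```

Monotonicity is what makes the root unique; for a non-increasing g, the same routine would have to track several branches, as discussed below.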
For \(j=0,\) (1.1) gives
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}+V(x)=g(m)+{\overline{H}},\\ m(u_x+p)=0. \end{array}\right. } \end{aligned}$$(2.2)
From the last equation in (2.2), either \(m=0,\) in which case u solves
$$\begin{aligned} \frac{(u_x+p)^2}{2}+V(x)=g(0)+{\overline{H}}, \end{aligned}$$
or \(m>0\) and \(g(m)+{\overline{H}}-V(x)=0\). Hence, if g is increasing or decreasing, m(x) is determined in a unique way; otherwise, multiple solutions can occur.
2.2 Elliptic Regularization
Now, we consider the elliptic MFG (1.2). From the second equation in that system, we conclude that
$$\begin{aligned} j:=m\left( u_x+p\right) -\epsilon m_x \end{aligned}$$
is constant. Thus, we solve for \(u_x\) and replace it in the first equation. Accordingly, we get
Then, using the identity
we obtain the following equation for m:
Now, let \(\Phi _j\) be such that \(\Phi _j'(m)=F_j(m)\); that is,
$$\begin{aligned} \Phi _j(m)=-\frac{j^2}{2m}-G(m), \end{aligned}$$
where \(G'(m)=g(m)\). Then, (2.4) is the Euler–Lagrange equation of the functional
under the constraint \(\int _{{\mathbb {T}}} m=1\); the constant \({\overline{H}}\) is the Lagrange multiplier for the preceding constraint.
3 First-Order Monotone MFGs
We continue our analysis by considering monotonically increasing nonlinearities, g. In the case of a nonvanishing current, solutions are smooth. However, if the current vanishes, solutions can fail to be smooth, m can vanish, and u may not be unique.
The non-smooth behavior for a generic non-decreasing nonlinearity, g, was observed in Theorem 2.8 in [38] where the authors find limits of smooth solutions of second-order MFGs as the viscosity coefficient converges to 0.
3.1 \(j\ne 0\), g Increasing
Here, in contrast to the case \(j=0\), examined later, the solutions are smooth. Elementary computations give the following result.
Proposition 3.1
Let g be monotonically increasing. Then, for every \(j\ne 0\), (1.1) has a unique smooth solution, \((u_j,m_j,{\overline{H}}_j),\) with current j. This solution is given by
$$\begin{aligned} m_j(x)=F_j^{-1}\left( {\overline{H}}_j-V(x)\right) ,\quad u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$
where \(p_j=\int \limits _{{\mathbb {T}}} \frac{j}{m_j(y)}\hbox {d}y,\ F_j(t)=\frac{j^2}{2t^2}-g(t),\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).
3.2 \(j=0\), g Increasing
To simplify the discussion and illustrate our methods, we consider (2.2) with \(g(m)=m\). The analysis is similar for other choices of an increasing function, g. Accordingly, we have
It is easy to see that \(m(x)=(V(x)-{\overline{H}})^+\) and \(\frac{(u_x+p)^2}{2}=(V(x)-{\overline{H}})^-\) for \( x \in {\mathbb {T}}\). The map \({\overline{H}}\mapsto \int \limits _{{\mathbb {T}}} (V(x)-{\overline{H}})^+ \hbox {d}x\) is decreasing (strictly decreasing at its positive values). Hence, there exists a unique number, \({\overline{H}}_{V}\), such that \(\int \limits _{{\mathbb {T}}} m(x)\hbox {d}x=1\). Moreover, \({\overline{H}}_V< \max V\) and \({\overline{H}}_V\geqslant \int _{\mathbb {T}}V-1\). The structure of the solutions then differs according to whether \({\overline{H}}_V>\min \limits _{\mathbb {T}}V\) or \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\).
Proposition 3.2
Let \({\overline{H}}_V \in {\mathbb {R}}\) be the unique number such that
$$\begin{aligned} \int \limits _{{\mathbb {T}}} \left( V(x)-{\overline{H}}_V\right) ^+ \hbox {d}x=1. \end{aligned}$$(3.2)
Then, we have that
$$\begin{aligned} m(x)=\left( V(x)-{\overline{H}}_V\right) ^+,\quad \frac{(u_x+p)^2}{2}=\left( V(x)-{\overline{H}}_V\right) ^-,\quad x\in {\mathbb {T}}. \end{aligned}$$
Furthermore, the following statements are true.
-
(i)
If \(\min \limits _{\mathbb {T}}V<{\overline{H}}_V<\max \limits _{\mathbb {T}}V\), m is non-smooth and there are regions where it vanishes. Moreover, there are \(C^1\) solutions:
$$\begin{aligned} u^\pm (x) = \pm \int _{0}^{x} \sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y - px, \end{aligned}$$(3.3)
with \(p=\pm \int \limits _{\mathbb {T}}\sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y\). Additionally, there exist Lipschitz solutions given by:
$$\begin{aligned} (u^{x_0})_x (x) = \sqrt{2(V(x)-{\overline{H}}_V)^-}\ \chi _{[0,x_0)}- \sqrt{2(V(x)-{\overline{H}}_V)^-}\ \chi _{(x_0,1)} - p^{x_0} \end{aligned}$$(3.4)
where
$$\begin{aligned} p^{x_0}=\int _{y<x_0}\sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y - \int _{y>x_0} \sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y, \end{aligned}$$and \(x_0 \in {\mathbb {T}}\) is such that \(V(x_0)<{\overline{H}}_V\).
-
(ii)
If \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\), m is smooth and positive. Moreover,
$$\begin{aligned} m(x)=V(x)-{\overline{H}}_V,\ u(x)=0,\ p=0,\ {\overline{H}}_V =\int _{{\mathbb {T}}} V-1. \end{aligned}$$
Proof
The first statement in the proposition is evident. Thus, we address (i) and (ii).
Case i It is clear that the triplets, \((u^{\pm },m,{\overline{H}}_V)\), where \(u^{\pm }\) is given by (3.3), are classical solutions of (1.1).
Furthermore, there exists \(x_0 \in {\mathbb {T}}\) such that \(V(x_0)<{\overline{H}}_V\) because \(\min \limits _{\mathbb {T}}V<{\overline{H}}_V\). Then, the triplet, \((u^{x_0},m,{\overline{H}}_V)\), where \(u^{x_0}\) is given by (3.4), is a pointwise solution of (1.1) everywhere except at \(x_0\). Moreover, \((u^{x_0})_x\) has a negative jump at \(x_0\). Thus, \(u^{x_0}\) is a viscosity solution of the Hamilton–Jacobi equation in (1.1). Accordingly, the triplet \((u^{x_0},m,{\overline{H}}_V)\) solves (1.1).
Case ii Since \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\) we get that \(m(x)=V(x)-{\overline{H}}_V\), and the rest follows readily. \(\square \)
To summarize, (3.1) has a unique, smooth solution if and only if \(u_x+p \equiv 0\) or, equivalently, \(m(x)=V(x)-{\overline{H}}_V\), where \({\overline{H}}_V\) is such that (3.2) holds. The latter holds if and only if
$$\begin{aligned} \int \limits _{{\mathbb {T}}} V(x)\hbox {d}x-\min \limits _{{\mathbb {T}}} V\leqslant 1. \end{aligned}$$(3.5)
This is the case for V with small oscillation; that is, \(\text {osc} V \leqslant 1\).
For \(A\in {\mathbb {R}}\), set \(V_{A}(x)=A \sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\) and let \((m(x,A),\ {\overline{H}}(A))\) solve (3.1) for \(V=V_{A}\). In Fig. 1, we plot m(x, A) for \(0\leqslant A\leqslant 3\). We observe that m(x, A) is smooth for small values of A and becomes non-differentiable for large A, as expected from our analysis. If \(A=2\), (3.5) does not hold. Thus, m(x, 2) is singular and we have multiple solutions, u(x, 2). In Fig. 2, we plot m(x, 2) and two distinct solutions, u(x, 2).
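These computations are straightforward to reproduce. A hedged numerical sketch (the grid size and bisection bracket are illustrative choices): for each A, bisect for \({\overline{H}}_V\) using the monotonicity of \({\overline{H}}\mapsto \int _{{\mathbb {T}}}(V-{\overline{H}})^+\), then set \(m=(V-{\overline{H}}_V)^+\).

```python
import math

def mass(Hbar, V, n=2000):
    # midpoint rule for the total mass  ∫_T (V(x) - Hbar)^+ dx  on [0, 1)
    return sum(max(V((k + 0.5) / n) - Hbar, 0.0) for k in range(n)) / n

def solve_Hbar(V, lo=-10.0, hi=10.0):
    # Hbar -> mass(Hbar) is non-increasing; bisect for mass = 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mass(mid, V) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def V_A(A):
    # the potential V_A(x) = A sin(2π(x + 1/4)) from the text
    return lambda x: A * math.sin(2 * math.pi * (x + 0.25))

# small oscillation (osc V = 2A <= 1): smooth regime, H_V = ∫V - 1 = -1
H_small = solve_Hbar(V_A(0.25))
# large oscillation: m = (V - H_V)^+ vanishes on part of the torus
H_large = solve_Hbar(V_A(2.0))
m_large = lambda x: max(V_A(2.0)(x) - H_large, 0.0)
```

The A values and grid resolution are arbitrary; any bracketing root solver would serve in place of the hand-rolled bisection.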
4 Monotone Elliptic Mean-Field Games
To study (1.2), we examine the variational problem determined by (2.5). As before, for concreteness, we consider the case \(g(m)=m\). In this case, (2.5) becomes
The preceding functional is convex, and, as we prove next, the direct method in the calculus of variations gives the existence of a minimizer on the set
Proposition 4.1
For each \(j\in {\mathbb {R}}\), there exists a unique minimizer, m, of \(J_\epsilon [m]\) in \({\mathcal {A}}\). Moreover, \(m>0\) and solves
for some constant \({\overline{H}}\in {\mathbb {R}}\). Accordingly, the triplet, \((u,m,{\overline{H}})\), where
is the unique smooth solution of (1.2) for \(p=\int \limits _{{\mathbb {T}}} \frac{j}{m(y)}\hbox {d}y\).
Proof
The uniqueness of a positive minimizer is a consequence of the strict convexity of \(J_\epsilon \). The existence of a nonnegative minimizer requires separate arguments for the cases \(j\ne 0\) and \(j=0\).
We first examine the case \(j\ne 0\). We begin by taking a minimizing sequence, \(m_n\in {\mathcal {A}}\). Then, there exists a constant, \(C>0,\) such that
Thus, by Morrey’s theorem, the functions \(\sqrt{m_n}\) are equi-Hölder continuous of exponent \(\frac{1}{2}\). Therefore, because \(\int m_n=1\), this sequence is equibounded and, along a subsequence, \(m_n\rightarrow m\) uniformly for some function \(m\geqslant 0\). Moreover, by Fatou’s lemma,
$$\begin{aligned} J_\epsilon [m]\leqslant \liminf \limits _{n\rightarrow \infty } J_\epsilon [m_n]; \end{aligned}$$
hence, m is a minimizer.
Suppose that \(\min m=m(x_0)=0\). Then, because \(\sqrt{m}\) is Hölder continuous, we have \(m(x)\leqslant C|x-x_0|\). However,
$$\begin{aligned} \int \limits _{{\mathbb {T}}} \frac{j^2}{2m(x)}\hbox {d}x\geqslant \int \limits _{{\mathbb {T}}} \frac{j^2}{2C|x-x_0|}\hbox {d}x \end{aligned}$$
is not finite, which is a contradiction. Thus, m is a strictly positive minimizer. Moreover, it solves the corresponding Euler–Lagrange equation.
For \(j=0\), we rewrite the Euler–Lagrange equation as
Let \(\mathcal {P}\) be the set of nonnegative functions in \(L^\infty ({\mathbb {T}})\) and consider the map \(\Xi :\mathcal {P}\rightarrow \mathcal {P}\) defined as follows. Given \(\eta \in \mathcal {P}\), we solve the PDE
where \({\overline{H}}\) satisfies the compatibility condition
and \(w:{\mathbb {T}}\rightarrow {\mathbb {R}}\) is such that \(\int e^w \hbox {d}x=1\). An elementary argument shows that w is uniformly bounded from above and from below. Next, we set \(\Xi (\eta )=e^w\). The mapping \(\Xi \) is continuous and compact. Accordingly, by Schauder’s Fixed Point Theorem, there is a fixed point, m, that solves (4.2). By the convexity of the variational problem (4.1), this fixed point is the unique solution of the Euler–Lagrange equation. \(\square \)
Next, to study the limit of (1.2) as \(\epsilon \rightarrow 0\), we investigate the \(\Gamma \)-convergence of \(J_\epsilon \).
Lemma 4.2
Let
Then, we have that
Proof
Let
Suppose that \(m,m_{\epsilon } \in \mathcal {A'}\) and \(m_{\epsilon } \rightharpoonup m\) in \(L^2({\mathbb {T}})\). Since \(W_j\) is convex, we have that
Therefore, we get
Finally, for an arbitrary \(m\in \mathcal {A}'\), we take \(m_{\epsilon }=m\) as a recovery sequence. \(\square \)
In Fig. 3, we observe numerical evidence of this \(\Gamma \)-convergence.
5 Semiconcave Viscosity Solutions in Anti-monotone Mean-Field Games
Here, we investigate MFGs with decreasing g. To simplify, we assume that \(g(m)=-m\). However, our arguments are valid for a general decreasing g. In contrast with the monotone case, m may not be unique. Furthermore, m can be discontinuous and, thus, viscosity solutions of the Hamilton–Jacobi equation in (1.1) should be interpreted in the discontinuous sense. In this section, we are interested in semiconcave discontinuous viscosity solutions; that is, solutions satisfying conditions (a) and (b) stated in the Introduction. Here, we examine existence, uniqueness, and additional properties of such solutions. In Sect. 6, we prove that these solutions are indeed discontinuous viscosity solutions.
5.1 \(j \ne 0\), g Decreasing
To simplify the presentation, we consider \(j>0\).
With \(g(m)=-m,\) (2.1) becomes
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{j^2}{2m^2}+m={\overline{H}}-V(x),\\ u_x+p=\frac{j}{m}. \end{array}\right. } \end{aligned}$$(5.1)
The minimum of \(t\mapsto j^2/2t^2+t\) is attained at \(t_{\min }=j^{2/3}\). Thus, \(j^2/2t^2+t \geqslant 3j^{2/3}/2\) for \(t>0\). Therefore, a lower bound for \({\overline{H}}\) is
$$\begin{aligned} {\overline{H}}\geqslant {\overline{H}}_j^\mathrm{cr}:=\max \limits _{{\mathbb {T}}} V+\frac{3j^{2/3}}{2}, \end{aligned}$$(5.2)
where the superscript \(^\mathrm{cr}\) stands for critical.
The function \(t \mapsto j^2/2t^2+t\) is decreasing on the interval \((0,t_{\min })\) and increasing on the interval \((t_{\min },+\infty )\). For any \({\overline{H}}\) satisfying (5.2), let \(m_{{\overline{H}}}^{-}\) and \(m_{{\overline{H}}}^{+}\) be the solutions of
$$\begin{aligned} \frac{j^2}{2t^2}+t={\overline{H}}-V(x), \end{aligned}$$(5.3)
with \(0\leqslant m_{{\overline{H}}}^-(x)\leqslant t_{\min }\leqslant m_{{\overline{H}}}^+(x)\). Due to (5.2), \(m_{{\overline{H}}}^{-}\) and \(m_{{\overline{H}}}^{+}\) are well defined. Furthermore, if \((u,m , {\overline{H}})\) solves (1.1), then m(x) agrees with either \(m^+_{{\overline{H}}}(x)\) or \(m^-_{{\overline{H}}}(x)\), almost everywhere in \({\mathbb {T}}\).
Let \(m_{j}^-:=m_{{\overline{H}}^\mathrm{cr}_j}^{-}\) and \(m_{j}^+:=m_{{\overline{H}}^\mathrm{cr}_j}^{+}\). Note that \(m_{j}^-(x)\leqslant m_{j}^+(x)\) for all \(x \in {\mathbb {T}}\), and the equality holds only at the maximum points of V. Hence, \(m_{j}^-(x)< m_{j}^+(x)\) on a set of positive Lebesgue measure unless V is constant.
The two fundamental quantities for our analysis are
$$\begin{aligned} \alpha ^-(j)=\int \limits _{{\mathbb {T}}}m_j^-(x)\hbox {d}x\quad \text {and}\quad \alpha ^+(j)=\int \limits _{{\mathbb {T}}}m_j^+(x)\hbox {d}x. \end{aligned}$$
If V is not constant, we have
$$\begin{aligned} \alpha ^-(j)<\alpha ^+(j) \end{aligned}$$
for \(j>0\).
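Both quantities can be approximated numerically: \({\overline{H}}_j^\mathrm{cr}=\max V+\frac{3}{2}j^{2/3}\) and, for each x, \(m_j^{\pm }(x)\) are the roots of \(j^2/2t^2+t={\overline{H}}_j^\mathrm{cr}-V(x)\) on the two monotone branches of the left-hand side, each computable by bisection. A sketch (grid resolution, brackets, and the sample potential are illustrative choices):

```python
import math

def alpha_pm(j, V, n=500):
    # critical value H_j^cr = max V + (3/2) j^(2/3), attained at t_min = j^(2/3)
    tmin = j ** (2.0 / 3.0)
    Hcr = max(V(k / n) for k in range(n)) + 1.5 * tmin

    def psi(t):
        return j**2 / (2 * t**2) + t

    def root(c, lo, hi, increasing):
        # bisection for psi(t) = c on a monotone branch of psi
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if (psi(mid) < c) == increasing:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    am = ap = 0.0
    for k in range(n):
        c = Hcr - V((k + 0.5) / n)              # >= psi(tmin) = (3/2) j^(2/3)
        am += root(c, 1e-12, tmin, False) / n   # m_j^-: decreasing branch
        ap += root(c, tmin, 1e9, True) / n      # m_j^+: increasing branch
    return am, ap

V = lambda x: 0.5 * math.sin(2 * math.pi * (x + 0.25))
```

With this V, the three current levels used in the figures (\(j=0.001\), \(j=10\), \(j=0.5\)) land in Cases i, ii, and iii of Proposition 5.1, respectively.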
Proposition 5.1
Suppose that \(x=0\) is the single maximum of V. Then, for every \(j>0,\) there exists a unique number, \(p_j\), such that (1.1) has a semiconcave solution with a current level, j. Moreover, the solution of (5.1), \((u_j,m_j,{\overline{H}}_j)\), is unique and given as follows.
-
(i)
If \(\alpha ^+(j) \leqslant 1,\)
$$\begin{aligned} m_j(x)=m^{+}_{{\overline{H}}_j}(x),\quad u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$(5.4)
where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).
-
(ii)
If \(\alpha ^-(j) \geqslant 1,\)
$$\begin{aligned} m_j(x)=m^{-}_{{\overline{H}}_j}(x),\quad u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$(5.5)
where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).
-
(iii)
If \(\alpha ^-(j)< 1 < \alpha ^+(j)\), we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}\), and
$$\begin{aligned} m_j(x)=m^{-}_{j}(x)\chi _{[0,d_j)}+m^{+}_{j}(x)\chi _{[d_j,1)},\ u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$(5.6)
where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \(d_j\) is such that
$$\begin{aligned} \int \limits _{\mathbb {T}}m_j(x)\hbox {d}x=\int \limits _{0}^{d_j} m^-_{j}(x)\hbox {d}x+\int \limits _{d_j}^1 m^{+}_{j}(x)\hbox {d}x=1. \end{aligned}$$
Proof
Case i The function \(j^2/2t^2+t\) is increasing on the interval \((t_{\min },+\infty )\). Therefore, \({\overline{H}}\mapsto m_{{\overline{H}}}^+(x)\) is increasing for all x. Hence, the mapping
$$\begin{aligned} {\overline{H}}\mapsto \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{+}(x)\hbox {d}x \end{aligned}$$
is increasing. By assumption, \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j^\mathrm{cr}}^{+}(x)\hbox {d}x=\int \limits _{{\mathbb {T}}}m_j^+(x)\hbox {d}x\leqslant 1\). Therefore, there exists a unique \({\overline{H}}_j \geqslant {\overline{H}}_j^\mathrm{cr}\) such that \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j}^{+}(x)\hbox {d}x=1\). Thus, \((u_j,m_j)\) given by (5.4) is the unique solution of (1.1) with \({\overline{H}}={\overline{H}}_j\) and \(p=p_j\).
Case ii The function \(j^2/2t^2+t\) is decreasing on the interval \((0,t_{\min })\). Therefore, \(m_{{\overline{H}}}^-(x)\) is decreasing in \({\overline{H}}\) for all x. Hence, the mapping
$$\begin{aligned} {\overline{H}}\mapsto \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{-}(x)\hbox {d}x \end{aligned}$$
is decreasing. By assumption, \( \int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j^\mathrm{cr}}^{-}(x)\hbox {d}x=\int \limits _{{\mathbb {T}}}m_j^-(x)\hbox {d}x\geqslant 1. \) Thus, there exists a unique number, \({\overline{H}}_j \geqslant {\overline{H}}_j^\mathrm{cr}\), such that \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j}^{-}(x)\hbox {d}x=1\). Hence, \((u_j,m_j)\) given by (5.5) is the unique solution of (1.1) with \({\overline{H}}={\overline{H}}_j\) and \(p=p_j\).
Case iii We first show that (1.1) does not have semiconcave solutions for \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\). By contradiction, suppose that (1.1) has a semiconcave solution, \((u,m,{\overline{H}}),\) for some \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\) and \(p \in {\mathbb {R}}\). Evidently, \( m(x)=m_{{\overline{H}}}^+(x)\chi _E+m_{{\overline{H}}}^-(x)\chi _{{\mathbb {T}}\setminus E} \) for some subset \(E \subset {\mathbb {T}}\). Furthermore,
$$\begin{aligned} m_{{\overline{H}}}^{-}(x)<m_{{\overline{H}}}^{+}(x),\quad x\in {\mathbb {T}}, \end{aligned}$$
because \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\). Moreover,
$$\begin{aligned} \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{+}(x)\hbox {d}x>\alpha ^+(j)>1 \end{aligned}$$
and
$$\begin{aligned} \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{-}(x)\hbox {d}x<\alpha ^-(j)<1. \end{aligned}$$
Therefore, neither E nor \({\mathbb {T}}\setminus E\) can be empty or have zero Lebesgue measure. Because E and \({\mathbb {T}}\setminus E\) are not negligible, there exists a real number, e, such that for every \(\varepsilon >0,\)
$$\begin{aligned} |E\cap (e-\varepsilon ,e)|>0\quad \text {and}\quad |({\mathbb {T}}\setminus E)\cap (e,e+\varepsilon )|>0. \end{aligned}$$(5.7)
According to (5.7), m has a negative jump at \(x=e\); that is, \(m(e^+)-m(e^-)<0\). Hence, \(u_x=j/m-p\) has a positive jump, \(\frac{j}{m^-(e)}-\frac{j}{m^+(e)}>0\), at \(x=e\). However, derivatives of semiconcave solutions can only have negative jumps; thus, this contradiction implies \({\overline{H}}_j={\overline{H}}^\mathrm{cr}_j\).
Next, we construct \(m_j\) and \(u_j\) and determine \(p_j\). We look for a function \(m_j\) of the form
$$\begin{aligned} m_j(x)=m^{-}_{j}(x)\chi _{[0,d)}+m^{+}_{j}(x)\chi _{[d,1)}. \end{aligned}$$(5.8)
Note that (5.8) is the only possibility for \(m_j\) because \(m_j\) can switch from \(m^+_j\) to \(m^-_j\) only if there is no jump at the switching point; that is, only where \(m^+_j\) and \(m^-_j\) coincide, which happens only at the maximum points of V. Thus, by periodicity, \(m_j\) can switch from \(m^+_j\) to \(m^-_j\) only at \(x=0\) and \(x=1\).
It remains to choose \(d \in (0,1)\) such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\). Let
$$\begin{aligned} \phi (d)=\int \limits _{0}^{d} m^-_{j}(x)\hbox {d}x+\int \limits _{d}^1 m^{+}_{j}(x)\hbox {d}x. \end{aligned}$$
Because \(\phi (0)>1\) and \(\phi (1)<1\) and because \(\phi '(d)=m^-_{j}(d)-m^+_{j}(d)<0\) for \(d\in (0,1)\), there exists a unique \(d_j \in (0,1)\) such that \(\phi (d_j)=1\). The triplet defined by (5.6), \((u_j,m_j,{\overline{H}}_{j}),\) solves (1.1). \(\square \)
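For the illustrative potential \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\) used in the figures, the jump location \(d_j\) in Case iii can be approximated by bisecting the decreasing function \(\phi \). A sketch (the current level \(j=0.5\), grid sizes, and bisection brackets are illustrative):

```python
import math

j = 0.5
V = lambda x: 0.5 * math.sin(2 * math.pi * (x + 0.25))   # unique maximum at x = 0
tmin = j ** (2.0 / 3.0)
Hcr = 0.5 + 1.5 * tmin        # H_j^cr = max V + (3/2) j^(2/3); max V = 1/2 here

def branch(x, upper):
    # m_j^±(x): root of j^2/(2 t^2) + t = Hcr - V(x) on one monotone branch
    c = Hcr - V(x)
    lo, hi = (tmin, 1e9) if upper else (1e-12, tmin)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if (j**2 / (2 * mid**2) + mid < c) == upper:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def phi(d, n=200):
    # phi(d) = ∫_0^d m_j^- + ∫_d^1 m_j^+ ; strictly decreasing in d
    return sum(branch((k + 0.5) / n, upper=((k + 0.5) / n > d)) for k in range(n)) / n

# phi(0) = α+(j) > 1 and phi(1) = α-(j) < 1: bisect for phi(d_j) = 1
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if phi(mid) > 1.0:
        lo = mid
    else:
        hi = mid
d_j = 0.5 * (lo + hi)
```

At \(d_j\), the density switches from the lower to the upper branch, producing the positive jump of m (and the admissible negative jump of \(u_x\)) visible in Fig. 6.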
By the previous proposition, if V has a single maximum point then, for every current, \(j>0\), there exists a unique \(p_j\) and a unique triplet, \((u_j,m_j,{\overline{H}}_j)\), that solves (5.1) for \(p=p_j\). In contrast, as we show next, if V has multiple maxima and \(j>0\) is such that Case iii in Proposition 5.1 holds, there exist infinitely many solutions.
Proposition 5.2
Suppose that V attains a maximum at \(x=0\) and at \(x=x_0 \in (0,1)\). Let j be such that \(\alpha ^-(j)< 1 < \alpha ^+(j)\). Then, there exist infinitely many numbers, p, and pairs, (u, m), such that \((u,m,{\overline{H}}_j^\mathrm{cr})\) is a semiconcave solution of (1.1).
Proof
We look for solutions of the form
$$\begin{aligned} m_j^{d_1,d_2}(x)=m^{-}_{j}(x)\chi _{[0,d_1)\cup [x_0,d_2)}+m^{+}_{j}(x)\chi _{[d_1,x_0)\cup [d_2,1)}, \end{aligned}$$
where \(0<d_1<x_0\) and \(x_0<d_2<1\). Note that \(m_j^{d_1,d_2}\) has two discontinuity points. At these points, \(m_j^{d_1,d_2}\) has positive jumps. Hence, if we define
$$\begin{aligned} u_j^{d_1,d_2}(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j^{d_1,d_2}(y)}-p_j^{d_1,d_2}x, \end{aligned}$$
where \(p_j^{d_1,d_2}=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j^{d_1,d_2}(y)}\), the triplet \((u_j^{d_1,d_2},m_j^{d_1,d_2},{\overline{H}}_j^\mathrm{cr})\) is a semiconcave solution of (1.1) if
$$\begin{aligned} \int \limits _{{\mathbb {T}}}m_j^{d_1,d_2}(x)\hbox {d}x=1. \end{aligned}$$
To determine \(d_1\) and \(d_2\), we consider the function
$$\begin{aligned} \phi (d_1,d_2)=\int \limits _{{\mathbb {T}}}m_j^{d_1,d_2}(x)\hbox {d}x. \end{aligned}$$
We have that \(\phi (0,x_0)=\int \limits _{0}^1 m^{+}_{j}(x)\hbox {d}x>1\) and \(\phi (x_0,1)=\int \limits _{0}^1 m^{-}_{j}(x)\hbox {d}x<1\). Because \(\phi \) is continuous, there exists a pair, \((d_1,d_2)\in (0,x_0)\times (x_0,1),\) such that \(\phi (d_1,d_2)=1\). In fact, there are infinitely many such pairs: for any continuous curve, \(\gamma \subset [0,x_0]\times [x_0,1]\), connecting the points \((0,x_0)\) and \((x_0,1)\), the intermediate value theorem gives at least one pair, \((d_1,d_2) \in \gamma ,\) such that \(\phi (d_1,d_2)=1\). To each such pair corresponds a triplet \((u_j^{d_1,d_2},m_j^{d_1,d_2},{\overline{H}}_j^\mathrm{cr})\) that is a semiconcave solution of (1.1). \(\square \)
Let \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\). Because V has a single maximum, Proposition 5.1 gives that (1.1) admits a unique semiconcave solution for all values of \(j>0\). In Figs. 4, 5 and 6, we plot m for different values of j. In Fig. 4, we plot m in the low-current regime, \(j=0.001;\) that is, Case i in Proposition 5.1. As we can see, m is smooth as predicted by the proposition. In Fig. 5, we plot m in the high-current regime, \(j=10; \) that is, Case ii in Proposition 5.1. As before, we observe that m is smooth. Finally, in Fig. 6, we plot m for the intermediate-current regime, \(j=0.5\); that is, Case iii in Proposition 5.1. As we can see, m is discontinuous.
Next, we consider the potential \(V(x)=\frac{1}{2}\sin \big (4\pi \big (x+\frac{1}{8}\big )\big ) \) that has two maxima. By Proposition 5.2, we have infinitely many two-jump solutions. In Fig. 7, we plot two such solutions.
5.2 \(j = 0\), g Decreasing
Now, we examine the case when the current vanishes, and, thus, we consider the system
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}+V(x)=-m+{\overline{H}},\\ m(u_x+p)=0. \end{array}\right. } \end{aligned}$$(5.9)
Suppose that (5.9) has a solution. Because \(m \geqslant 0\), we have \({\overline{H}}-V(x)\geqslant 0\) for \(x\in {\mathbb {T}}\). Thus, \({\overline{H}}\geqslant \max \limits _{{\mathbb {T}}} V\). On the other hand,
$$\begin{aligned} 1=\int \limits _{{\mathbb {T}}} m(x)\hbox {d}x\leqslant \int \limits _{{\mathbb {T}}} \left( {\overline{H}}-V(x)\right) \hbox {d}x={\overline{H}}-\int \limits _{{\mathbb {T}}} V(x)\hbox {d}x. \end{aligned}$$
Consequently, \({\overline{H}}\geqslant 1+ \int \limits _{{\mathbb {T}}} V\). Therefore,
$$\begin{aligned} {\overline{H}}\geqslant {\overline{H}}_0:=\max \left\{ \max \limits _{{\mathbb {T}}} V,\ 1+\int \limits _{{\mathbb {T}}} V\right\} . \end{aligned}$$
It turns out that \({\overline{H}}_0\) is the only possible value for \({\overline{H}}\) as we show next.
Proposition 5.3
The MFG (5.9) does not have semiconcave solutions for \({\overline{H}}>{\overline{H}}_0\).
Proof
Suppose that \({\overline{H}}>{\overline{H}}_0\) and that the triplet \((u,m,{\overline{H}})\) is a semiconcave solution of (5.9). If \(m(x)>0\), then \(u_x(x)+p=0\) and \(m={\overline{H}}-V(x)\). If \(m(x)=0\), then
$$\begin{aligned} \frac{(u_x(x)+p)^2}{2}={\overline{H}}-V(x). \end{aligned}$$
Thus, on the set \(Z=\{x:m(x)=0\}\),
$$\begin{aligned} u_x(x)+p=\pm \sqrt{2({\overline{H}}-V(x))}. \end{aligned}$$
We have that \(\int \limits _{{\mathbb {T}}} \left( {\overline{H}}-V(x)\right) \hbox {d}x>1\). Hence, the set Z has a positive Lebesgue measure. Otherwise, \(m(x)={\overline{H}}-V(x)\) everywhere, and thus \(\int \limits _{{\mathbb {T}}}m(x)\hbox {d}x>1\). Consequently, \(u_x+p\) is either \(\sqrt{2({\overline{H}}-V(x))}\) or \(-\sqrt{2({\overline{H}}-V(x))}\) on Z. Suppose that \(u_x(x)+p\) takes the value \(-\sqrt{2({\overline{H}}-V(x))}\) at some point \(x \in {\mathbb {T}}\). Without loss of generality, we can assume that \(u_x(0)+p=-\sqrt{2({\overline{H}}-V(0))}\). Let
$$\begin{aligned} e=\sup \left\{ x\in (0,1]:\ u_y(y)+p=-\sqrt{2({\overline{H}}-V(y))} \text { for a.e. } y\in (0,x)\right\} . \end{aligned}$$
Then, at \(x=e\), the function \(u_x+p\) has a jump of size \(\sqrt{2({\overline{H}}-V(e))}\) or \(2\sqrt{2({\overline{H}}-V(e))}\). However, this is impossible because \(u_x\) is a semiconcave solution, and it cannot have positive jumps. Therefore, \(u_x(x)+p\) takes only the values \(\sqrt{2({\overline{H}}-V(x))}\) and 0. But then, \(u_x\) must have a positive jump from 0 to \(\sqrt{2({\overline{H}}-V(x))}\) at some point, which also contradicts the regularity property. \(\square \)
Now, we construct solutions to (5.9) with \({\overline{H}}={\overline{H}}_{0}\). It turns out that if V has a large oscillation, then (5.9) has infinitely many semiconcave solutions.
Proposition 5.4
We have that
-
(i)
if \(1+\int \limits _{{\mathbb {T}}}V \geqslant \max \limits _{{\mathbb {T}}} V\), then the triplet \((u_0,m_0,{\overline{H}}_0)\) with
$$\begin{aligned} m_0(x)={\overline{H}}_0-V(x),\ u_0(x)=0, \end{aligned}$$(5.10)
solves (5.9) in the classical sense for \(p=0\);
-
(ii)
if \(\max \limits _{{\mathbb {T}}} V > 1+\int \limits _{{\mathbb {T}}}V\), define
$$\begin{aligned} m_0^{d_1,d_2}(x)={\left\{ \begin{array}{ll} {\overline{H}}_{0}-V(x),\ x\in [d_1,d_2] ,\\ 0,\ x\in {\mathbb {T}}\setminus [d_1,d_2], \end{array}\right. } \end{aligned}$$(5.11)and
$$\begin{aligned} u_0^{d_1,d_2}(x)=\int \limits _{0}^{x} (u_0^{d_1,d_2})_x(y) \hbox {d}y,\quad x\in {\mathbb {T}}, \end{aligned}$$(5.12)where
$$\begin{aligned} (u_0^{d_1,d_2})_x(x)={\left\{ \begin{array}{ll} \sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2},\ x\in [0,d_1) ,\\ -p_0^{d_1,d_2},\ x\in [d_1,d_2],\\ -\sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2},\ x\in (d_2,1], \end{array}\right. } \end{aligned}$$and \(p_0^{d_1,d_2}=\int \limits _{0}^{d_1} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x-\int \limits _{d_2}^{1} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x\). Then, for any pair, \((d_1,d_2)\), such that
$$\begin{aligned} \int \limits _{d_1}^{d_2} ({\overline{H}}_{0}-V(x))\hbox {d}x=1, \end{aligned}$$(5.13)the triplet \((u_0^{d_1,d_2},m_0^{d_1,d_2},{\overline{H}}_0)\) is a semiconcave solution for (5.9) for \(p=p_0^{d_1,d_2}\). Furthermore, there exist infinitely many pairs, \((d_1,d_2)\), such that (5.13) holds.
Proof
Case i In this case, \({\overline{H}}_0=1+\int \limits _{{\mathbb {T}}} V(x)\hbox {d}x\) and straightforward computations show that (5.10) defines a classical solution of (5.9).
Case ii In this case, we have that \({\overline{H}}_{0}=\max \limits _{{\mathbb {T}}} V\) and that \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1\). Without loss of generality, we assume that 0 is a point of maximum for V.
Note that \((u_0^{d_1,d_2})_x\) has only negative jumps and \(u_0^{d_1,d_2}\) satisfies (5.9) almost everywhere. Thus, the triplet \((u_0^{d_1,d_2},m_0^{d_1,d_2},{\overline{H}}_0)\) is a semiconcave solution of (5.9) if \(\int \limits _{{\mathbb {T}}}m_0^{d_1,d_2}(x)\hbox {d}x=1\). The latter is equivalent to (5.13). Since \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1,\) we can find infinitely many such pairs. Finally, we determine \(p_0^{d_1,d_2}\) from the identity \(\int \limits _{{\mathbb {T}}} (u_0^{d_1,d_2})_x(x)\hbox {d}x=0\), which ensures the periodicity of \(u_0^{d_1,d_2}\). \(\square \)
Figures 8 and 9 show the solutions of (5.9) for \(V(x)=5\sin \big (2\pi \big (x+\frac{1}{4}\big ) \big ),\ x \in {\mathbb {T}}\).
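For this example potential, condition (5.13) can be checked numerically. The following standard-library Python sketch (the helper names `mass`, `d2_for`, and `p0` are ours, not from the paper) computes, for several choices of \(d_1\), the matching \(d_2\) and the corresponding \(p_0^{d_1,d_2}\); it illustrates that there are infinitely many admissible pairs and that \(d_1\mapsto p_0^{d_1,d_2}\) is increasing.

```python
import math

# Example from the text: V(x) = 5 sin(2*pi*(x + 1/4)); here max V = 5 > 1 + int V = 1,
# so we are in Case ii and H0 = max V = 5.
V = lambda x: 5.0 * math.sin(2.0 * math.pi * (x + 0.25))
H0 = 5.0

def mass(d1, d2, N=2000):
    # Midpoint quadrature of int_{d1}^{d2} (H0 - V(x)) dx, the left side of (5.13).
    h = (d2 - d1) / N
    return sum((H0 - V(d1 + (k + 0.5) * h)) * h for k in range(N))

def d2_for(d1, iters=60):
    # Bisection for d2 with mass(d1, d2) = 1; the integrand H0 - V is nonnegative,
    # so the mass is nondecreasing in d2.
    lo, hi = d1, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mass(d1, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p0(d1, d2, N=2000):
    # p0^{d1,d2} = int_0^{d1} sqrt(2(H0 - V)) dx - int_{d2}^{1} sqrt(2(H0 - V)) dx
    q = lambda a, b: sum(math.sqrt(2.0 * (H0 - V(a + (k + 0.5) * (b - a) / N)))
                         * (b - a) / N for k in range(N))
    return q(0.0, d1) - q(d2, 1.0)

pairs = [(d1, d2_for(d1)) for d1 in (0.05, 0.15, 0.25)]
ps = [p0(d1, d2) for d1, d2 in pairs]
```

Each admissible \(d_1\) produces a distinct solution, which is the non-uniqueness mechanism of Proposition 5.4.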
Remark 5.5
If V has multiple maxima and Case ii in Proposition 5.4 holds, there is a larger family of solutions. Let \(x=x_0 \in (0,1) \) be a point of maximum for V. For fixed real numbers, \(d_1<d_2<e_1<e_2\), define
and
where
Note that \((u_{d_1,d_2,e_1,e_2})_x(x)\) is periodic, only has negative jumps, and solves (5.9) almost everywhere. Hence, the triplet \((u_0^{d_1,d_2,e_1,e_2},m_0^{d_1,d_2,e_1,e_2},{\overline{H}}_{0})\) is a semiconcave solution of (5.9) if
for
The equality (5.16) is equivalent to
Since \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1\), we can find infinitely many quadruples \((d_1,d_2,e_1,e_2)\) such that (5.17) holds. Hence, we can generate infinitely many solutions of the form (5.14), (5.15).
From Propositions 5.1 and 5.4, for every semiconcave solution, m, of (1.1) in the low-current regime (\(j=0\) or Case i in Proposition 5.1), the smaller V(x) is, the larger m(x) is. This is paradoxical because V(x) represents the spatial preference of the agents, and preferred regions correspond to high values of V. Thus, less desirable areas have a high population density. In fact, the most preferred site may be empty while agents aggregate at the least preferred site. For example, in (5.11), m vanishes near the maximum of V and is supported in a neighborhood of the minimum of V, as illustrated in Fig. 8. Hence, if agents do not move fast (low current), they prefer staying together to being in a better place; see Fig. 4. In the high-current regime (Case ii in Proposition 5.1), the opposite occurs: the larger V(x) is, the larger m(x) becomes; see Fig. 5. Therefore, preferred areas have a high population density. Hence, if the current level is high enough (we give quantitative estimates in the next section), agents are better off at the preferred sites, even though these are more crowded. Finally, at intermediate current levels (Case iii in Proposition 5.1), we observe a more complex situation. The solution, m, consists of two parts, \(m^-_j(x)\) and \(m^+_j(x)\): \(m^-_j(x)\) is larger where V(x) is larger, and the opposite holds for \(m^+_j(x)\). Therefore, in the region where m is \(m^-_j\), the most preferred sites are more densely populated, while in the region where m is \(m^+_j\), the less preferred sites are more densely populated. This is illustrated in Fig. 6.
6 Discontinuous Viscosity Solutions
In the anti-monotone case considered in the preceding section, m can be discontinuous. Thus, in addition to the semiconcave solutions examined before, we need to consider viscosity solutions in the framework of discontinuous Hamiltonians. In what follows, we recall the main definitions in [2]. Given a locally bounded function, \(F:{\mathbb {T}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\), we define its lower and upper semicontinuous envelopes as
$$\begin{aligned} F_*(x,q)=\liminf \limits _{(y,r)\rightarrow (x,q)}F(y,r),\qquad F^*(x,q)=\limsup \limits _{(y,r)\rightarrow (x,q)}F(y,r), \end{aligned}$$
for \((x,q)\in {\mathbb {T}}\times {\mathbb {R}}.\) We say that a locally bounded function, \(u:{\mathbb {T}}\rightarrow {\mathbb {R}}\), is a viscosity solution of \(F(x, Du)=0\) if, for any smooth function, \(\phi :{\mathbb {T}}\rightarrow {\mathbb {R}},\) we have that
$$\begin{aligned} F_*(x_0,\phi _x(x_0))\leqslant 0\quad \text {at every local maximum point, }x_0,\text { of }u^*-\phi \end{aligned}$$
and
$$\begin{aligned} F^*(x_0,\phi _x(x_0))\geqslant 0\quad \text {at every local minimum point, }x_0,\text { of }u_*-\phi , \end{aligned}$$
where \(u^*\) and \(u_*\) denote the upper and lower semicontinuous envelopes of u.
Let \(m:{\mathbb {T}}\rightarrow {\mathbb {R}}\), \(m\in L^\infty ({\mathbb {T}})\), and set
Suppose that \(V:{\mathbb {T}}\rightarrow {\mathbb {R}}\) is continuous. Then, for our setting, we have
Consequently,
Here, we look for piecewise smooth solutions of (1.1) for \(g(m)=-m\) that are not necessarily semiconcave; that is, the condition \(\lim \limits _{x\rightarrow x^-}u_x(x)\geqslant \lim \limits _{x\rightarrow x^+}u_x(x)\) is not necessarily satisfied. It turns out that there are infinitely many such solutions for all \(j\ne 0\), independently of the properties of V, and the jump direction of \(u_x\) is irrelevant. This contrasts with the fact that, for V with a single maximum, there exists just one semiconcave solution (Proposition 5.1).
Thus, we select a current level, \(j>0\) (\(j<0\) is analogous), and fix arbitrary points \(0\leqslant x_0<x_1<\cdots <x_n\leqslant 1\) and \({\overline{H}}\geqslant {\overline{H}}^\mathrm{cr}_j\). We search for solutions \((u,m,{\overline{H}})\) such that m is continuous on the intervals \((x_i,x_{i+1})\) for \(0\leqslant i\leqslant n-1\). From the above discussion, we have:
Proposition 6.1
Assume that \(j>0\) and that
-
\(m>0\) is continuous on \((x_i,x_{i+1})\) and
$$\begin{aligned} \frac{j^2}{2m(x)^2}+m(x)={\overline{H}}-V(x)\ \text {for all}\ x\ne x_i. \end{aligned}$$ -
\({\overline{H}}\) is such that \(\int \limits _{{\mathbb {T}}} m(x) \hbox {d}x=1\).
Then, the triplet \((u,m,{\overline{H}})\) solves (1.1), where
Proof
We postpone the proof to the Appendix. \(\square \)
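The pointwise equation in Proposition 6.1 becomes a cubic in m(x) after multiplying by \(m(x)^2\), with two positive roots: a lower branch below \(j^{2/3}\) and an upper branch above it. A minimal numeric sketch (the function name `branches` is ours) computes both roots by bisection on the monotone pieces of \(m\mapsto \frac{j^2}{2m^2}+m\); solvability requires the right-hand side to be at least \(\frac{3}{2}j^{2/3}\).

```python
def branches(j, rhs):
    """Two positive roots of j^2/(2 m^2) + m = rhs (a cubic in m after
    multiplying by m^2). Solvable iff rhs >= (3/2) j^(2/3); the roots
    satisfy m_minus <= j^(2/3) <= m_plus. Bisection on each monotone branch."""
    jc = j ** (2.0 / 3.0)
    assert rhs >= 1.5 * jc, "no positive roots below the critical level"
    f = lambda m: j * j / (2.0 * m * m) + m
    # lower branch: f is decreasing on (0, jc]
    lo, hi = 1e-12, jc
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > rhs:
            lo = mid
        else:
            hi = mid
    m_minus = 0.5 * (lo + hi)
    # upper branch: f is increasing on [jc, infinity), and f(rhs + 1) > rhs
    lo, hi = jc, rhs + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < rhs:
            lo = mid
        else:
            hi = mid
    m_plus = 0.5 * (lo + hi)
    return m_minus, m_plus
```

On each interval \((x_i,x_{i+1})\), a solution in the sense of Proposition 6.1 may follow either branch, which is the source of the discontinuous solutions discussed above.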
Remark 6.2
If g is increasing, the construction of piecewise smooth solutions with discontinuous m in the previous proposition fails because \(m(x_i^-)=m(x_i^+)\), necessarily. Therefore, the smooth solutions found in Proposition 3.1 are the only possible ones: there are no extra discontinuous solutions as in the case of decreasing g. This is yet another consequence of the regularizing effect of an increasing g.
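The dichotomy in Remark 6.2 can be illustrated numerically: for an increasing coupling, say \(g(m)=m\), the pointwise equation \(\frac{j^2}{2m^2}-m={\overline{H}}-V(x)\) has a strictly decreasing left-hand side and hence a single positive root, so m cannot jump; for \(g(m)=-m\), there are two positive roots to switch between. A short sketch (names ours), counting roots via sign changes:

```python
def sign_changes(F, lo=1e-6, hi=50.0, N=200000):
    # Count sign changes of F on (lo, hi): a proxy for the number of simple
    # positive roots of F(m) = 0 when the roots are well separated.
    count, prev = 0, F(lo)
    for k in range(1, N + 1):
        cur = F(lo + (hi - lo) * k / N)
        if prev * cur < 0.0:
            count += 1
        prev = cur
    return count

j, c = 1.0, 3.0  # sample current level and right-hand side Hbar - V(x)
# increasing coupling g(m) = m: strictly decreasing left-hand side,
# hence a single positive root -> m is continuous
n_inc = sign_changes(lambda m: j * j / (2.0 * m * m) - m - c)
# anti-monotone coupling g(m) = -m: two positive roots -> m may jump
n_dec = sign_changes(lambda m: j * j / (2.0 * m * m) + m - c)
```

The single root in the monotone case is exactly why \(m(x_i^-)=m(x_i^+)\) necessarily holds there.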
7 Anti-monotone Elliptic Mean-Field Games
Now, we consider anti-monotone elliptic MFGs and the corresponding variational problem (2.5) with \(g(m)=-m\). We use the direct method in the calculus of variations to prove the existence of a minimizer of the functional
Proposition 7.1
For each \(j\in {\mathbb {R}}\), there exists a minimizer, m, of \(J_\epsilon [m]\) in
Moreover, if \(j\ne 0\) then \(m>0\), and it solves the Euler–Lagrange equation
for some \({\overline{H}}\in {\mathbb {R}}\).
If \(j=0, \) m is not necessarily positive. Nevertheless, the Euler–Lagrange equation
admits a smooth solution.
Proof
We consider the cases \(j\ne 0\) and \(j=0\) separately.
Case 1: \(j\ne 0\). We take a minimizing sequence, \(m_n\in {\mathcal {A}}\), and note that there is a constant, C, such that
Therefore, we seek to control \(\int m^2\) by the integral expression on the left-hand side.
For that, we recall the Gagliardo–Nirenberg inequality,
$$\begin{aligned} \Vert w\Vert _{L^p({\mathbb {T}})}\leqslant C \Vert w_x\Vert _{L^r({\mathbb {T}})}^{a}\Vert w\Vert _{L^q({\mathbb {T}})}^{1-a}, \end{aligned}$$(7.1)
for \(0\leqslant a \leqslant 1\), with
$$\begin{aligned} \frac{1}{p}=a\left( \frac{1}{r}-1\right) +\frac{1-a}{q}, \end{aligned}$$
whenever \(\int _{{\mathbb {T}}} w=0\). With \(p=4\) and \(r=q=2\), we obtain \(a=\frac{1}{4}\). Using these values in (7.1), taking into account that \(\int m =1, \) and choosing \(w=\sqrt{m}\), we obtain
Thus, using a weighted Cauchy inequality,
Finally, we argue as in the proof of Proposition 4.1 and show the existence of a minimizer.
Case 2: \(j= 0\). The proof of the existence of minimizers in the case \(j\ne 0\) is also valid for the case \(j=0\). Nevertheless, when \(j=0\), the minimizers are not necessarily positive and, therefore, do not necessarily solve the Euler–Lagrange equation. Hence, using a fixed-point argument as in the proof of Proposition 4.1, we find a smooth solution of the Euler–Lagrange equation, which may not be a minimizer. For that, we rewrite the Euler–Lagrange equation as
and argue as before. \(\square \)
The preceding result gives neither a unique minimizer nor a unique solution of the Euler–Lagrange equation. Moreover, numerical evidence suggests that, as \(\epsilon \rightarrow 0\), there is in general no \(\Gamma \)-convergence to a minimizer. The reason is that \(J_{\epsilon }\) is not convex. Nevertheless, heuristically, the convex part of the functional becomes dominant when j is large. Thus, there may still be \(\Gamma \)-convergence for large enough j. At least numerically, this behavior holds. In Figs. 10 and 11, we plot a solution for \(\epsilon =0.01\) versus the solution with \(\epsilon =0\) for \(j=0.001\) and \(j=1\), respectively, and we observe non-convergence. In Fig. 12, we plot a solution for \(\epsilon =0.01\) versus the solution for \(\epsilon =0\) for \(j=100\), and we observe convergence, as expected.
8 Regularity Regimes of the Current Equation for \(g(m)=-m\)
Now, we analyze the regularity regimes of (5.1); that is, we determine for which values of j (5.1) has or fails to have smooth solutions. For simplicity, we assume that 0 is the only point of maximum of V. Moreover, as before, we consider the case \(j\geqslant 0\), as the case \(j<0\) is analogous.
We begin by proving that \(\alpha ^+,\alpha ^-\), defined in (5.3), are monotone.
Proposition 8.1
We have that
-
(i)
\(\alpha ^+\) and \(\alpha ^-\) are increasing on \((0,\infty )\);
-
(ii)
\(\lim \limits _{j\rightarrow +\infty }\alpha ^+(j)=\lim \limits _{j\rightarrow +\infty }\alpha ^-(j)=\infty \);
-
(iii)
\(\lim \limits _{j\rightarrow 0} \alpha ^+(j)=\max \limits _{{\mathbb {T}}}V-\int \limits _{{\mathbb {T}}} V(x) \hbox {d}x, \ \lim \limits _{j\rightarrow 0} \alpha ^-(j)=0\).
Proof
- (i):
-
First, we prove that \(m^+_j(x)\) and \(m^-_j(x)\) (see Sect. 5.1 for the definition) are increasing in j at every point \(x \in {\mathbb {T}}\). We fix x and set \(h=(\max \limits _{{\mathbb {T}}} V )-V(x)\). If \(h=0\), then \(m^-_j(x)=j^{2/3}\), which is an increasing function of j. Next, for \(h>0\) let \(t(j)=m^-_j(x)<j^{2/3}\). We have that
$$\begin{aligned} \frac{j^2}{2t(j)^2}+t(j)-\frac{3}{2}j^{2/3}=h. \end{aligned}$$By the implicit function theorem, t(j) is differentiable. Differentiating the previous equation in j gives
$$\begin{aligned} t'(j)=\frac{j^{-1/3}-\frac{j}{t^2}}{1-\frac{j^2}{t^3}}. \end{aligned}$$Because \(0<t(j)<j^{2/3}\), both the numerator and the denominator above are negative; hence, \(t'(j)>0\) and t(j) is increasing. The proof for \(m^+_j(x)\) is identical.
- (ii):
-
By definition, \(m^+_j(x) \geqslant j^{2/3}\). Hence, \(\lim \limits _{j\rightarrow \infty }\alpha ^+(j)=\infty \). On the other hand, for j large enough, we have
$$\begin{aligned} \frac{j^2}{2(j^{2/3}/2)^2}+\frac{j^{2/3}}{2}-\frac{3}{2}j^{2/3}=j^{2/3}&>\max \limits _{{\mathbb {T}}}V- V(x)\\ \nonumber&=\frac{j^2}{2(m^-_j(x))^2}+m^-_j(x)-\frac{3}{2}j^{2/3}. \end{aligned}$$Therefore, \(m^-_j(x)>j^{2/3}/2 \) and \(\lim \limits _{j\rightarrow \infty }\alpha ^-(j)=\infty \).
- (iii):
-
Because \(m^-_j(x) \leqslant j^{2/3}\) for every \(x\in {\mathbb {T}}\), \(\lim \limits _{j\rightarrow 0}m^-_j(x)=0\) for all \(x\in {\mathbb {T}}\). Thus, \(\lim \limits _{j\rightarrow 0}\alpha ^-(j)=0\). On the other hand, \(m^+_j(x) \geqslant j^{2/3}\). Thus, \(0\leqslant \frac{j^2}{(m_j^+(x))^2} \leqslant j^{2/3}\). Therefore,
$$\begin{aligned} \lim \limits _{j \rightarrow 0} m^+_j(x)=\lim \limits _{j \rightarrow 0}\left( \frac{3}{2}j^{2/3}-\frac{j^2}{2(m_j^+(x))^2}+\max \limits _{{\mathbb {T}}}V-V(x)\right) =\max \limits _{{\mathbb {T}}}V-V(x). \end{aligned}$$Thus,
$$\begin{aligned} \lim \limits _{j\rightarrow 0} \alpha ^+(j)=\max \limits _{{\mathbb {T}}}V-\int \limits _{{\mathbb {T}}} V(x) \hbox {d}x. \end{aligned}$$\(\square \)
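The implicit-function argument in part (i) can be checked numerically. The sketch below (the helper name `m_minus_branch` is ours) solves the branch equation \(\frac{j^2}{2t^2}+t-\frac{3}{2}j^{2/3}=h\) on the branch \(t\leqslant j^{2/3}\) by bisection, for a fixed value \(h=\max _{{\mathbb {T}}}V-V(x)\), and confirms that the solution grows with j.

```python
def m_minus_branch(j, h, iters=200):
    # Lower branch t = m^-_j(x): solve j^2/(2 t^2) + t - (3/2) j^(2/3) = h
    # with t <= j^(2/3); the left-hand side is decreasing on that branch.
    jc = j ** (2.0 / 3.0)
    lo, hi = 1e-12, jc
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if j * j / (2.0 * mid * mid) + mid - 1.5 * jc > h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# fixed h = max V - V(x) = 0.7; the branch value grows with the current level j
vals = [m_minus_branch(j, 0.7) for j in (0.5, 1.0, 2.0, 4.0)]
```

For \(h=0\) (at the maximum point of V), the branch value is exactly \(j^{2/3}\), matching the observation in the proof.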
Next, we define two numbers that characterize regularity regimes of (1.1):
and
Proposition 8.2
Let \(j_\mathrm{lower}\) and \(j_\mathrm{upper}\) be given by (8.1) and (8.2). Then
-
(i)
\(0\leqslant j_\mathrm{lower}< j_\mathrm{upper}<\infty \);
-
(ii)
for \(j\geqslant j_\mathrm{upper}\), the system (1.1) has smooth solutions;
-
(iii)
for \(j_\mathrm{lower}<j< j_\mathrm{upper}\), the system (1.1) has only discontinuous solutions;
-
(iv)
if \(j_\mathrm{lower}>0\), the system (1.1) has smooth solutions for \(0<j\leqslant j_\mathrm{lower}\).
Proof
The proof is a straightforward application of Propositions 5.1 and 8.1. \(\square \)
Finally, we characterize the regularity at \(j=0\).
Proposition 8.3
The system (5.9) admits smooth solutions if and only if
Proof
The proof follows from (iii) in Proposition 8.1 and (i) in Proposition 5.4. \(\square \)
Let \(V(x)=A \sin (2\pi (x+1/4))\). In Fig. 13, we plot \(\alpha ^+\) and \(\alpha ^-\) for \(A=0.5\) and \(A=5\). From Proposition 8.1, \(\alpha ^+(0)=A\). Thus, if \(A=0.5,\) we have \(\alpha ^+(0)<1\) and, for \(A=5,\) we have \(\alpha ^+(0)>1\). Therefore, \(j_\mathrm{lower}>0\) for \(A=0.5\) and \(j_\mathrm{lower}=0\) for \(A=5\). Hence, if \(A=0.5,\) (1.1) has smooth solutions for a low enough current level (\(j\leqslant 0.218\)) or for a high enough current level (\(j\geqslant 1.750\)). In contrast, if \(A=5,\) there are no smooth solutions for low currents, only for large currents (\(j\geqslant 3.203\)).
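These curves can be reproduced numerically. The sketch below (helper name `alpha` is ours; it assumes the branch definitions \(m^{\pm }_j\) recalled in Sect. 5.1) evaluates \(\alpha ^{\pm }(j)=\int _{{\mathbb {T}}}m^{\pm }_j(x)\hbox {d}x\) by midpoint quadrature plus bisection; locating where these curves cross 1 recovers, approximately, the thresholds \(j_\mathrm{lower}\) and \(j_\mathrm{upper}\) quoted above.

```python
import math

def alpha(j, A, branch, N=400, iters=100):
    """Numerical alpha^{+/-}(j) = int_T m^{+/-}_j(x) dx for the potential
    V(x) = A sin(2 pi (x + 1/4)); the branches solve
    j^2/(2 m^2) + m = (3/2) j^(2/3) + max V - V(x)."""
    jc = j ** (2.0 / 3.0)
    total = 0.0
    for k in range(N):
        x = (k + 0.5) / N
        rhs = 1.5 * jc + A - A * math.sin(2.0 * math.pi * (x + 0.25))
        if branch == '+':       # m^+ >= j^(2/3): increasing side of the map
            lo, hi = jc, rhs + 1.0
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if j * j / (2.0 * mid * mid) + mid < rhs:
                    lo = mid
                else:
                    hi = mid
        else:                   # m^- <= j^(2/3): decreasing side of the map
            lo, hi = 1e-12, jc
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if j * j / (2.0 * mid * mid) + mid > rhs:
                    lo = mid
                else:
                    hi = mid
        total += 0.5 * (lo + hi) / N
    return total
```

For small j, \(\alpha ^+\) approaches \(\max V-\int V=A\) and \(\alpha ^-\) approaches 0, in agreement with Proposition 8.1 (iii).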
We end the section with an a priori estimate for the current level for smooth solutions.
Proposition 8.4
(A priori estimate) Suppose that \(\max \limits _{{\mathbb {T}}} V >1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\) and let \((u,m,{\overline{H}})\) be a smooth solution of (1.1) with \(m>0\). Then, there exists a constant, \(c(V)>0\), such that
Proof
From Proposition 8.1, we have that \(\alpha ^+(0)>1\). Thus, by Proposition 8.3, (1.1) does not have smooth solutions for \(j=0\). Additionally, \(j_\mathrm{lower}=0\). Taking \(c(V)=j_\mathrm{upper}\), we conclude (8.3) by Proposition 8.2. \(\square \)
The previous proposition shows that if the potential, V, has a large oscillation (as in the example with \(A=5\), Fig. 13), then only high-current solutions are smooth.
9 Asymptotic Behavior of Solutions as \(j \rightarrow 0\) and \(j \rightarrow +\infty \)
In Sect. 5.1, we studied semiconcave solutions of (1.1) with a current level \(j>0\). Here, we continue the analysis of the decreasing nonlinearity, \(g(m)=-m,\) and examine the asymptotic behavior of semiconcave solutions as \(j \rightarrow 0\) and \(j \rightarrow \infty \).
As before, we assume that V has a single maximum at 0. First, we address the case \(j\rightarrow \infty \).
Proposition 9.1
For \(j>0\), let \((u_j,m_j,{\overline{H}}_j)\) solve (5.1). We have that
-
(i)
\(\lim \limits _{j \rightarrow \infty } {\overline{H}}_j=\infty \);
-
(ii)
For \(x \in {\mathbb {T}}\), \(\lim \limits _{j \rightarrow \infty } m_j(x)=1\), \(\lim \limits _{j \rightarrow \infty } u_j(x)=0\), and \(\lim \limits _{j\rightarrow \infty }p_j=\infty \).
Proof
-
(i)
According to (5.2), we have that \({\overline{H}}_j \geqslant \frac{3j^{2/3}}{2}+\max \limits _{{\mathbb {T}}} V\). Thus, \(\lim \limits _{j \rightarrow \infty } {\overline{H}}_j=\infty \).
-
(ii)
For \(j\geqslant j_\mathrm{upper}\), solutions of (5.1) are given by (5.5). Hence, \(m_j\) consists only of the \(m^-\) branch. Thus, \(m_j(x) \leqslant j^{2/3}\), which yields \(\frac{j^2}{m_j(x)^2}\geqslant m_j(x)\). Therefore, \(\frac{j^2}{2m_j(x)^2}+m_j(x)\leqslant \frac{3j^2}{2m_j(x)^2}. \)Consequently, using this inequality in (5.1), we get
$$\begin{aligned} \frac{j}{\sqrt{2({\overline{H}}_j-V(x))}} \leqslant m_j(x)\leqslant \frac{\sqrt{3}j}{\sqrt{2({\overline{H}}_j-V(x))}}. \end{aligned}$$(9.1)
Integrating the previous inequality and taking into account that \(\int \limits _{{\mathbb {T}}}m_j(x)=1\), we get
Because \({\overline{H}}_j\) converges to \(\infty \) and V is bounded, for every \(x,y \in {\mathbb {T}}\), we have that
Hence, for large enough j, we have
Let \(\bar{x}\) be such that
Then, by (9.1), (9.3), and (9.2), we get
Similarly, we have
Furthermore, we have that
Thus,
Finally, because \(m_j\) is bounded and its integral is 1, we get from (9.5) that
for all \(x \in {\mathbb {T}}\). The preceding limit implies that \(\lim \limits _{j \rightarrow \infty }m_j(x)=1\) for all \(x \in {\mathbb {T}}\). In fact, (9.6) gives precise asymptotics for \({\overline{H}}_j\); namely,
$$\begin{aligned} \lim \limits _{j \rightarrow \infty }\frac{{\overline{H}}_j}{j^2/2}=1. \end{aligned}$$(9.7)
Now, we compute the limit of \(u_j(x)\). We have that \((u_j)_x=\frac{j}{m_j(x)}-p_j,\) where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y\). From (5.1), we have that \(\frac{j}{m_j(x)}=\sqrt{2({\overline{H}}_j-V(x)-m_j(x))}\).
So,
as \(j\rightarrow \infty \). Hence,
when \(j \rightarrow \infty \). Consequently, \(\lim \limits _{j \rightarrow \infty }u_j(x)=\lim \limits _{j \rightarrow \infty } \int \limits _{0}^{x} (u_j)_x(y)\hbox {d}y=0\). \(\square \)
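The high-current asymptotics of Proposition 9.1 can be checked numerically: for a large current, we solve for the unique \({\overline{H}}\) normalizing the mass of the lower branch and verify that m is close to 1 and \({\overline{H}}_j\) is close to \(j^2/2\). The helper names below (`m_lower`, `solve_Hbar`) are ours; the upper bisection bracket uses the bound from (9.1).

```python
import math

def m_lower(j, rhs, iters=80):
    # lower branch of j^2/(2 m^2) + m = rhs (the branch with m <= j^(2/3))
    lo, hi = 1e-9, j ** (2.0 / 3.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if j * j / (2.0 * mid * mid) + mid > rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_Hbar(j, V, N=200, iters=60):
    # Bisection in Hbar for int_T m^-_j dx = 1 (high-current regime, where
    # the whole solution lies on the lower branch). The upper bracket makes
    # the mass < 1 via m <= sqrt(3) j / sqrt(2 (Hbar - V)), cf. (9.1).
    jc = j ** (2.0 / 3.0)
    Vmax = max(V(k / 1000.0) for k in range(1000))
    mass = lambda H: sum(m_lower(j, H - V((k + 0.5) / N)) for k in range(N)) / N
    lo, hi = 1.5 * jc + Vmax, 1.5 * jc + Vmax + 2.0 * j * j
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mass(mid) > 1.0:   # the mass decreases as Hbar grows
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

V_ex = lambda x: 0.5 * math.sin(2.0 * math.pi * (x + 0.25))
H50 = solve_Hbar(50.0, V_ex)   # expected close to j^2/2 = 1250
```

At \(j=50\), the computed density is nearly flat, in line with \(\lim _{j\rightarrow \infty }m_j(x)=1\).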
Next, we study the behavior of solutions as \(j \rightarrow 0\).
Proposition 9.2
We have that
-
(i)
\(\lim \limits _{j \rightarrow 0} {\overline{H}}_j=\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) ={\overline{H}}_0\);
-
(ii)
if \(1+\int \limits _{{\mathbb {T}}} V > \max \limits _{{\mathbb {T}}} V\), then
$$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)=1+\int \limits _{{\mathbb {T}}} V-V(x),\quad \lim \limits _{j \rightarrow 0} u_j(x)=0,\quad \text {and}\ \lim \limits _{j\rightarrow 0}p_j=0 \end{aligned}$$for all \(x \in {\mathbb {T}}\);
-
(iii)
if \(1+\int \limits _{{\mathbb {T}}} V \leqslant \max \limits _{{\mathbb {T}}} V\), then
$$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)= & {} m_{d,1}(x),\ \ \ \lim \limits _{j \rightarrow 0} u_j(x)=u_{d,1}(x),\ \text {and}\ \lim \limits _{j\rightarrow 0}p_j\\= & {} \int \limits _{0}^{d} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x \end{aligned}$$for all \(x \in {\mathbb {T}}\), where \(m_{d,1}\) and \(u_{d,1}\) are given by (5.11) and (5.12).
Proof
-
(i)
There are two possible cases: \(j_\mathrm{lower}>0\) and \(j_\mathrm{lower}=0\). If \(j_\mathrm{lower}=0\), then \(\alpha ^+(j)>1\) for all \(j>0\) and \(\alpha ^{-}(j)<1\) for small enough j. Hence, by the results in Sect. 5.1, we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}=\frac{3}{2}j^{2/3}+\max V\). Thus, \(\lim \limits _{j \rightarrow 0}{\overline{H}}_j=\max V\). On the other hand, \(j_\mathrm{lower}=0\) means that \(\lim \limits _{j\rightarrow 0}\alpha ^+(j)\geqslant 1\). Consequently, by Proposition 8.1, \(\max \limits _{{\mathbb {T}}} V-\int \limits _{{\mathbb {T}}}V\geqslant 1\). Thus, \(\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) =\max \limits _{{\mathbb {T}}} V=\lim \limits _{j\rightarrow 0}{\overline{H}}_j\).
If \(j_\mathrm{lower}>0\), then \(\alpha ^+(j)<1\) for \(j<j_\mathrm{lower}\) and solutions \((m_j,u_j,{\overline{H}}_j)\) are given by (5.4). Hence, \(m_j(x)\geqslant j^{2/3}\) and
$$\begin{aligned} 0<\frac{j^2}{2m_j(x)^2}\leqslant \frac{j^{2/3}}{2}. \end{aligned}$$(9.8)Therefore,
$$\begin{aligned} \lim \limits _{j\rightarrow 0}{\overline{H}}_j=\lim \limits _{j\rightarrow 0}\left( \int \limits _{{\mathbb {T}}} V+\int \limits _{{\mathbb {T}}}m_j+\int \limits _{{\mathbb {T}}}\frac{j^2}{2m_j(x)^2}\right) = \int \limits _{{\mathbb {T}}} V+ 1. \end{aligned}$$But \(\max \limits _{{\mathbb {T}}} V-\int \limits _{{\mathbb {T}}}V=\lim \limits _{j\rightarrow 0} \alpha ^+(j)<1\), so \(\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) =1+\int \limits _{{\mathbb {T}}} V=\lim \limits _{j\rightarrow 0}{\overline{H}}_j\).
-
(ii)
Since \(\lim \limits _{j\rightarrow 0}\alpha ^+(j)=\max V -\int V\), we have that the condition \(1+\int \limits _{{\mathbb {T}}} V> \max \limits _{{\mathbb {T}}} V\) is equivalent to the condition \(j_\mathrm{lower}>0\). In this case, we have that \(\lim \limits _{j \rightarrow 0}{\overline{H}}_j=1+\int \limits _{{\mathbb {T}}}V\). Therefore, from (9.8), we have that
$$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)=\lim \limits _{j \rightarrow 0} \left( {\overline{H}}_j-V(x)-\frac{j^2}{2m_j^2(x)}\right) =1+\int \limits _{{\mathbb {T}}} V-V(x). \end{aligned}$$Furthermore,
$$\begin{aligned} \lim \limits _{j \rightarrow 0} u_j(x)=\lim \limits _{j \rightarrow 0}\int \limits _{0}^x \left( \frac{j}{m_j(y)}-\int \limits _{{\mathbb {T}}}\frac{j}{m_j(z)}dz\right) \hbox {d}y=0. \end{aligned}$$ -
(iii)
The inequality \(1+\int \limits _{{\mathbb {T}}} V \leqslant \max \limits _{{\mathbb {T}}} V\) is equivalent to \(j_\mathrm{lower}=0\). Hence, for \(0<j<j_\mathrm{upper}\) solutions are given by (5.6).
Because \(0<m_j^-(x)\leqslant j^{2/3}\), \(\lim \limits _{j \rightarrow 0} m^-_j(x)=0\). Furthermore, \(m_j^+(x)\geqslant j^{2/3}\). Thus,
$$\begin{aligned} \lim \limits _{j\rightarrow 0}\frac{j^2}{2(m_j^+(x))^2}=0. \end{aligned}$$Therefore,
$$\begin{aligned} \lim \limits _{j\rightarrow 0}m_j^+(x)=\lim \limits _{j \rightarrow 0}\left( {\overline{H}}_j-V(x)-\frac{j^2}{2(m_j^+(x))^2}\right) =\max V-V(x). \end{aligned}$$Suppose that the jump points, \(d_j,\) of \(m_j(x)\) (see (5.6)) converge to some \(d \in [0,1]\) through a subsequence. Then, through that subsequence \(\lim \limits _{j\rightarrow 0} m_j(x)=m_0^{d,1}(x),\) where \(m_0^{d,1}\) is defined in (5.11). Hence,
$$\begin{aligned} 1=\int \limits _{{\mathbb {T}}} m_0^{d,1}(x)\hbox {d}x=\int \limits _d^1 \left( \max V- V(x)\right) \hbox {d}x. \end{aligned}$$Because V has a single maximum, d is defined uniquely by the previous equation. Hence, \(\lim \limits _{j\rightarrow 0}d_j=d\) and \(\lim \limits _{j\rightarrow 0} m_j(x)=m_0^{d,1}(x)\), globally (not only through some subsequence). Consequently,
$$\begin{aligned} \lim \limits _{j \rightarrow 0} u_j(x)=u_0^{d,1}(x),\ \lim \limits _{j\rightarrow 0}p_j=p_0^{d,1}=\int \limits _{0}^{d} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x, \end{aligned}$$where d is such that \(\int \limits _d^1 \left( \max V- V(x)\right) =1\).
\(\square \)
From Proposition 9.2, we see that we recover only part of the solutions for \(j=0\) as limits of solutions for \(j>0\). If we consider the solutions of (5.1) for which m takes negative values, we recover all solutions described in Sect. 5.2. Indeed, the first equation in (5.1) is a cubic equation in m(x). Thus, for every \(x \in {\mathbb {T}},\) there are three solutions: two positive and one negative. Because we are interested in the MFG interpretation of (5.1), we neglect solutions with negative m. However, we can construct solutions for (5.1) without the constraint \(m>0\). As j converges to 0, the negative parts of these solutions converge to 0, and, in the limit, we obtain all nonnegative solutions of (5.9) given in Proposition 5.4.
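The root count quoted above (two positive roots and one negative root) follows from the sign pattern of the cubic \(P(m)=m^3-({\overline{H}}-V(x))m^2+\frac{j^2}{2}\), obtained by multiplying the first equation in (5.1) by \(m^2\). A minimal sketch (function name ours), valid when \({\overline{H}}-V(x)>\frac{3}{2}j^{2/3}\):

```python
def cubic_sign_pattern(j, c):
    # P(m) = m^3 - c m^2 + j^2/2: the current equation multiplied by m^2,
    # with c = Hbar - V(x); for c > (3/2) j^(2/3) the sign pattern forces
    # one negative root and two positive roots (around m = j^(2/3)).
    P = lambda m: m ** 3 - c * m ** 2 + j * j / 2.0
    jc = j ** (2.0 / 3.0)
    return (P(-2.0 * c - 1.0) < 0.0,  # negative far left  -> one negative root
            P(0.0) > 0.0,             # positive at zero
            P(jc) < 0.0,              # negative at j^(2/3) -> root in (0, jc)
            P(2.0 * c) > 0.0)         # positive far right  -> root in (jc, 2c)

checks = [cubic_sign_pattern(1.0, 2.0), cubic_sign_pattern(0.5, 1.5)]
```

Here \(P(j^{2/3})=j^{4/3}(\frac{3}{2}j^{2/3}-c)<0\) exactly when c exceeds the critical level, which is the condition separating the branches.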
10 Properties of \({\overline{H}}_j\)
In this section, we study various properties of the effective Hamiltonian, \({\overline{H}}_j\), as a function of j. In the following proposition, we collect several properties of \({\overline{H}}_j\).
Proposition 10.1
We have that
-
(i)
For every \(j \in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}_j\), such that (1.1) has solutions with a current level j;
-
(ii)
\({\overline{H}}_j\) is even; that is, \({\overline{H}}_j={\overline{H}}_{-j}\);
-
(iii)
\({\overline{H}}_j\) is continuous;
-
(iv)
\({\overline{H}}_j\) is increasing on \((0,\infty )\) and decreasing on \((-\infty ,0)\);
-
(v)
\(\min \limits _{j \in {\mathbb {R}}}{\overline{H}}_j={\overline{H}}_0=\max \left( \max \limits _{{\mathbb {T}}} V, 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\right) \);
-
(vi)
\(\lim \limits _{|j|\rightarrow \infty } \frac{{\overline{H}}_j}{j^2/2}=1\).
Proof
- (i)
For \(j\ne 0\), this follows from Proposition 5.1; for \(j=0\), it follows from Propositions 5.3 and 5.4.
-
(ii)
This follows from the fact that \(j\mapsto \frac{j^2}{2t^2}+t\) is an even function of j for all \(t>0\).
-
(iii)
Continuity of \({\overline{H}}_j\) follows from the continuity of the mapping \((j,t) \mapsto j^2/2t^2+t\) for \(j,t>0\).
-
(iv)
Since \({\overline{H}}_j\) is even, it suffices to show that it is increasing on \((0,+\infty )\). First, we show that \({\overline{H}}_j\) is increasing on \((j_\mathrm{upper},\infty )\). For that, we fix \(j_0 >j_\mathrm{upper}\). We have that \({\overline{H}}_{j_0}\geqslant {\overline{H}}_{j_0}^\mathrm{cr}\). Hence, for any \(j_\mathrm{upper}<j<j_0\) we have that \({\overline{H}}_{j_0}\geqslant {\overline{H}}_{j_0}^\mathrm{cr}>{\overline{H}}_{j}^\mathrm{cr}\). Therefore, the function, \(\tilde{m}_j\), determined by
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{j^2}{2(\tilde{m}_j(x))^2}+\tilde{m}_j(x)={\overline{H}}_{j_0}-V(x),\\ \tilde{m}_j(x) \leqslant j^{2/3}, \end{array}\right. } \end{aligned}$$(10.1)is well defined for all \(j_\mathrm{upper}<j<j_0\). Next, we show that the mapping
$$\begin{aligned} j \mapsto \tilde{m}_j(x) \end{aligned}$$is increasing in \((j_\mathrm{upper},j_0)\) for all \(x \in {\mathbb {T}}\). Indeed, fix \(x\in {\mathbb {T}}\) and differentiate (10.1) in j to obtain
$$\begin{aligned} \frac{d\tilde{m}_j(x)}{dj}=\frac{j\tilde{m}_j(x)}{j^2-\tilde{m}_j(x)^3}>0, \end{aligned}$$because \(\tilde{m}_j(x)<j^{2/3}\). Hence, \(\tilde{m}_j(x)<m_{j_0}(x),\ x\in {\mathbb {T}}\). Accordingly,
$$\begin{aligned} \int \limits _{{\mathbb {T}}} \tilde{m}_j(x)\hbox {d}x<\int \limits _{{\mathbb {T}}} m_{j_0}(x)\hbox {d}x=1. \end{aligned}$$Finally, the previous inequality implies \({\overline{H}}_j<{\overline{H}}_{j_0}\).
The monotonicity of \({\overline{H}}_j\) on \((0,j_\mathrm{lower})\) (in the case \(j_\mathrm{lower}>0\)) can be proven analogously.
Next, for \(j_\mathrm{lower}<j<j_\mathrm{upper}\), we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}=\frac{3}{2}j^{2/3}+\max \limits _{{\mathbb {T}}} V\), which is evidently increasing in j.
-
(v)
This follows from the previous properties of \({\overline{H}}_j\) and Proposition 9.2.
-
(vi)
We have proven this in (9.7).
\(\square \)
In Fig. 14 we plot \({\overline{H}}_j\) as a function of j for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).
11 Analysis in Terms of p
Now, we analyze (1.1) in terms of the variable p. If g(m) is increasing, for every \(p\in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}(p)\), for which (1.1) has a solution. This solution is unique if \(m>0\) (see, e.g., [38]). Here, we show that, if g(m) is not increasing, there may be different values of \({\overline{H}}(p)\) for which (1.1) has a semiconcave solution. The uniqueness of \({\overline{H}}\) depends both on the monotonicity of g and on the properties of V. For example, if \(g(m)=-m,\) \({\overline{H}}\) is uniquely determined by p if and only if V has a single maximum. Moreover, our prior characterization of semiconcave solutions of (1.1) implies that, for V with a single maximum point, (1.1) admits a unique semiconcave solution for every \(p\in {\mathbb {R}}\).
We start with an auxiliary lemma.
Lemma 11.1
Let \(x=0\) be the single maximum point of V.
-
(i)
For every \(j\ne 0,\) there exists a unique number, \(p_{j}\), such that (1.1) has a semiconcave solution. Furthermore, the map \(j\mapsto p_{j}\) is increasing on \((0,\infty )\) and \((-\infty ,0)\).
-
(ii)
If \(1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x \geqslant \max \limits _{{\mathbb {T}}} V\), then \(p_0=0\) is the unique number for which (1.1) has a semiconcave solution with \(j=0.\) Moreover, \(\lim \limits _{j\rightarrow 0}p_{j}=0\).
-
(iii)
If \(1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x < \max \limits _{{\mathbb {T}}} V\), then
$$\begin{aligned} p_j>\int \limits _{0}^{d_1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x,\quad j>0 \end{aligned}$$and
$$\begin{aligned} p_j<-\int \limits _{d_2}^{1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x,\quad j<0, \end{aligned}$$where \(d_1,d_2 \in (0,1)\) are such that
$$\begin{aligned} \int \limits _{0}^{d_1} (\max \limits _{{\mathbb {T}}} V-V(x))\hbox {d}x=\int \limits _{d_2}^{1} (\max \limits _{{\mathbb {T}}} V-V(x))\hbox {d}x=1. \end{aligned}$$Consequently, (1.1) has a semiconcave solution for \(j=0\) if and only if
$$\begin{aligned} -\int \limits _{d_2}^{1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x\leqslant p\leqslant \int \limits _{0}^{d_1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x. \end{aligned}$$(11.1)
Proof
(i) According to Proposition 5.1, for every \(j>0,\) there exists a unique number, \(p_j\), such that (1.1) has a semiconcave solution with a current level j. Let \((u_j,m_j,{\overline{H}}_j)\) be the solution of (1.1) given by (5.4), (5.5) or (5.6). Because
to prove that \(p_j\) is increasing, it suffices to show that \(j\mapsto \frac{j}{m_j(x)}\) is increasing for all \(x\in {\mathbb {T}}\). First, we prove the monotonicity for \(j_\mathrm{lower}<j<j_\mathrm{upper}\). Let \(n_j(x):=\frac{j}{m_j(x)}\). We have that
Because the maps \(j\mapsto m_j^+(x)\) and \(j\mapsto m_j^-(x)\) are increasing for all \(x \in {\mathbb {T}}\), the map \(j\mapsto d_j\) is also increasing. Assume that j is such that \(d_j\ne x\). We differentiate in j the first equation in (11.2) and take into account that \({\overline{H}}_j=\frac{3}{2}j^{2/3}+\max \limits _{{\mathbb {T}}} V\) for \(j_\mathrm{lower}<j<j_\mathrm{upper}\) to get
Let \(j^x\) be such that \(x=d_{j^x}\). For \(j>j^x,\) we have \(d_j>x\). Thus, \(n_j(x)=j/m_j^-(x)>j^{1/3}\), which implies \(\frac{dn_j(x)}{dj}>0\). Similarly, for \(j<j^x,\) we have \(d_j<x\). Therefore, \(n_j(x)=j/m_j^+(x)<j^{1/3}\), which implies \(\frac{dn_j(x)}{dj}>0\).
Next, we analyze the behavior of \(n_j\) at \(j^x\). For \(j>j^x,\) \(n_j(x)=\frac{j}{m_j^-(x)},\) and, for \(j<j^x,\) \(n_j(x)=\frac{j}{m_j^+(x)}\). Thus, \(n_j(x)\) takes a positive jump, \(\frac{j}{m_j^-(x)}-\frac{j}{m_j^+(x)}>0,\) at \(j=j^x\). Therefore, \(j\mapsto n_j(x)\) has positive derivatives whenever \(j\ne j^x\) and a positive jump at \(j=j^x\). It is thus increasing for \(j_\mathrm{lower}<j<j_\mathrm{upper}\).
Next, we show that \(j\mapsto n_j(x)\) is increasing on \((j_\mathrm{upper},\infty )\). As before, we have
Because \(m_j(x)<j^{2/3}\), we have \(n_j(x)>j^{1/3}\). Therefore, if \({\overline{H}}'_j\geqslant 1/n_j(x),\) the map \(j\mapsto n_j(x)\) is increasing.
Fix \(j_0\) and, for \(j>j_0,\) consider \(\tilde{H}_j:={\overline{H}}_{j_0}+(j-j_0)\frac{\min \limits _{{\mathbb {T}}} m_{j_0}(x)}{j_0}\). Define
Note that \(\tilde{H}_{j_0}={\overline{H}}_{j_0}\) and \(\tilde{m}_{j_0}=m_{j_0}\). Now, we compute the derivative of the map \(j \mapsto \tilde{m}_j(x)\) at \(j=j_0\). Because \(m_{j_0}<j_0^{2/3}\), we have
Thus,
Hence, for \(j>j_0\) close to \(j_0,\) we get
Consequently, for those values of the current, we have that \({\overline{H}}_j\geqslant \tilde{H}_j\). Hence,
which completes the monotonicity proof for \(j>j_\mathrm{upper}\). The monotonicity for \(j<j_\mathrm{lower}\) is similar.
(ii) & (iii) These claims follow from the monotonicity of \(j \mapsto p_j\) and Propositions 5.4 and 9.2. \(\square \)
In Fig. 15, we plot p as a function of j for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).
Proposition 11.2
Let \(x=0\) be the unique maximum point of V. Then,

(i) for every \(p \in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}(p),\) for which (1.1) has a semiconcave solution;

(ii) for every \(p\in {\mathbb {R}},\) (1.1) has a unique semiconcave solution;

(iii) if \(\max \limits _{{\mathbb {T}}} V> 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\), then \({\overline{H}}(p)\) is flat at the origin;

(iv) \({\overline{H}}(p)\) is increasing on \((0,\infty )\) and decreasing on \((-\infty ,0)\). Thus,
$$\begin{aligned} \min \limits _{p\in {\mathbb {R}}} {\overline{H}}(p)={\overline{H}}(0)=\max \left( \max \limits _{{\mathbb {T}}} V, 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\right) ; \end{aligned}$$

(v) \(\lim \limits _{|p| \rightarrow \infty } \frac{{\overline{H}}(p)}{p^2/2}=1\).
Proof
(i) & (ii) From Lemma 11.1 and (ii) in Proposition 9.1, we have that for every p, there exists a unique j such that (1.1) has semiconcave solutions. From Proposition 10.1, we have that for every j there exists a unique number, \({\overline{H}}\), such that (1.1) has a semiconcave solution. Therefore, for every p, the constant \({\overline{H}}\) is determined uniquely. Moreover, if \(j=j(p)\ne 0\), Proposition 5.1 gives that (1.1) has a unique semiconcave solution.
Furthermore, if \(j=j(p)=0,\) (1.1) has semiconcave solutions by Proposition 5.4. Moreover, the observations in the proof of Proposition 5.3 yield that the solutions given in Proposition 5.4 are the only semiconcave solutions when V admits a single maximum at \(x=0\). Therefore, in Case i of Proposition 5.4, we get \(p=0\), and the only semiconcave solution is given by (5.10). Next, in Case ii of Proposition 5.4, for a given \(d_1 \in (0,1)\) there exists a unique \(d_2\in (0,1)\) such that (5.13) holds, and the mapping \(d_1\mapsto d_2\) is increasing. Consequently, the mapping \(d_1\mapsto p_0^{d_1,d_2}\) is also increasing. Therefore, for a given p there exists a unique pair \((d_1,d_2)\) such that \(p=p_0^{d_1,d_2}\). Hence, there is a unique semiconcave solution corresponding to p.
(iii) From (iii) in Lemma 11.1, we have that if p satisfies (11.1), then \({\overline{H}}(p)={\overline{H}}_0\).
(iv) This follows from (i) in Lemma 11.1 and (iv) and (v) in Proposition 10.1.
(v) This follows from (vi) in Proposition 10.1, (ii) in Proposition 9.1, and the formula \(p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y\).
\(\square \)
In Fig. 16, we show \({\overline{H}}(p)\) for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).
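For intuition on item (v), here is a back-of-the-envelope asymptotic computation. It is ours, not the argument of Proposition 10.1, and it again uses the coupling \(g(m)=m\), for which the first equation of (1.1) combined with \(u_x+p=j/m_j\) reads \(\frac{j^2}{2m_j^2}+V+m_j={\overline{H}}_j\). As \(j\rightarrow \infty \), \({\overline{H}}_j\rightarrow \infty \) while \(m_j\) stays bounded by the mass constraint, so

$$\begin{aligned} m_j(x)=\frac{j}{\sqrt{2({\overline{H}}_j-V(x))}}\,\big (1+o(1)\big ). \end{aligned}$$

The normalization \(\int \limits _{{\mathbb {T}}}m_j(x)\hbox {d}x=1\) then forces \(j=\sqrt{2{\overline{H}}_j}\,(1+o(1))\), while

$$\begin{aligned} p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y=\int \limits _{{\mathbb {T}}}\sqrt{2({\overline{H}}_j-V(y))}\,\hbox {d}y=\sqrt{2{\overline{H}}_j}\,(1+o(1)), \end{aligned}$$

hence \({\overline{H}}(p_j)/(p_j^2/2)\rightarrow 1\); the case \(j\rightarrow -\infty \) is symmetric.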
Remark 11.3
We conjecture that, if V has only one maximum point, \({\overline{H}}(p)\) is convex. Let \(p_1<p_2\) and \((u_1,m_1,{\overline{H}}(p_1))\) and \(\ (u_2,m_2,{\overline{H}}(p_2))\) solve (1.1) for \(p=p_1\) and \(p=p_2\), respectively. Consider the trajectories, \(y_1,y_2,\) determined by
As in weak Kolmogorov–Arnold–Moser (KAM) or classical KAM theory, we would like to show that
Furthermore, let \(j_1\) and \(j_2\) be the current values corresponding to \(p_1\) and \(p_2\). From the proof of (i) in Lemma 11.1, we have that
Hence, if (11.3) holds, we get
which implies \(D_p{\overline{H}}(p_1) \leqslant D_p{\overline{H}}(p_2)\). Thus, \({\overline{H}}(p)\) is convex.
Remark 11.4
Assume that V is non-constant and has at least two maximum points. Then, from Proposition 5.2, we know that there are infinitely many two-jump semiconcave solutions for a fixed current level \(j\in (j_\mathrm{lower},j_\mathrm{upper})\). Therefore, for every \(j\in (j_\mathrm{lower},j_\mathrm{upper})\) there are infinitely many values of p such that (1.1) has semiconcave solutions corresponding to the same constant, \({\overline{H}}={\overline{H}}_j^\mathrm{cr}\). This observation hints that there may be \(p\in {\mathbb {R}}\) such that (1.1) has semiconcave solutions for two different values of \({\overline{H}}\).
Indeed, for every \(j\in (j_\mathrm{lower},j_\mathrm{upper})\) there is still a unique one-jump semiconcave solution, described in (iii) of Proposition 5.1. Let \(p_j\) be the value of p corresponding to this one-jump solution. Then, the mapping \(j\mapsto p_j\) is increasing and continuous, by the same argument as in (i) of Lemma 11.1. Suppose that \(j_0\in (j_\mathrm{lower},j_\mathrm{upper})\) and that \(p^0\) corresponds to some two-jump semiconcave solution, \(m_{j_0}^{d_1,d_2}\), given in Proposition 5.2. If, for some V and \(j_0\),
there exists \(j_1 \in (j_\mathrm{lower},j_\mathrm{upper})\) such that \(p_{j_1}=p^0\). Thus, for \(p=p^0=p_{j_1}\) (1.1) has semiconcave solutions for \({\overline{H}}={\overline{H}}_{j_0}^\mathrm{cr}\) and \({\overline{H}}={\overline{H}}_{j_1}^\mathrm{cr}\).
We confirm these observations numerically. Consider \(V(x)=\frac{1}{2}\sin \left( 4\pi \left( x+\frac{1}{8}\right) \right) \). For this V, \(j_\mathrm{lower}=0.218242\) and \(j_\mathrm{upper}=1.74875\). We fix \(j_0=0.5\) and find a two-jump solution, \(m_{j_0}^{d_1,d_2}\), described in Proposition 5.2 with \(d_1=0.20626\) and \(d_2=0.70626\). The corresponding value of p is \(p_{j_0}^{d_1,d_2}=0.787246\).
Furthermore, we consider the one-jump solutions, \(m_j\), described in Proposition 5.1 and denote by \(p_j\) the corresponding values of p. We find that \(p_{j_1}=p_{j_0}^{d_1,d_2}=0.787246\) for \(j_1=0.5132\). Thus, for \(p=0.787246,\) (1.1) has at least two semiconcave solutions, corresponding to \({\overline{H}}={\overline{H}}_{j_0}^\mathrm{cr}=1.44494\) and \({\overline{H}}={\overline{H}}_{j_1}^\mathrm{cr}=1.4615\). In Fig. 17, we plot the one-jump solution, \(m_{j_1}\), and the two-jump solution, \(m_{j_0}^{d_1,d_2}\).
References
Achdou Y, Bardi M, Cirant M (2017) Mean field games models of segregation. Math Models Methods Appl Sci 27(1):75–113
Barles G (1994) Solutions de Viscosité des Équations de Hamilton–Jacobi. Mathématiques & Applications (Berlin) [Mathematics & Applications], vol 17. Springer, Paris
Cardaliaguet P (2011) Notes on mean-field games
Cardaliaguet P (2013) Long time average of first order mean field games and weak KAM theory. Dyn Games Appl 3(4):473–488. doi:10.1007/s13235-013-0091-x
Cardaliaguet P (2013) Weak solutions for first order mean-field games with local coupling (preprint)
Cardaliaguet P, Garber P, Porretta A, Tonon D (2014) Second order mean field games with degenerate diffusion and local coupling (preprint)
Cardaliaguet P, Graber PJ (2015) Mean field games systems of first order. ESAIM Control Optim Calc Var 21(3):690–722
Cirant M (2015) A generalization of the Hopf-Cole transformation for stationary mean-field games systems. C R Math Acad Sci Paris 353(9):807–811
Cirant M (2015) Multi-population mean field games systems with Neumann boundary conditions. J Math Pures Appl (9) 103(5):1294–1315
Cirant M (2016) Stationary focusing mean-field games. Commun Partial Differ Equ 41(8):1324–1346
Cirant M, Verzini G (2015) Bifurcation and segregation in quadratic two-populations mean field games systems (preprint)
Evangelista D, Gomes D (2016) On the existence of solutions for stationary mean-field games with congestion (preprint)
Evangelista D, Gomes D, Nurbekyan L. Radially symmetric mean-field-games with congestion. ArXiv preprint. arXiv:1703.07594v1 [math.AP]
Ferreira R, Gomes D. Existence of weak solutions for stationary mean-field games through variational inequalities (preprint)
Gomes D, Mitake H (2015) Existence for stationary mean-field games with congestion and quadratic Hamiltonians. NoDEA Nonlinear Differ Equ Appl 22(6):1897–1910
Gomes D, Nurbekyan L, Prazeres M (2016) Explicit solutions of one-dimensional, first-order, stationary mean-field games with congestion (preprint)
Gomes D, Nurbekyan L, Sedjro M (2016) One-dimensional forward-forward mean-field games. Appl Math Optim 74(3):619–642
Gomes D, Patrizi S (2015) Obstacle mean-field game problem. Interfaces Free Bound 17(1):55–68
Gomes DA, Patrizi S (2016) Weakly coupled mean-field game systems. Nonlinear Anal 144:110–138. doi:10.1016/j.na.2016.05.017
Gomes D, Patrizi S, Voskanyan V (2014) On the existence of classical solutions for stationary extended mean field games. Nonlinear Anal 99:49–79
Gomes D, Pimentel E (2015) Time dependent mean-field games with logarithmic nonlinearities. SIAM J Math Anal 47(5):3798–3812. doi:10.1137/140984622
Gomes DA, Pimentel E (2015) Regularity for mean-field games with initial-initial boundary conditions. In: Bourguignon JP, Jeltsch R, Pinto A, Viana M (eds) Dynamics, games and science III, CIM-MS. Springer
Gomes D, Pimentel E (2016) Local regularity for mean-field games in the whole space. Minimax Theory Appl 1(1):65–82. http://www.heldermann.de/MTA/MTA01/MTA011/mta01005.htm
Gomes D, Pimentel E, Voskanyan V (2016) Regularity theory for mean-field game systems. SpringerBriefs in Mathematics. Springer, Cham
Gomes D, Pires GE, Sánchez-Morgado H (2012) A-priori estimates for stationary mean-field games. Netw Heterog Media 7(2):303–314
Gomes D, Ribeiro R (2013) Mean field games with logistic population dynamics. In: 52nd IEEE conference on decision and control, Florence, Dec 2013
Gomes D, Sánchez-Morgado H (2014) A stochastic Evans–Aronsson problem. Trans Am Math Soc 366(2):903–929
Gomes D, Saúde J (2014) Mean field games models—a brief survey. Dyn Games Appl 4(2):110–154
Gomes D, Sedjro M (2017) One-dimensional, forward-forward mean-field games with congestion (preprint)
Gomes D, Voskanyan V (2015) Short-time existence of solutions for mean-field games with congestion. J Lond Math Soc 92(3):778–799. doi:10.1112/jlms/jdv052
Gomes D, Voskanyan V (2016) Extended deterministic mean-field games. SIAM J Control Optim 54(2):1030–1055
Graber J (2015) Weak solutions for mean field games with congestion (preprint)
Guéant O (2009) A reference case for mean field games models. J Math Pures Appl (9) 92(3):276–294
Huang M, Caines PE, Malhamé RP (2007) Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans Autom Control 52(9):1560–1571
Huang M, Malhamé RP, Caines PE (2006) Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun Inf Syst 6(3):221–251
Lasry J-M, Lions P-L (2006) Jeux à champ moyen. I. Le cas stationnaire. C R Math Acad Sci Paris 343(9):619–625
Lasry J-M, Lions P-L (2006) Jeux à champ moyen. II. Horizon fini et contrôle optimal. C R Math Acad Sci Paris 343(10):679–684
Lasry J-M, Lions P-L (2007) Mean field games. Jpn J Math 2(1):229–260
Lasry J-M, Lions P-L, Guéant O (2010) Mean field games and applications. Paris-Princeton lectures on Mathematical Finance
Nurbekyan L (2017) One-dimensional, non-local, first-order, stationary mean-field games with congestion: a Fourier approach. ArXiv preprint. arXiv:1703.03954v1 [math.AP]
Pimentel E, Voskanyan V. Regularity for second-order stationary mean-field games. Indiana Univ Math J (to appear)
Porretta A (2014) On the planning problem for the mean field games system. Dyn Games Appl 4(2):231–256
Porretta A (2015) Weak solutions to Fokker–Planck equations and mean field games. Arch Ration Mech Anal 216(1):1–62
D. Gomes, L. Nurbekyan and M. Prazeres were partially supported by KAUST baseline and start-up funds.
Appendix
Proof of Proposition 6.1
We have \(u_x+p=\frac{j}{m}\) a.e. Thus, the second equation in (1.1) holds in the sense of distributions. Next, we observe that u is differentiable at all \(x\ne x_i\) and that the first equation in (1.1) is satisfied in the classical sense at those points. Thus, it remains to check the viscosity condition at \(x=x_i\).
There are two possible cases:
1. \(m(x_i^-)>m(x_i^+)\). In this case, \(m^*(x_i)=m(x_i^-)\). Moreover,
$$\begin{aligned} u_x(x_i^-)=j/m(x_i^-)-p<j/m(x_i^+)-p=u_x(x_i^+). \end{aligned}$$
Hence, there is no smooth function touching u from above; u can only be touched from below. Therefore, we need to check that, for any \(\phi \) touching u from below at \(x_i\), we have
$$\begin{aligned} \frac{(\phi _x(x_i)+p)^2}{2}+V(x_i)+m(x_i^-)-{\overline{H}}\geqslant 0. \end{aligned}$$
Because (1.1) is satisfied at \(x\ne x_i\) in the classical sense, we have
$$\begin{aligned} \frac{(u_x(x_i^\pm )+p)^2}{2}+V(x_i)+m(x_i^\pm )-{\overline{H}}=0. \end{aligned}$$
Because \(\phi \) touches u from below and \(j>0\), we have
$$\begin{aligned} 0<u_x(x_i^-)+p\leqslant \phi _x(x_i)+p\leqslant u_x(x_i^+)+p. \end{aligned}$$
Hence,
$$\begin{aligned}&\frac{(\phi _x(x_i)+p)^2}{2}+V(x_i)+m(x_i^-)-{\overline{H}}\\&\quad \geqslant \frac{(u_x(x_i^-)+p)^2}{2}+V(x_i)+m(x_i^-)-{\overline{H}}= 0. \end{aligned}$$
2. \(m(x_i^-)<m(x_i^+)\). In this case, \(m_*(x_i)=m(x_i^-)\) and
$$\begin{aligned} u_x(x_i^-)=j/m(x_i^-)-p>j/m(x_i^+)-p=u_x(x_i^+). \end{aligned}$$
Hence, there is no smooth function touching u from below; u can only be touched from above. Therefore, we need to check that, for any \(\phi \) touching u from above at \(x_i\), we have
$$\begin{aligned} \frac{(\phi _x(x_i)+p)^2}{2}+V(x_i)+m(x_i^-)-{\overline{H}}\leqslant 0. \end{aligned}$$
(11.4)
Because (1.1) holds in the classical sense for \(x\ne x_i,\) we have
$$\begin{aligned} \frac{(u_x(x_i^\pm )+p)^2}{2}+V(x_i)+m(x_i^\pm )-{\overline{H}}=0. \end{aligned}$$
Because \(\phi \) touches u from above, we have \(0<u_x(x_i^+)+p\leqslant \phi _x(x_i)+p\leqslant u_x(x_i^-)+p\). Hence, (11.4) holds.
\(\square \)
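The inequality chain in Case 1 can be checked on concrete numbers. The snippet below is our own sanity check, not part of the paper: we pick an arbitrary current \(j=1\) and level \({\overline{H}}-V(x_i)=2\), compute the two branch values of m from \(m+\frac{j^2}{2m^2}={\overline{H}}-V(x_i)\), and verify the supersolution inequality for admissible slopes \(\phi _x(x_i)+p\in [u_x(x_i^-)+p,\,u_x(x_i^+)+p]\).

```python
# Arbitrary illustrative data: current j = 1 and Hbar - V(x_i) = 2, so the
# two positive roots of m + j^2/(2 m^2) = Hbar - V(x_i) are the branch values.
j, c = 1.0, 2.0

def root(lo, hi):
    # bisection for m + j^2/(2 m^2) = c on an interval with a sign change
    f = lambda m: m + j * j / (2.0 * m * m) - c
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

m_minus = root(1e-6, j ** (2.0 / 3.0))   # smaller branch, m < j^(2/3)
m_plus = root(j ** (2.0 / 3.0), 10.0)    # larger branch, m > j^(2/3)

# Case 1: the density jumps down, m(x_i^-) = m_plus > m(x_i^+) = m_minus, so
# u_x(x_i^-) + p = j/m_plus < j/m_minus = u_x(x_i^+) + p.
s_left, s_right = j / m_plus, j / m_minus
assert s_left < s_right

# Any phi touching u from below has phi_x(x_i) + p in [s_left, s_right];
# the supersolution inequality must hold with m(x_i^-) = m_plus:
for k in range(101):
    s = s_left + (s_right - s_left) * k / 100.0
    assert s * s / 2.0 + m_plus - c >= -1e-9   # (phi_x+p)^2/2 + V + m^- - Hbar
```

The inequality is exactly zero at \(s=u_x(x_i^-)+p\), since that pair solves the first equation of (1.1), and increases in s from there.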
Gomes, D.A., Nurbekyan, L. & Prazeres, M. One-Dimensional Stationary Mean-Field Games with Local Coupling. Dyn Games Appl 8, 315–351 (2018). https://doi.org/10.1007/s13235-017-0223-9