1 Introduction

Mean-field game (MFG) theory [3, 24, 28, 39] describes noncooperative differential games with infinitely many identical players. These games were introduced by Lasry and Lions [36,37,38] and, independently around the same time, by Huang et al. [34, 35]. Often, MFGs are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. A standard example is the stationary, one-dimensional, first-order MFG:

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}+V(x)=g(m)+{\overline{H}},\\ -(m(u_x+p))_x =0, \end{array}\right. } \end{aligned}$$
(1.1)

with its elliptic regularization,

$$\begin{aligned} {\left\{ \begin{array}{ll} -\epsilon u_{xx}+\frac{(u_x+p)^2}{2}+V(x)={\overline{H}}+ g(m)\\ -\epsilon m_{xx}-((p+u_x) m)_x=0, \end{array}\right. } \end{aligned}$$
(1.2)

where \(\epsilon >0\). Here, p is a fixed real number and the unknowns are the constant \({\overline{H}}\) and the functions u and m. The function g is \(C^\infty \) on \({\mathbb {R}}^+\). To simplify the presentation, we consider the periodic case and work in the one-dimensional torus, \({\mathbb {T}}\). Accordingly, \(V : {\mathbb {T}}\rightarrow {\mathbb {R}}\) is a \(C^\infty \) potential. We search for periodic solutions, \(u,m :{\mathbb {T}}\rightarrow {\mathbb {R}}\). Here, we examine this problem and attempt to understand its features in terms of the monotonicity properties of g.

A standard assumption in MFGs is that g is increasing. Heuristically, this assumption means that agents prefer sparsely populated areas. In this case, the existence and uniqueness of smooth solutions to (1.1) is well understood for stationary problems [25,26,27, 41], weakly coupled MFG systems [19], the obstacle MFG problem [18], and extended MFGs [20]. In the time-dependent setting, similar results are obtained in [21, 23, 31] for standard MFGs and in [22] for forward-forward problems. The theory of weak solutions is also well developed for first-order and second-order problems (see [4, 5, 7] and [6, 14, 42, 43], respectively). Congestion problems, see [12, 15, 30, 32], are also of interest and our results extend straightforwardly [16]. With suitable modifications, our methods also extend to non-local [40] and radially symmetric MFGs [13]. Extensions to multi-population problems were considered in [1, 9, 11].

The case of a non-monotonically increasing g is relevant: if g is decreasing, agents prefer clustering in high-density areas. The case where g first decreases and then increases is also natural; here, agents have a preferred density given by the minimum of g. However, little is known about the properties of (1.1) when g is not increasing. One of the few known cases is a second-order MFG with \(g(m)=-\ln m\) and a quadratic cost. In this case, due to the particular structure of the equations, there are explicit solutions, see [33, 39]. The elliptic case was studied in [10].

A triplet, \((u,m,{\overline{H}}),\) solves (1.1) if

  1. (i)

    u is a Lipschitz viscosity solution of the first equation in (1.1);

  2. (ii)

    m is a probability density; that is,

    $$\begin{aligned} m\geqslant 0,\quad \int \limits _{{\mathbb {T}}} m =1; \end{aligned}$$
  3. (iii)

    m is a weak (distributional) solution of the second equation in (1.1).

Because (1.1) is invariant under addition of constants to u, we assume that \(u(0)=0\). Here, u is Lipschitz continuous. However, m can be discontinuous. In this case, viscosity solutions of the first equation in (1.1) are interpreted as discontinuous viscosity solutions; see, for example, [2] and the discussion in Sect. 6.

Our problem is one-dimensional, and the Hamiltonian is convex. If u is a piecewise \(C^1\) function and m is continuous, then u is a viscosity solution if the following conditions hold:

  1. (a)

    u solves the equation at the points where it is \(C^1\) and m is continuous;

  2. (b)

    \(\lim \limits _{x\rightarrow x_0^-}u_x(x) \geqslant \lim \limits _{x\rightarrow x_0^+}u_x(x)\) at points of discontinuity of \(u_x\).

When g is not increasing, (1.1) may not admit m continuous. Solutions must, therefore, be considered in the framework of discontinuous viscosity solutions. In this case, the above characterization of one-dimensional viscosity solutions is not valid, and (1.1) admits a large family of discontinuous viscosity solutions (see Sect. 6). On the other hand, solutions that satisfy the above conditions [(a) and (b)] have nice structural properties that we discuss in this paper. Furthermore, in their analysis, we see the appearance of discontinuities in m, which in turn motivates the study of discontinuous viscosity solutions. Overall, these conditions seem to be good selection criteria for discontinuous solutions of (1.1).

We call solutions that satisfy conditions (a) and (b) semiconcave. In this paper, we always consider semiconcave solutions except in Sect. 6, where we discuss general discontinuous viscosity solutions.

Our goal is to solve (1.1) explicitly and to understand the qualitative behavior of solutions. There are few MFGs for which explicit or semi-explicit solutions of MFGs can be computed. For stationary problems, a number of explicit examples can be found in [24] and an interesting Hopf–Cole-type formula was derived in [8]. For the time-dependent case, the analysis of one-dimensional MFGs was performed in [17, 29]. For that, in Sect. 2, we reformulate (1.1) in terms of the current,

$$\begin{aligned} j=m(u_x+p). \end{aligned}$$
(1.3)

From the second equation in (1.1), j is constant. Thus, the current becomes the main parameter in our analysis, and we consider p as a part of the solution for a given current. Furthermore, in Sects. 9 and 11 we analyze the dependence of p on j and rephrase our results in terms of p, see Proposition 11.2. This change of viewpoint is motivated by the fact that (1.1) and (1.2) are substantially simpler after the reduction (1.3).

While we focus our attention into non-increasing MFGs, our methods are also valid for increasing MFGs. To illustrate and contrast these two cases, we begin our analysis in Sect. 3 by addressing the latter. For \(j>0\), we show the existence of a unique smooth solution. However, for \(j=0\), we establish the existence of non-smooth solutions and present examples where uniqueness does not hold. While the non-uniqueness of a solution of the Hamilton–Jacobi equation for a fixed m is well known, our example is, we believe, the first one in the context of mean-field games. The non-uniqueness is due to the existence of regions where m vanishes; otherwise, the solution is unique.

In Sect. 4, we consider the elliptic regularization of monotone MFGs. We establish a new variational principle that gives the existence and uniqueness of smooth solutions. Moreover, we address the vanishing viscosity problem using \(\Gamma \)-convergence.

In Sect. 5, we study semiconcave solutions of (1.1) for non-increasing g. In this case, if \(j\ne 0\), \(m>0\). However, for certain values of j, (1.1) does not have continuous solutions. In contrast, if j is large enough, (1.1) has a unique smooth solution. Moreover, if V has a single point of maximum, there exists a unique solution of (1.1) for each \(j>0\). If V has multiple maxima, there are multiple solutions. If \(j=0\), the behavior of (1.1) is more complex and m can be discontinuous or vanish. Additionally, we uncover an interesting phenomenon that we call an unhappiness trap. It turns out that for solutions with a low-current (low mobility) agents prefer to be at a worse place but with more agents; that is, the density of the population is larger where the potential, V, is small. This means that the focusing effect modeled by a non-increasing g prevails the spatial preferences of the agents. Furthermore, for solutions with a high-current (high mobility) agents end up accumulating at better places; that is, the density of the population is high where the potential is large. This means that the high mobility of the agents allows them to choose a better location without compromising the larger density of agents around them. For the solutions with the intermediate-current level, the situation is mixed. See Propositions 5.1 and 5.4 and the discussion afterward for the details.

Somewhat similar results to our existence/nonexistence results for smooth solutions are obtained in [10]. In the latter, the author considers elliptic MFG with possibly decreasing g that has a power-like growth with exponent, \(\alpha \). The author proves that (i) if \(\alpha \) is sufficiently small the system obtains smooth solutions, (ii) for intermediate values of \(\alpha \) smooth solutions exist provided extra smallness assumption on g and (iii) for \(\alpha \) large enough smooth solutions do not exist generically. As we mentioned earlier, when g is non-increasing agents tend to accumulate. Hence, smooth solutions are obtained when there is a competing, smoothing mechanism that prevents agents from too strong accumulation. In the model in [10], the accumulation strength is the exponent, \(\alpha \), and the smoothing mechanism is the Brownian motion or the diffusion. Thus, in [10], the author finds the precise balance between the two competing mechanisms. In our case, we fix the accumulation strength by taking \(g(m)=-m\). Here, high-current provides a smoothing mechanism by spreading agents. In Sect. 8, we find the precise balance between aggregation and spreading effects.

Next, in Sect. 6, we consider MFGs with a decreasing nonlinearity, g, and discuss the properties of discontinuous viscosity solutions.

Subsequently, in Sect. 7, we study the elliptic regularization of anti-monotone MFGs. There, we use calculus of variations methods to prove the existence of a solution.

In Sect. 8, we examine the regularity of solutions as a function of the current and, in Sect. 9, we study the asymptotic behavior of solutions of (1.1) as j converges to 0 and \(\infty \). Finally, in Sects. 10 and 11, we analyze the regularity of \({\overline{H}}\) in terms of j and p.

2 The Current Formulation and Regularization

Here, we discuss the current formulation of (1.1) and (1.2). After some elementary computations, we show that the current formulation of (1.2) is the Euler–Lagrange equation of a suitable functional.

2.1 Current Formulation

Let j be given by (1.3). From the second equation in (1.1), j is constant. We split our analysis into the cases, \(j\ne 0\) and \(j=0\).

If \(j\ne 0\), \(m(x) \ne 0\) for all \(x \in {\mathbb {T}}\) and \(u_x+p=j/m\). Thus, (1.1) can be written as

$$\begin{aligned} {\left\{ \begin{array}{ll} F_j(m)={\overline{H}}-V(x),\\ m>0,\ \int \limits _{{\mathbb {T}}} m \hbox {d}x=1,\\ \int \limits _{{\mathbb {T}}} \frac{1}{m} \hbox {d}x=\frac{p}{j}, \end{array}\right. } \end{aligned}$$
(2.1)

where \(F_j(m)= \frac{j^2}{2m^2}-g(m)\). For each x, the first equation in (2.1) is an algebraic equation for m. If g is increasing and \(g(+\infty )=+\infty \), for each \(x \in {\mathbb {T}}\) and \({\overline{H}}\in {\mathbb {R}},\) there exists a unique solution. In contrast, if g is not increasing, there may exist multiple solutions, as we discuss later.

For \(j=0,\) (1.1) gives

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}-g(m)={\overline{H}}-V(x),\\ m \geqslant 0,\ \int \limits _{{\mathbb {T}}} m \hbox {d}x=1,\\ m(u_x+p)=0. \end{array}\right. } \end{aligned}$$
(2.2)

From the last equation in (2.2), either \(m=0,\) in which case u solves

$$\begin{aligned} \frac{(u_x+p)^2}{2}-g(0)={\overline{H}}-V(x), \end{aligned}$$

or \(m>0\) and \(g(m)+{\overline{H}}-V(x)=0\). Hence, if g is increasing or decreasing, m(x) is determined in a unique way; otherwise, multiple solutions can occur.

2.2 Elliptic Regularization

Now, we consider the elliptic MFG (1.2). From the second equation in that system, we conclude that

$$\begin{aligned} j=\epsilon m_x+m (p+u_x) \end{aligned}$$

is constant. Thus, we solve for \(u_x\) and replace it in the first equation. Accordingly, we get

$$\begin{aligned} -\epsilon \left( \frac{j-\epsilon m_x}{m}\right) _{x}+\frac{(j-\epsilon m_x)^2}{2 m^2}+V(x)={\overline{H}}+ g(m). \end{aligned}$$
(2.3)

Then, using the identity

$$\begin{aligned} \epsilon \frac{(j-\epsilon m_x) m_x}{m^2}+\frac{(j-\epsilon m_x)^2}{2 m^2}=\frac{j^2-\epsilon ^2 m_x^2}{2m^2}, \end{aligned}$$

we obtain the following equation for m:

$$\begin{aligned} \epsilon ^2 \frac{m_{xx}}{m} - \epsilon ^2 \frac{m_x^2}{2 m^2}+F_j(m)={\overline{H}}-V(x). \end{aligned}$$
(2.4)

Now, let \(\Phi _j\) be such that \(\Phi _j'(m)=F_j(m)\); that is,

$$\begin{aligned} \Phi _j(m)=-\frac{j^2}{2 m}-G(m), \end{aligned}$$

where \(G'(m)=g(m)\). Then, (2.4) is the Euler–Lagrange equation of the functional

$$\begin{aligned} \int _{{\mathbb {T}}} \epsilon ^2 \frac{m_x^2}{2 m}-\Phi _j(m)-V(x) m \ \hbox {d}x \end{aligned}$$
(2.5)

under the constraint \(\int _{{\mathbb {T}}} m=1\); the constant \({\overline{H}}\) is the Lagrangian multiplier for the preceding constraint.

3 First-Order Monotone MFGs

We continue our analysis by considering monotonically increasing nonlinearities, g. In the case of a nonvanishing current, solutions are smooth. However, if the current vanishes, solutions can fail to be smooth, m can vanish, and u may not be unique.

The non-smooth behavior for a generic non-decreasing nonlinearity, g, was observed in Theorem 2.8 in [38] where the authors find limits of smooth solutions of second-order MFGs as the viscosity coefficient converges to 0.

3.1 \(j\ne 0\), g Increasing

Here, in contrast to the case \(j=0\), examined later, the solutions are smooth. Elementary computations give the following result.

Proposition 3.1

Let g be monotonically increasing. Then, for every \(j\ne 0\), (1.1) has a unique smooth solution, \((u_j,m_j,{\overline{H}}_j),\) with current j. This solution is given by

$$\begin{aligned} m_j(x)= F_j^{-1}({\overline{H}}_j-V(x)),\quad u_j(x)=\int \limits _{0}^{x} \frac{j}{m_j(y)}\hbox {d}y-p_j x, \end{aligned}$$

where \(p_j=\int \limits _{{\mathbb {T}}} \frac{j}{m_j(y)}\hbox {d}y,\ F_j(t)=\frac{j^2}{2t^2}-g(t),\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).

3.2 \(j=0\), g Increasing

To simplify the discussion and illustrate our methods, we consider (2.2) with \(g(m)=m\). The analysis is similar for other choices of an increasing function, g. Accordingly, we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}-m={\overline{H}}-V(x);\\ m \geqslant 0,\ \int \limits _{{\mathbb {T}}} m \hbox {d}x=1;\\ m(u_x+p)=0. \end{array}\right. } \end{aligned}$$
(3.1)

It is easy to see that \(m(x)=(V(x)-{\overline{H}})^+\) and \(\frac{(u_x+p)^2}{2}=(V(x)-{\overline{H}})^-\) for \( x \in {\mathbb {T}}\). The map \({\overline{H}}\mapsto \int \limits _{{\mathbb {T}}} (V(x)-{\overline{H}})^+ \hbox {d}x\) is decreasing (strictly decreasing at its positive values). Hence, there exists a unique number, \({\overline{H}}_{V}\), such that \(\int \limits _{{\mathbb {T}}} m(x)\hbox {d}x=1\). Moreover, \({\overline{H}}_V< \max V\) and \({\overline{H}}_V\geqslant \int _{\mathbb {T}}V-1\). Then, we get different solutions if \({\overline{H}}_V>\min \limits _{\mathbb {T}}V\) or if \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\).

Proposition 3.2

Let \({\overline{H}}_V \in {\mathbb {R}}\) be the unique number such that

$$\begin{aligned} \int \limits _{{\mathbb {T}}} (V(x)-{\overline{H}}_V)^+ \hbox {d}x=1. \end{aligned}$$
(3.2)

Then, we have that

$$\begin{aligned} m(x)=(V(x)-{\overline{H}}_V)^+. \end{aligned}$$

Furthermore, the following statements are true.

  1. (i)

    If \(\min \limits _{\mathbb {T}}V<{\overline{H}}_V<\max \limits _{\mathbb {T}}V\), m is non-smooth and there are regions where it vanishes. Moreover, there are \(C^1\) solutions:

    $$\begin{aligned} u^\pm (x) = \pm \int _{0}^{x} \sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y - px, \end{aligned}$$
    (3.3)

    with \(p=\pm \int \limits _{\mathbb {T}}\sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y\). Additionally, there exist Lipschitz solutions given by:

    $$\begin{aligned} (u^{x_0})_x (x) = \sqrt{2(V(x)-{\overline{H}}_V)^-}\ \chi _{[0,x_0)}- \sqrt{2(V(x)-{\overline{H}}_V)^-}\ \chi _{(x_0,1)} - p^{x_0} \end{aligned}$$
    (3.4)

    where

    $$\begin{aligned} p^{x_0}=\int _{y<x_0}\sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y - \int _{y>x_0} \sqrt{2(V(y)-{\overline{H}}_V)^-}\ \hbox {d}y, \end{aligned}$$

    and \(x_0 \in {\mathbb {T}}\) is such that \(V(x_0)<{\overline{H}}_V\).

  2. (ii)

    If \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\), m is smooth and positive. Moreover,

    $$\begin{aligned} m(x)=V(x)-{\overline{H}}_V,\ u(x)=0,\ p=0,\ {\overline{H}}_V =\int _{{\mathbb {T}}} V-1. \end{aligned}$$

Proof

The first statement in the proposition is evident. Thus, we address (i) and (ii)

Case i It is clear that the triplets, \((u^{\pm },m,{\overline{H}})\), where \(u^{\pm }\) is given by (3.3) are classical solutions of (1.1).

Furthermore, there exists \(x_0 \in {\mathbb {T}}\) such that \(V(x_0)<{\overline{H}}_V\) because \(\min \limits _{\mathbb {T}}<{\overline{H}}_V\). Then, the triplet, \((u^{x_0},m,{\overline{H}}_V)\), where \(u^{x_0}\) is given by (3.4) is a pointwise solution for (1.1) everywhere except \(x_0\). Moreover, \((u^{x_0})_x\) has a negative jump at \(x_0\). Thus, \(u^{x_0}\) is a viscosity solution for the Hamilton–Jacobi equation in (1.1). Accordingly, the triplet \((u^{x_0},m,{\overline{H}}_V)\) solves (1.1).

Case ii Since \({\overline{H}}_V<\min \limits _{\mathbb {T}}V\) we get that \(m(x)=V(x)-{\overline{H}}_V\), and the rest follows readily. \(\square \)

To summarize, (3.1) has a unique, smooth solution if and only if \(u_x+p \equiv 0\) or, equivalently, \(m(x)=V(x)-{\overline{H}}_V\), where \({\overline{H}}_V\) is such that (3.2) holds. The latter holds if and only if

$$\begin{aligned} \int \limits _{{\mathbb {T}}} V(x)\hbox {d}x \leqslant 1+\min \limits _{{\mathbb {T}}} V. \end{aligned}$$
(3.5)

This is the case for V with small oscillation; that is, \(\text {osc} V \leqslant 1\).

For \(A\in {\mathbb {R}}\), set \(V_{A}(x)=A \sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\) and let \((m(x,A),\ {\overline{H}}(A))\) solve (3.1) for \(V=V_{A}\). In Fig. 1, we plot m(xA) for \(0\leqslant A\leqslant 3\). We observe that m(xA) is smooth for small values of A and becomes non-differentiable for large A, as expected from our analysis. If \(A=2\), (3.5) does not hold. Thus, m(x, 2) is singular and we have multiple solutions, u(x, 2). In Fig. 2, we plot m(x, 2) and two distinct solutions, u(x, 2).

Fig. 1
figure 1

m(xA)

Fig. 2
figure 2

m(x, 2) (left) and two distinct solutions u(x, 2) (right)

4 Monotone Elliptic Mean-Field Games

To study (1.2), we examine the variational problem determined by (2.5). As before, for concreteness, we consider the case \(g(m)=m\). In this case, (2.5) becomes

$$\begin{aligned} J_\epsilon [m]= \int _{{\mathbb {T}}} \left( \epsilon ^2 \frac{m_x^2}{2 m}+\frac{j^2}{2 m}+\frac{m^2}{2} -V(x) m\right) \hbox {d}x. \end{aligned}$$
(4.1)

The preceding functional is convex, and, as we prove next, the direct method in the calculus of variations gives the existence of a minimizer on the set

$$\begin{aligned} \mathcal {A}=\left\{ m\in W^{1,2}({\mathbb {T}}):m\geqslant 0 \wedge \int _{{\mathbb {T}}} m =1\right\} . \end{aligned}$$

Proposition 4.1

For each \(j\in {\mathbb {R}}\), there exists a unique minimizer, m, of \(J_\epsilon [m]\) in \({\mathcal {A}}\). Moreover, \(m>0\) and solves

$$\begin{aligned} -\epsilon ^2\left( \frac{m_x}{m}\right) _x-\frac{j^2}{2 m^2}+m+{\overline{H}}-V(x)=0 \end{aligned}$$

for some constant \({\overline{H}}\in {\mathbb {R}}\). Accordingly, the triplet, \((u,m,{\overline{H}})\), where

$$\begin{aligned} u(x)=\int \limits _{0}^{x} \frac{j-\epsilon m_x(y)}{m(y)}\hbox {d}y-p x,\quad x \in {\mathbb {T}}\end{aligned}$$

is the unique smooth solution of (1.2) for \(p=\int \limits _{{\mathbb {T}}} \frac{j}{m(y)}\hbox {d}y\).

Proof

The uniqueness of a positive minimizer is a consequence of the strict convexity of \(J_\epsilon \). The existence of a nonnegative minimizer requires separate arguments for the cases \(j\ne 0\) and \(j=0\).

We first examine the case \(j\ne 0\). We begin by taking a minimizing sequence, \(m_n\in {\mathcal {A}}\). Then, there exists a constant, \(C>0,\) such that

$$\begin{aligned} \int _{{\mathbb {T}}} \frac{(m_n)_x^2}{m_n}+\frac{1}{m_n}\hbox {d}x\leqslant C. \end{aligned}$$

Thus, by Morrey’s theorem, the functions \(\sqrt{m_n}\) are equi-Hölder continuous of exponent \(\frac{1}{2}\). Therefore, because \(\int m_n=1\), this sequence is equibounded and, through some subsequence, \(m_n\rightarrow m\) for some function \(m\geqslant 0\). Moreover, by Fatou’s lemma,

$$\begin{aligned} \int _{{\mathbb {T}}} \frac{1}{m}\hbox {d}x\leqslant C. \end{aligned}$$

Suppose that \(\min m=m(x_0)=0\). Then, because \(\sqrt{m}\) is Hölder continuous, we have \(m(x)\leqslant C|x-x_0|\). However,

$$\begin{aligned} \int _{{\mathbb {T}}} \frac{1}{|x-x_0|}\hbox {d}x \end{aligned}$$

is not finite, which is a contradiction. Thus, m is a strictly positive minimizer. Moreover, it solves the corresponding Euler–Lagrange equation.

For \(j=0\), we rewrite the Euler–Lagrange equation as

$$\begin{aligned} -\epsilon ^2(\ln m)_{xx} + m -V(x) = -{\overline{H}}. \end{aligned}$$
(4.2)

Let \(\mathcal {P}\) be the set of nonnegative functions in \(L^\infty ({\mathbb {T}}^d)\) and consider the map \(\Xi :\mathcal {P}\rightarrow \mathcal {P}\) defined as follows. Given \(\eta \in \mathcal {P}\), we solve the PDE

$$\begin{aligned} -\epsilon ^2w_{xx} + \eta -V(x) = -{\overline{H}}, \end{aligned}$$

where \({\overline{H}}\) satisfies the compatibility condition

$$\begin{aligned} {\overline{H}}= \int _{{\mathbb {T}}} V \hbox {d}x - 1, \end{aligned}$$

and \(w:{\mathbb {T}}\rightarrow {\mathbb {R}}\) is such that \(\int e^w \hbox {d}x=1\). An elementary argument shows that w is uniformly bounded from above and from below. Next, we set \(\Xi (\eta )=e^w\). The mapping \(\Xi \) is continuous and compact. Accordingly, by Schauder’s Fixed Point Theorem, there is a fixed point, m,   that solves (4.2). By the convexity of the variational problem (4.1), this fixed point is the unique solution of the Euler–Lagrange equation. \(\square \)

Next, to study the convergence as \(\epsilon \rightarrow 0\), we investigate the \(\Gamma \)-convergence as \(\epsilon \rightarrow 0\) of \(J_\epsilon \).

Lemma 4.2

Let

$$\begin{aligned} J[m]= \int _{{\mathbb {T}}} \left( \frac{j^2}{2 m}+\frac{m^2}{2} -V(x) m\right) \hbox {d}x,\ m \in \mathcal {A'}=\mathcal {A} \cap \{m:\sqrt{m}\in W^{1,2}({\mathbb {T}})\}. \end{aligned}$$

Then, we have that

$$\begin{aligned} \Gamma -\lim \limits _{\epsilon \rightarrow 0} J_\epsilon = J,\ \text{ in }\ \mathcal {A'}\ \text{ with } \text{ respect } \text{ to } \text{ the } \text{ weak } \text{ convergence } \text{ in }\ L^2({\mathbb {T}}). \end{aligned}$$

Proof

Let

$$\begin{aligned} W_j(m)=\frac{j^2}{2 m}+\frac{m^2}{2} -V(x) m. \end{aligned}$$

Suppose that \(m,m_{\epsilon } \in \mathcal {A'}\) and \(m_{\epsilon } \rightharpoonup m\) in \(L^2({\mathbb {T}})\). Since \(W_j\) is convex, we have that

$$\begin{aligned} \liminf \limits _{\epsilon \rightarrow 0} \int \limits _{{\mathbb {T}}}W_j(m_{\epsilon })\hbox {d}x \geqslant \int \limits _{{\mathbb {T}}}W_j(m)\hbox {d}x. \end{aligned}$$

Therefore, we get

$$\begin{aligned} \liminf \limits _{\epsilon \rightarrow 0} J_{\epsilon }[m_{\epsilon }] \geqslant \liminf \limits _{\epsilon \rightarrow 0} \int \limits _{{\mathbb {T}}}W_j(m_{\epsilon })\hbox {d}x \geqslant \int \limits _{{\mathbb {T}}}W_j(m)\hbox {d}x=J[m]. \end{aligned}$$

Finally, for arbitrary \(m\in \mathcal {A',}\) we take \(m_{\epsilon }=m\) as a recovery sequence. \(\square \)

In Fig. 3, we observe numerical evidence of this \(\Gamma \)-convergence.

Fig. 3
figure 3

Solution m of (1.2) when \(g(m)=m,\ j=1,\ V(x)= \sin (2 \pi (x + 1/4))\) for \(\epsilon =0.01\) (dashed) and for \(\epsilon =0\) (solid)

5 Semiconcave Viscosity Solutions in Anti-monotone Mean-Field Games

Here, we investigate MFGs with decreasing g. To simplify, we assume that \(g(m)=-m\). However, our arguments are valid for a general decreasing g. In contrast with the monotone case, m may not be unique. Furthermore, m can be discontinuous and, thus, viscosity solutions of the Hamilton–Jacobi equation in (1.1) should be interpreted in the discontinuous sense. In this section, we are interested in semiconcave discontinuous viscosity solutions; that is, solutions satisfying conditions (a) and (b) stated in the Introduction. Here, we examine existence, uniqueness, and additional properties of such solutions. In Sect. 6, we prove that these solutions are indeed discontinuous viscosity solutions.

5.1 \(j \ne 0\), g Decreasing

To simplify the presentation, we consider \(j>0\).

With \(g(m)=-m,\) (2.1) becomes

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{j^2}{2m^2}+m={\overline{H}}-V(x);\\ m>0,\ \int \limits _{{\mathbb {T}}} m \hbox {d}x=1;\\ \int \limits _{{\mathbb {T}}} \frac{1}{m} \hbox {d}x=\frac{p}{j}. \end{array}\right. } \end{aligned}$$
(5.1)

The minimum of \(t\mapsto j^2/2t^2+t\) is attained at \(t_{\min }=j^{2/3}\). Thus, \(j^2/2t^2+t \geqslant 3j^{2/3}/2\) for \(t>0\). Therefore, a lower bound for \({\overline{H}}\) is

$$\begin{aligned} {\overline{H}}\geqslant {\overline{H}}_j^\mathrm{cr} =\max _{{\mathbb {T}}} V+\frac{3j^{2/3}}{2}, \end{aligned}$$
(5.2)

where the superscript \(^\mathrm{cr}\) stands for critical.

The function \(t \mapsto j^2/2t^2+t\) is decreasing on the interval \((0,t_{\min })\) and increasing on the interval \((t_{\min },+\infty )\). For any \({\overline{H}}\) satisfying (5.2), let \(m_{{\overline{H}}}^{-}\) and \(m_{{\overline{H}}}^{+}\) be the solutions of

$$\begin{aligned} \frac{j^2}{2\left( m_{{\overline{H}}}^\pm (x)\right) ^2}+m_{{\overline{H}}}^\pm (x)={\overline{H}}-V(x), \end{aligned}$$

with \(0\leqslant m_{{\overline{H}}}^-(x)\leqslant t_{\min }\leqslant m_{{\overline{H}}}^+(x)\). Due to (5.2), \(m_{{\overline{H}}}^{-}\) and \(m_{{\overline{H}}}^{+}\) are well defined. Furthermore, if \((u,m , {\overline{H}})\) solves (1.1), then m(x) agrees with either \(m^+_{{\overline{H}}}(x)\) or \(m^-_{{\overline{H}}}(x)\), almost everywhere in \({\mathbb {T}}\).

Let \(m_{j}^-:=m_{{\overline{H}}^\mathrm{cr}_j}^{-}\) and \(m_{j}^+:=m_{{\overline{H}}^\mathrm{cr}_j}^{+}\). Note that \(m_{j}^-(x)\leqslant m_{j}^+(x)\) for all \(x \in {\mathbb {T}}\), and the equality holds only at the maximum points of V. Hence, \(m_{j}^-(x)< m_{j}^+(x)\) on a set of positive Lebesgue measure unless V is constant.

The two fundamental quantities for our analysis are

$$\begin{aligned} {\left\{ \begin{array}{ll} \alpha ^{+}(j)=\int \limits _{0}^{1}m_{j}^{+}(x)\hbox {d}x,\\ \alpha ^{-}(j)=\int \limits _{0}^{1}m_{j}^{-}(x)\hbox {d}x. \end{array}\right. } \end{aligned}$$
(5.3)

If V is not constant, we have

$$\begin{aligned} \alpha ^{-}(j)<\alpha ^{+}(j) \end{aligned}$$

for \(j>0\).

Proposition 5.1

Suppose that \(x=0\) is the single maximum of V. Then, for every \(j>0,\) there exists a unique number, \(p_j\), such that (1.1) has a semiconcave solution with a current level, j. Moreover, the solution of (5.1), \((u_j,m_j,{\overline{H}}_j)\), is unique and given as follows.

  1. (i)

    If \(\alpha ^+(j) \leqslant 1,\)

    $$\begin{aligned} m_j(x)=m^{+}_{{\overline{H}}_j}(x),\quad u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$
    (5.4)

    where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).

  2. (ii)

    If \(\alpha ^-(j) \geqslant 1,\)

    $$\begin{aligned} m_j(x)=m^{-}_{{\overline{H}}_j}(x),\quad u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$
    (5.5)

    where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \({\overline{H}}_j\) is such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\).

  3. (iii)

    If \(\alpha ^-(j)< 1 < \alpha ^+(j)\), we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}\), and

    $$\begin{aligned} m_j(x)=m^{-}_{j}(x)\chi _{[0,d_j)}+m^{+}_{j}(x)\chi _{[d_j,1)},\ u_j(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j(y)}-p_jx, \end{aligned}$$
    (5.6)

    where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j(y)}\) and \(d_j\) is such that

    $$\begin{aligned} \int \limits _{\mathbb {T}}m_j(x)\hbox {d}x=\int \limits _{0}^{d_j} m^-_{j}(x)\hbox {d}x+\int \limits _{d_j}^1 m^{+}_{j}(x)\hbox {d}x=1. \end{aligned}$$

Proof

Case i The function \(j^2/2t^2+t\) is increasing on the interval \((t_{\min },+\infty )\). Therefore, \({\overline{H}}\mapsto m_{{\overline{H}}}^+(x)\) is increasing for all x. Hence, the mapping

$$\begin{aligned} {\overline{H}}\mapsto \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{+}(x)\hbox {d}x, \end{aligned}$$

is increasing. By assumption, \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j^\mathrm{cr}}^{+}(x)\hbox {d}x=\int \limits _{{\mathbb {T}}}m_j^+(x)\hbox {d}x\leqslant 1\). Therefore, there exists a unique \({\overline{H}}_j \geqslant {\overline{H}}_j^\mathrm{cr}\) such that \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j}^{+}(x)\hbox {d}x=1\). Thus, \((u_j,m_j)\) given by (5.4) is the unique solution of (1.1) with \({\overline{H}}={\overline{H}}_j\) and \(p=p_j\).

Case ii The function \(j^2/2t^2+t\) is decreasing on the interval \((0,t_{\min })\). Therefore, \(m_{{\overline{H}}}^-(x)\) is decreasing in \({\overline{H}}\) for all x. Hence, the mapping

$$\begin{aligned} {\overline{H}}\mapsto \int \limits _{{\mathbb {T}}}m_{{\overline{H}}}^{-}(x)\hbox {d}x \end{aligned}$$

is decreasing. By assumption, \( \int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j^\mathrm{cr}}^{-}(x)\hbox {d}x=\int \limits _{{\mathbb {T}}}m_j^-(x)\hbox {d}x\geqslant 1. \) Thus, there exists a unique number, \({\overline{H}}_j \geqslant {\overline{H}}_j^\mathrm{cr}\), such that \(\int \limits _{{\mathbb {T}}}m_{{\overline{H}}_j}^{-}(x)\hbox {d}x=1\). Hence, \((u_j,m_j)\) given by (5.5) is the unique solution of (1.1) with \({\overline{H}}={\overline{H}}_j\) and \(p=p_j\).

Case iii We first show that (1.1) does not have semiconcave solutions for \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\). By contradiction, suppose that (1.1) has a semiconcave solution, \((u,m,{\overline{H}}),\) for some \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\) and \(p \in {\mathbb {R}}\). Evidently, \( m(x)=m_{{\overline{H}}}^+(x)\chi _E+m_{{\overline{H}}}^-(x)\chi _{{\mathbb {T}}\setminus E} \) for some subset \(E \subset {\mathbb {T}}\). Furthermore,

$$\begin{aligned} \inf _{{\mathbb {T}}} \left( m_{{\overline{H}}}^+(x)-m_{{\overline{H}}}^-(x)\right) >0 \end{aligned}$$
(5.7)

because \({\overline{H}}>{\overline{H}}_j^\mathrm{cr}\). Moreover,

$$\begin{aligned} \int \limits _{{\mathbb {T}}}m(x)\hbox {d}x=\int \limits _{E}m_{{\overline{H}}}^+(x)\hbox {d}x+\int \limits _{{\mathbb {T}}\setminus E}m_{{\overline{H}}}^-(x)\hbox {d}x \end{aligned}$$

and

$$\begin{aligned} \int \limits _{{\mathbb {T}}}m^-_{{\overline{H}}}(x)\hbox {d}x<\int \limits _{{\mathbb {T}}}m^-_{j}(x)\hbox {d}x<1<\int \limits _{{\mathbb {T}}}m^+_{j}(x)\hbox {d}x<\int \limits _{{\mathbb {T}}}m^+_{{\overline{H}}}(x)\hbox {d}x. \end{aligned}$$

Therefore, neither E nor \({\mathbb {T}}\setminus E\) can be empty or have zero Lebesgue measure. Because E and \({\mathbb {T}}\setminus E\) are not negligible, there exists a real number, e,  such that for every \(\varepsilon >0,\)

$$\begin{aligned} (e-\varepsilon ,e) \cup E \ne \emptyset \quad \text {and}\quad (e,e+\varepsilon ) \cup E^c \ne \emptyset . \end{aligned}$$

According to (5.7), m has a negative jump, \(m(e^-)-m(e^+)<0\), at \(x=e\). Hence, \(u_x=j/m-p\) has a positive jump, \(\frac{j}{m^-(e)}-\frac{j}{m^+(e)}>0\), at \(x=e\). However, derivatives of semiconcave solutions can only have negative jumps and, thus, this contradiction implies \({\overline{H}}_j={\overline{H}}^\mathrm{cr}_j\).

Next, we construct \(m_j\) and \(u_j\) and determine \(p_j\). We look for a function \(m_j\) of the form

$$\begin{aligned} m_j(x)={\left\{ \begin{array}{ll} m^-_{j}(x),\ x\in [0,d),\\ m^+_{j}(x),\ x\in [d,1). \end{array}\right. } \end{aligned}$$
(5.8)

Note that (5.8) is the only possibility for \(m_j\) because \(m_j\) can switch from \(m^+_j\) to \(m^-_j\) only if there is no jump at the switching point; that is, \(m^+_j\) and \(m^-_j\) are equal at that point, which only holds at maximum of V. Thus, by periodicity, \(m_j\) can switch to \(m^-_j\) from \(m^+_j\) only at \(x=0\) and \(x=1\).

It remains to choose \(d \in (0,1)\) such that \(\int \limits _{{\mathbb {T}}} m_j(x)\hbox {d}x=1\). Let

$$\begin{aligned} \phi (d)=\int \limits _{0}^1 m_j(x)\hbox {d}x=\int \limits _{0}^d m^-_{j}(x)\hbox {d}x+\int \limits _{d}^1 m^{+}_{j}(x)\hbox {d}x. \end{aligned}$$

Because \(\phi (0)>1\) and \(\phi (1)<1\) and because \(\phi '(d)=m^-_{j}(d)-m^+_{j}(d)<0\) for \(d\in (0,1)\), there exists a unique \(d_j \in (0,1)\) such that \(\phi (d_j)=1\). The triplet defined by (5.6), \((u_j,m_j,{\overline{H}}_{j}),\) solves (1.1). \(\square \)

By the previous proposition, if V has a single maximum point then, for every current, \(j>0\), there exists a unique \(p_j\) and a unique triplet, \((u_j,m_j,{\overline{H}}_j)\), that solves (5.1) for \(p=p_j\). In contrast, as we show next, if V has multiple maxima and \(j>0\) is such that Case iii in Proposition 5.1 holds, there exist infinitely many solutions.

Proposition 5.2

Suppose that V attains a maximum at \(x=0\) and at \(x=x_0 \in (0,1)\). Let j be such that \(\alpha ^-(j)< 1 < \alpha ^+(j)\). Then, there exist infinitely many numbers, p,  and pairs, (um),  such that \((u,m,{\overline{H}}_j^\mathrm{cr})\) is a semiconcave solution of (1.1).

Proof

We look for solutions of the form

$$\begin{aligned} m_j^{d_1,d_2}(x)={\left\{ \begin{array}{ll} m^-_{j}(x),\ x\in [0,d_1) \cup [x_0,d_1),\\ m^+_{j}(x),\ x\in [d_1,x_0)\cup [d_2,1), \end{array}\right. } \end{aligned}$$

where \(0<d_1<x_0\) and \(x_0<d_2<1\). Note that \(m_j^{d_1,d_2}\) has two discontinuity points. At these points, \(m_j^{d_1,d_2}\) has positive jumps. Hence, if we define

$$\begin{aligned} u_j^{d_1,d_2}(x)=\int \limits _{0}^{x}\frac{j\hbox {d}y}{m_j^{d_1,d_2}(y)}-p_j^{d_1,d_2}x,\quad x \in {\mathbb {T}}, \end{aligned}$$

where \(p_j^{d_1,d_2}=\int \limits _{{\mathbb {T}}}\frac{j\hbox {d}y}{m_j^{d_1,d_2}(y)}\), the triplet \((u_j^{d_1,d_2},m_j^{d_1,d_2},{\overline{H}}_j^\mathrm{cr})\) is a semiconcave solution of (1.1) if

$$\begin{aligned} \int \limits _{{\mathbb {T}}}m_j^{d_1,d_2}(x)\hbox {d}x=1. \end{aligned}$$

To determine \(d_1\) and \(d_2\), we consider the function

$$\begin{aligned} \phi (d_1,d_2)=&\int \limits _{0}^1 m_j^{d_1,d_2}(x)\hbox {d}x=\int \limits _{0}^{d_1} m^-_{j}(x)\hbox {d}x+\int \limits _{d_1}^{x_0} m^{+}_{j}(x)\hbox {d}x\\ \nonumber&+\int \limits _{x_0}^{d_2} m^-_{j}(x)\hbox {d}x+\int \limits _{d_2}^{1} m^{+}_{j}(x)\hbox {d}x, \quad (d_1,d_2) \in (0,x_0)\times (x_0,1). \end{aligned}$$

We have that \(\phi (0,x_0)=\int \limits _{0}^1 m^{+}_{j}(x)\hbox {d}x>1\) and \(\phi (x_0,1)=\int \limits _{0}^1 m^{-}_{j}(x)\hbox {d}x<1\). Because \(\phi \) is continuous, there exists a pair, \((d_1,d_2)\in (0,x_0)\times (x_0,1),\) such that \(\phi (d_1,d_2)=1\). In fact, there are infinitely many such pairs. For arbitrary continuous curve \(\gamma \subset [0,x_0]\times [x_0,1]\) connecting the points \((0,x_0)\) and \((x_0,1)\), there exists at least one pair, \((d_1,d_2) \in \gamma ,\) such that \(\phi (d_1,d_2)=1\). To each such pair corresponds a triplet \((u_j^{d_1,d_2},m_j^{d_1,d_2},{\overline{H}}_j^\mathrm{cr})\) that is a semiconcave solution of (1.1). \(\square \)

Fig. 4
figure 4

Solution m for \(j=0.001\) and \(V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\)

Fig. 5
figure 5

Solution m for \(j=10\) and \(V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\)

Fig. 6
figure 6

Solution \(m_j\) for \(j=0.5\) and \(V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\)

Fig. 7
figure 7

Two distinct solutions for \(j=0.5\) and \(V(x)=\frac{1}{2} \sin (4 \pi (x + 1/8))\)

Let \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\). Because V has a single maximum, Proposition 5.1 gives that (1.1) admits a unique semiconcave solution for all values of \(j>0\). In Figs. 4, 5 and 6, we plot m for different values of j. In Fig. 4, we plot m in the low-current regime, \(j=0.001;\) that is, Case i in Proposition 5.1. As we can see, m is smooth as predicted by the proposition. In Fig. 5, we plot m in the high-current regime, \(j=10; \) that is, Case ii in Proposition 5.1. As before, we observe that m is smooth. Finally, in Fig. 6, we plot m for the intermediate-current regime, \(j=0.5\); that is, Case iii in Proposition 5.1. As we can see, m is discontinuous.

Next, we consider the potential \(V(x)=\frac{1}{2}\sin \big (4\pi \big (x+\frac{1}{8}\big )\big ) \) that has two maxima. By Proposition 5.2, we have infinitely many two-jump solutions. In Fig. 7, we plot two such solutions.

5.2 \(j = 0\), g Decreasing

Now, we examine the case when the current vanishes, and, thus, we consider the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(u_x+p)^2}{2}+m={\overline{H}}-V(x);\\ m \geqslant 0,\ \int \limits _{{\mathbb {T}}} m \hbox {d}x=1;\\ m(u_x+p)=0. \end{array}\right. } \end{aligned}$$
(5.9)

Suppose that (5.9) has a solution. Because \(m \geqslant 0\), we have \({\overline{H}}-V(x)\geqslant ~0\) for\(\ x\in ~{\mathbb {T}}\). Thus, \({\overline{H}}\geqslant \max \limits _{{\mathbb {T}}} V\). On the other hand,

$$\begin{aligned} \int \limits _{{\mathbb {T}}} \left( {\overline{H}}-V(x)\right) \hbox {d}x \geqslant \int \limits _{{\mathbb {T}}} m \hbox {d}x=1. \end{aligned}$$

Consequently, \({\overline{H}}\geqslant 1+ \int \limits _{{\mathbb {T}}} V\). Therefore,

$$\begin{aligned} {\overline{H}}\geqslant \max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}}V\right) =:{\overline{H}}_0. \end{aligned}$$

It turns out that \({\overline{H}}_0\) is the only possible value for \({\overline{H}}\) as we show next.

Proposition 5.3

The MFG (5.9) does not have semiconcave solutions for \({\overline{H}}>{\overline{H}}_0\).

Proof

Suppose that \({\overline{H}}>{\overline{H}}_0\) and that the triplet \((u,m,{\overline{H}})\) is a semiconcave solution of (5.9). If \(m(x)>0\), then \(u_x(x)+p=0\) and \(m={\overline{H}}-V(x)\). If \(m(x)=0\), then

$$\begin{aligned} (u_x(x)+p)^2=2({\overline{H}}-V(x)). \end{aligned}$$

Thus, on the set \(Z=\{x:m(x)=0\}\),

$$\begin{aligned} u_x(x)+p=\sqrt{2({\overline{H}}-V(x))}\ \text {or}\ u_x(x)+p=-\sqrt{2({\overline{H}}-V(x))}. \end{aligned}$$

We have that \(\int \limits _{{\mathbb {T}}} \left( {\overline{H}}-V(x)\right) \hbox {d}x>1\). Hence, the set Z has a positive Lebesgue measure. Otherwise, \(m(x)={\overline{H}}-V(x)\) everywhere, and thus \(\int \limits _{{\mathbb {T}}}m(x)\hbox {d}x>1\). Consequently, \(u_x+p\) is either \(\sqrt{2({\overline{H}}-V(x))}\) or \(-\sqrt{2({\overline{H}}-V(x))}\) on Z. Suppose that \(u_x(x)+p\) takes the value \(-\sqrt{2({\overline{H}}-V(x))}\) at some point \(x \in {\mathbb {T}}\). Without loss of generality, we can assume that \(u_x(0)+p=-\sqrt{2({\overline{H}}-V(0))}\). Let

$$\begin{aligned} e=\sup \left\{ x \in (0,1)\ \text {s.t.}\ u_x(x)+p=-\sqrt{2({\overline{H}}-V(x))}\right\} . \end{aligned}$$

Then, at \(x=e\), the function \(u_x+p\) has a jump of size \(\sqrt{2({\overline{H}}-V(e))}\) or \(2\sqrt{2({\overline{H}}-V(e))}\). However, this is impossible because \(u_x\) is a semiconcave solution, and it cannot have positive jumps. Therefore, \(u_x(x)+p\) takes only the values \(\sqrt{2({\overline{H}}-V(x))}\) and 0. But then, \(u_x\) must have a positive jump from 0 to \(\sqrt{2({\overline{H}}-V(x))}\) at some point, which also contradicts the regularity property. \(\square \)

Now, we construct solutions to (5.9) with \({\overline{H}}={\overline{H}}_{0}\). It turns out that if V has a large oscillation, then (5.9) has infinitely many semiconcave solutions.

Proposition 5.4

We have that

  1. (i)

    if \(1+\int \limits _{{\mathbb {T}}}V \geqslant \max \limits _{{\mathbb {T}}} V\), then the triplet \((u_0,m_0,{\overline{H}}_0)\) with

    $$\begin{aligned} m_0(x)={\overline{H}}_0-V(x),\ u_0(x)=0, \end{aligned}$$
    (5.10)

    solves (5.9) in the classical sense for \(p=0\);

  2. (ii)

    if \(\max \limits _{{\mathbb {T}}} V > 1+\int \limits _{{\mathbb {T}}}V\), define

    $$\begin{aligned} m_0^{d_1,d_2}(x)={\left\{ \begin{array}{ll} {\overline{H}}_{0}-V(x),\ x\in [d_1,d_2] ,\\ 0,\ x\in {\mathbb {T}}\setminus [d_1,d_2], \end{array}\right. } \end{aligned}$$
    (5.11)

    and

    $$\begin{aligned} u_0^{d_1,d_2}(x)=\int \limits _{0}^{x} (u_0^{d_1,d_2})_x(y) \hbox {d}y,\quad x\in {\mathbb {T}}, \end{aligned}$$
    (5.12)

    where

    $$\begin{aligned} (u_0^{d_1,d_2})_x(x)={\left\{ \begin{array}{ll} \sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2},\ x\in [0,d_1) ,\\ -p_0^{d_1,d_2},\ x\in [d_1,d_2],\\ -\sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2},\ x\in (d_2,1], \end{array}\right. } \end{aligned}$$

    and \(p_0^{d_1,d_2}=\int \limits _{0}^{d_1} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x-\int \limits _{d_2}^{1} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x\). Then, for any pair, \((d_1,d_2)\), such that

    $$\begin{aligned} \int \limits _{d_1}^{d_2} ({\overline{H}}_{0}-V(x))\hbox {d}x=1, \end{aligned}$$
    (5.13)

    the triplet \((u_0^{d_1,d_2},m_0^{d_1,d_2},{\overline{H}}_0)\) is a semiconcave solution for (5.9) for \(p=p_0^{d_1,d_2}\). Furthermore, there exist infinitely many pairs, \((d_1,d_2)\), such that (5.13) holds.

Proof

Case i In this case, \({\overline{H}}_0=1+\int \limits _{{\mathbb {T}}} V(x)\hbox {d}x\) and straightforward computations show that (5.10) defines a classical solution of (5.9).

Case ii In this case, we have that \({\overline{H}}_{0}=\max \limits _{{\mathbb {T}}} V\) and that \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1\). Without loss of generality, we assume that 0 is a point of maximum for V.

Note that \((u_0^{d_1,d_2})_x\) has only negative jumps and \(u_0^{d_1,d_2}\) satisfies (5.9) almost everywhere. Thus, the triplet \((u_0^{d_1,d_2},m_0^{d_1,d_2},{\overline{H}}_0)\) is a semiconcave solution of (5.9) if \(\int \limits _{{\mathbb {T}}}m_0^{d_1,d_2}(x)\hbox {d}x=1\). However, the latter is equivalent to (5.13). Since \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1,\) we can find infinitely many such pairs. We find \(p_0^{d_1,d_2}\) from the identity \(\int \limits _{{\mathbb {T}}} (u_0^{d_1,d_2})_x(x)\hbox {d}x=0\). \(\square \)

Figures 8 and 9 show the solutions of (5.9) for \(V(x)=5\sin \big (2\pi \big (x+\frac{1}{4}\big ) \big ),\ x \in {\mathbb {T}}\).

Fig. 8
figure 8

\(m_0\) as defined in (5.11) for \(V(x)=5 \sin \big (2 \pi \big (x+\frac{1}{4}\big )\big )\) with \(d_2=0.5\) and \(d_1\) such that (5.13) holds

Fig. 9
figure 9

\(u_0\) (left) and \((u_0)_x\) (right) as defined in (5.11) for \(V(x)=5 \sin \big (2 \pi \big (x+\frac{1}{4}\big )\big )\) with \(d_2=0.5\) and \(d_1\) such that (5.13) holds

Remark 5.5

If V has multiple maxima and Case ii in Proposition 5.4 holds, there is a larger family of solutions. Let \(x=x_0 \in (0,1) \) be a point of maximum for V. For fixed real numbers, \(d_1<d_2<e_1<e_2\), define

$$\begin{aligned} m_0^{d_1,d_2,e_1,e_2}(x)={\left\{ \begin{array}{ll} {\overline{H}}_{0}-V(x),\ x\in [d_1,d_2]\cup [e_1,e_2],\\ 0,\ \text {elsewhere}, \end{array}\right. } \end{aligned}$$
(5.14)

and

$$\begin{aligned} u_0^{d_1,d_2,e_1,e_2}(x)=\int \limits _{0}^{x} (u_0^{d_1,d_2,e_1,e_2})_x(y) \hbox {d}y,\quad x\in {\mathbb {T}}, \end{aligned}$$
(5.15)

where

$$\begin{aligned} (u_{d_1,d_2,e_1,e_2})_x(x)={\left\{ \begin{array}{ll} \sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2,e_1,e_2},\ x\in [0,d_1) \cup [x_0,e_1),\\ -\sqrt{2({\overline{H}}_{0}-V(x))}-p_0^{d_1,d_2,e_1,e_2},\ x\in (d_2,x_0] \cup (e_2,1],\\ -p_0^{d_1,d_2,e_1,e_2},\ \text {elsewhere}. \end{array}\right. } \end{aligned}$$

Note that \((u_{d_1,d_2,e_1,e_2})_x(x)\) is periodic, only has negative jumps, and solves (5.9) almost everywhere. Hence, the triplet \((u_0^{d_1,d_2,e_1,e_2},m_0^{d_1,d_2,e_1,e_2},{\overline{H}}_{0})\) is a semiconcave solution of (5.9) if

$$\begin{aligned} \int \limits _{0}^{1} m_{d_1,d_2,e_1,e_2}(x) \hbox {d}x=1 \end{aligned}$$
(5.16)

for

$$\begin{aligned} p_0^{d_1,d_2,e_1,e_2}=\int \limits _{[0,d_1) \cup [x_0,e_1)} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x-\int \limits _{(d_2,x_0] \cup (e_2,1]} \sqrt{2({\overline{H}}_{0}-V(x))}\hbox {d}x. \end{aligned}$$

The equality (5.16) is equivalent to

$$\begin{aligned} \int \limits _{d_1}^{d_2} ({\overline{H}}_{0}-V(x))\hbox {d}x+\int \limits _{e_1}^{e_2} ({\overline{H}}_{0}-V(x))\hbox {d}x=1. \end{aligned}$$
(5.17)

Since \(\int \limits _{0}^{1} ({\overline{H}}_{0}-V(x))\hbox {d}x>1\), we can find infinitely many quadruples \((d_1,d_2,e_1,e_2)\) such that (5.17) holds. Hence, we can generate infinitely many solutions of the form (5.14), (5.15).

From Propositions 5.1 and 5.4, for every semiconcave solution, m,  of (1.1) in the low-current regime (\(j=0\) or Case i in Proposition 5.1), the smaller V(x) is, the larger m(x) is. This is paradoxical because V(x) represents the spatial preference of the agents and preferred regions correspond to high values of V. Thus, areas that are less desirable have a high populational density. Therefore, it is possible that the most preferred site is empty and agents aggregate at the least preferred site. For example, in (5.11), m vanishes near the maximum of V and is supported in the neighborhood of the minimum of V, as illustrated in Fig. 8. Hence, if agents do not move fast (low current), they prefer staying together rather than being in a better place, see Fig. 4. In the high-current regime (Case ii in Proposition 5.1), the opposite situation occurs: the larger V(x) is, the larger m(x) becomes, see Fig. 5. Therefore, preferred areas have a high population density. Hence, if the level of the current is high enough (we give quantitative estimates in the next section), agents are better off at preferred sites and with more agents. Finally, for the intermediate-current level (Case iii in Proposition 5.1), we observe a more complex situation. The solution, m,  consists of two parts: \(m^-_j(x)\) and \(m^+_j(x)\). \(m^-_j(x)\) is larger where V(x) is larger and the opposite holds for \(m^+_j(x)\). Therefore, in the region where m is \(m^-_j\), the most preferred sites are more densely populated. In the region where m is \(m^+_j\) the less preferred sites are more densely populated. This is illustrated in Fig. 6.

6 Discontinuous Viscosity Solutions

In the anti-monotone case considered in the preceding section, m can be discontinuous. Thus, in addition to semiconcave solutions examined before, we need to consider viscosity solutions in the framework of discontinuous Hamiltonians. In what follows, we recall the main definitions in [2]. Given a locally bounded function, \(F:{\mathbb {T}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\), we define its lower and upper semicontinuous envelopes as

$$\begin{aligned} F_*(x,q)=\liminf \limits _{(y,r)\rightarrow (x,q)} F(y,r), \quad F^*(x,q)=\limsup \limits _{(y,r)\rightarrow (x,q)} F(y,r) \end{aligned}$$

for \((x,q)\in {\mathbb {T}}\times {\mathbb {R}}.\) We say that a locally bounded function, \(u:{\mathbb {T}}\rightarrow {\mathbb {R}}\), is a viscosity solution of \(F(x, Du)=0\) if, for any smooth function, \(\phi :{\mathbb {T}}\rightarrow {\mathbb {R}},\) we have that

$$\begin{aligned} F_*(x,\phi _x)\leqslant 0\quad \text {for all}\quad x\in {\text {argmax}}(u-\phi ) \end{aligned}$$

and

$$\begin{aligned} F^*(x,\phi _x)\geqslant 0\quad \text {for all}\quad x\in {\text {argmin}}(u-\phi ). \end{aligned}$$

Let \(m:{\mathbb {T}}\rightarrow {\mathbb {R}}\), \(m\in L^\infty ({\mathbb {T}})\), and set

$$\begin{aligned} m_*(x)=\liminf \limits _{y\rightarrow x} m(y), \quad m^*(x)=\limsup \limits _{y\rightarrow x} m(y). \end{aligned}$$

Suppose that \(V:{\mathbb {T}}\rightarrow {\mathbb {R}}\) is continuous. Then, for our setting, we have

$$\begin{aligned} F(x,q)=\frac{q^2}{2}+V(x)+m(x)-{\overline{H}}. \end{aligned}$$

Consequently,

$$\begin{aligned} F_*(x,q)=\frac{q^2}{2}+V(x)+m_*(x)-{\overline{H}}, \quad F^*(x,q)=\frac{q^2}{2}+V(x)+m^*(x)-{\overline{H}}. \end{aligned}$$

Here, we look for piecewise smooth solutions of (1.1) for \(g(m)=-m\) that are not necessarily semiconcave; that is, the condition \(\lim \limits _{x\rightarrow x^-}u_x(x)\geqslant \lim \limits _{x\rightarrow x^+}u_x(x)\) is not necessarily satisfied. It turns out that there are infinitely many such solutions for all \(j\ne 0\) independent of properties of V, and the jump direction of \(u_x\) is irrelevant. This contrasts with the fact that for V with a single maximum, there exists just one semiconcave solution (Proposition 5.1).

Thus, we select a current level, \(j>0\) (\(j<0\) is analogous), and fix arbitrary points \(0\leqslant x_0<x_1<\cdots <x_n\leqslant 1\) and \({\overline{H}}\geqslant {\overline{H}}^\mathrm{cr}_j\). We search for solutions \((u,m,{\overline{H}})\) such that m is continuous on the intervals \((x_i,x_{i+1})\) for \(0\leqslant i\leqslant n-1\). From the above discussion, we have:

Proposition 6.1

Assume that \(j>0\) and that

  • \(m>0\) is continuous on \((x_i,x_{i+1})\) and

    $$\begin{aligned} \frac{j^2}{2m(x)^2}+m(x)={\overline{H}}-V(x)\ \text {for all}\ x\ne x_i. \end{aligned}$$
  • \({\overline{H}}\) is such that \(\int \limits _{{\mathbb {T}}} m(x) \hbox {d}x=1\).

Then, the triplet \((u,m,{\overline{H}})\) solves (1.1), where

$$\begin{aligned} u(x)=\int \limits _{0}^{x}\frac{j}{m(y)}\hbox {d}y-px,\quad p=\int \limits _{{\mathbb {T}}}\frac{j}{m(y)}\hbox {d}y. \end{aligned}$$

Proof

We postpone the proof to Appendix. \(\square \)

Remark 6.2

If g is increasing, the construction of piecewise smooth solutions with discontinuous m in the previous proposition fails because \(m(x_i^-)=m(x_i^+)\), necessarily. Therefore, the smooth solutions found in Proposition 3.1 are the only possible ones: there are no extra discontinuous solutions as in the case of decreasing g. This is yet another consequence of the regularizing effect of an increasing g.

7 Anti-monotone Elliptic Mean-Field Games

Now, we consider anti-monotone elliptic MFGs and the corresponding variational problem (2.5) with \(g(m)=-m\). We use the direct method in the calculus of variations to prove the existence of a minimizer of the functional

$$\begin{aligned} J_\epsilon [m] = \int _{\mathbb {T}}\left( \epsilon ^2 \frac{m_x^2}{2m}+\frac{j^2}{2m} - \frac{m^2}{2} -V(x) m \right) \hbox {d}x. \end{aligned}$$

Proposition 7.1

For each \(j\in {\mathbb {R}}\), there exists a minimizer, m, of \(J_\epsilon [m]\) in

$$\begin{aligned} \mathcal {A} = \left\{ m\in W^{1,2}({\mathbb {T}}): m\geqslant 0 \wedge \int _{{\mathbb {T}}} m =1 \right\} . \end{aligned}$$

Moreover, if \(j\ne 0\) then \(m>0\), and it solves the Euler–Lagrange equation

$$\begin{aligned} -\epsilon ^2 \left( \frac{m_x}{m}\right) _x - \frac{j^2}{2m^2} -m + {\overline{H}}-V(x) =0 \end{aligned}$$

for some \({\overline{H}}\in {\mathbb {R}}\).

If \(j=0, \) m is not necessarily positive. Nevertheless, the Euler–Lagrange equation

$$\begin{aligned} -\epsilon ^2 \left( \frac{m_x}{m}\right) _x -m + {\overline{H}}-V(x) =0 \end{aligned}$$

admits a smooth solution.

Proof

We consider the cases \(j\ne 0\) and \(j=0\) separately.

Case 1 \(j\ne 0\) We take a minimizing sequence, \(m_n\in \mathcal {A,}\) and note that there is a constant, C,  such that

$$\begin{aligned} \int _{\mathbb {T}}\epsilon ^2 \frac{(m_n)_x^2}{2m_n}+\frac{j^2}{2m_n} \hbox {d}x \leqslant C + \int _{\mathbb {T}}\frac{m_n^2}{2}. \end{aligned}$$

Therefore, we seek to control \(\int m^2\) by the integral expression on the left-hand side.

Fig. 10
figure 10

Solution m when \(g(m)=-m,\ j=0.001,\ V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\) for \(\epsilon =0.01\) (dashed) and for \(\epsilon =0\) (solid)

For that, we recall the Gagliardo–Nirenberg inequality,

$$\begin{aligned} \left| \left| w\right| \right| _{L^p} \leqslant \left| \left| w_x\right| \right| _{L^r}^a \left| \left| w\right| \right| _{L^q}^{1-a} \end{aligned}$$
(7.1)

for \(0\leqslant a \leqslant 1\), with

$$\begin{aligned} \frac{1}{p} = a \left( \frac{1}{r} - 1\right) + (1-a) \frac{1}{q}, \end{aligned}$$

whenever \(\int _{{\mathbb {T}}} w=0\). With \(p=4\) and \(r=q=2\), we obtain \(a=\frac{1}{4}\). Using these values in (7.1), taking into account that \(\int m =1, \) and choosing \(w=\sqrt{m}\), we obtain

$$\begin{aligned} \int _{{\mathbb {T}}} m_n^2 \leqslant C+C \left( \int _{{\mathbb {T}}}\frac{(m_n)_x^2}{2m_n}\right) ^\frac{1}{2}. \end{aligned}$$

Thus, using a weighted Cauchy inequality,

$$\begin{aligned} \int _{\mathbb {T}}\epsilon ^2 \frac{(m_n)_x^2}{2m_n}+\frac{j^2}{2m_n} \hbox {d}x \leqslant C. \end{aligned}$$

Finally, we argue as in the proof of Proposition 4.1 and show the existence of a minimizer.

Case 2 \(\mathbf j= 0\) The proof of the existence of minimizers in the case \(j\ne 0\) is also valid for the case \(j=0\). Nevertheless, when \(j=0\) the minimizers are not necessarily positive and therefore do not necessarily solve the Euler–Lagrange equation. Hence, using a fixed point argument as in the proof of Proposition 4.1, we find a smooth solution for the Euler–Lagrange equation, which may not be a minimizer. For that, we rewrite the Euler–Lagrange equation as

$$\begin{aligned} -\epsilon ^2(\ln m)_{xx} - m -V(x) = -{\overline{H}}\end{aligned}$$

and argue as before. \(\square \)

The preceding result does not give a unique minimizer or a unique solution of the Euler–Lagrange equation. Moreover, we note that as \(\epsilon \rightarrow 0\), numerical evidence suggests that in general there is no \(\Gamma \)-convergence to a minimizer. The reason is that \(J_{\epsilon }\) is not convex. Nevertheless, heuristically, the convex part of the functional becomes larger when j is large. Thus, there may still be \(\Gamma \)-convergence for large enough j. At least numerically, this behavior holds. In Figs. 10 and 11, we plot a solution for \(\epsilon =0.01\) versus the solution with \(\epsilon =0\) for \(j=0.001\) and \(j=1\), respectively, and we observe non-convergence. In Fig. 12, we plot a solution for \(\epsilon =0.01\) versus the solution for \(\epsilon =0\) for \(j=100\), and we observe convergence as we expected.

Fig. 11
figure 11

Solution m when \(g(m)=-m,\ j=1,\ V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\) for \(\epsilon =0.01\) (dashed) and for \(\epsilon =0\) (solid)

Fig. 12
figure 12

Solution m when \(g(m)=-m,\ j=100,\ V(x)=\frac{1}{2} \sin (2 \pi (x + 1/4))\) for \(\epsilon =0.01\) (dashed) and for \(\epsilon =0\) (solid)

8 Regularity Regimes of the Current Equation for \(g(m)=-m\)

Now, we analyze the regularity regimes of (5.1); that is, we determine for which values of j (5.1) have or fail to have smooth solutions. For simplicity, we assume that 0 is the only point of maximum of V. Moreover, as before, we consider the case \(j\geqslant 0\), as the case \(j<0\) is analogous.

We begin by proving that \(\alpha ^+,\alpha ^-\), defined in (5.3), are monotone.

Proposition 8.1

We have that

  1. (i)

    \(\alpha ^+\) and \(\alpha ^-\) are increasing on \((0,\infty )\);

  2. (ii)

    \(\lim \limits _{j\rightarrow +\infty }\alpha ^+(j)=\lim \limits _{j\rightarrow +\infty }\alpha ^-(j)=\infty \);

  3. (iii)

    \(\lim \limits _{j\rightarrow 0} \alpha ^+(j)=\max \limits _{{\mathbb {T}}}V-\int \limits _{{\mathbb {T}}} V(x) \hbox {d}x, \ \lim \limits _{j\rightarrow 0} \alpha ^-(j)=0\).

Proof

(i):

First, we prove that \(m^+_j(x)\) and \(m^-_j(x)\) (see Sect. 5.1 for the definition) are increasing in j at every point \(x \in {\mathbb {T}}\). We fix x and set \(h=(\max \limits _{{\mathbb {T}}} V )-V(x)\). If \(h=0\), then \(m^-_j(x)=j^{2/3}\), which is an increasing function of j. Next, for \(h>0\) let \(t(j)=m^-_j(x)<j^{2/3}\). We have that

$$\begin{aligned} \frac{j^2}{2t(j)^2}+t(j)-\frac{3}{2}j^{2/3}=h. \end{aligned}$$

By the implicit function theorem, t(j) is differentiable. Differentiating the previous equation in j gives

$$\begin{aligned} t'(j)=\frac{j^{-1/3}-\frac{j}{t^2}}{1-\frac{j^2}{t^3}}. \end{aligned}$$

Because \(0<t(j)<j^{2/3},\ t'(j)>0\). Hence, t(j) is increasing. The proof for \(m^+_j(x)\) is identical.

(ii):

By definition, \(m^+_j(x) \geqslant j^{2/3}\). Hence, \(\lim \limits _{j\rightarrow \infty }\alpha ^+(j)=\infty \). On the other hand, for j large enough, we have

$$\begin{aligned} \frac{j^2}{2(j^{2/3}/2)^2}+\frac{j^{2/3}}{2}-\frac{3}{2}j^{2/3}=j^{2/3}&>\max \limits _{{\mathbb {T}}}V- V(x)\\ \nonumber&=\frac{j^2}{2(m^-_j(x))^2}+m^-_j(x)-\frac{3}{2}j^{2/3}. \end{aligned}$$

Therefore, \(m^-_j(x)>j^{2/3}/2 \) and \(\lim \limits _{j\rightarrow \infty }\alpha ^-(j)=\infty \).

(iii):

Because \(m^-_j(x) \leqslant j^{2/3}\) for every \(x\in {\mathbb {T}}\), \(\lim \limits _{j\rightarrow 0}m^-_j(x)=0\) for all \(x\in {\mathbb {T}}\). Thus, \(\lim \limits _{j\rightarrow 0}\alpha ^-(j)=0\). On the other hand, \(m^+_j(x) \geqslant j^{2/3}\). Thus, \(0\leqslant \frac{j^2}{(m_j^+(x))^2} \leqslant j^{2/3}\). Therefore,

$$\begin{aligned} \lim \limits _{j \rightarrow 0} m^+_j(x)=\lim \limits _{j \rightarrow 0}\left( \frac{3}{2}j^{2/3}-\frac{j^2}{2(m_j^+(x))^2}+\max \limits _{{\mathbb {T}}}V-V(x)\right) =\max \limits _{{\mathbb {T}}}V-V(x). \end{aligned}$$

Thus,

$$\begin{aligned} \lim \limits _{j\rightarrow 0} \alpha ^+(j)=\max \limits _{{\mathbb {T}}}V-\int \limits _{{\mathbb {T}}} V(x) \hbox {d}x. \end{aligned}$$

\(\square \)

Next, we define two numbers that characterize regularity regimes of (1.1):

$$\begin{aligned} j_\mathrm{lower}=\inf \{j>0\ \text {s.t.} \ \alpha ^{+}(j)> 1\}, \end{aligned}$$
(8.1)

and

$$\begin{aligned} j_\mathrm{upper}=\inf \{j>0\ \text {s.t.} \ \alpha ^{-}(j)> 1\}. \end{aligned}$$
(8.2)

Proposition 8.2

Let \(j_\mathrm{lower}\) and \(j_\mathrm{upper}\) be given by (8.1) and (8.2). Then

  1. (i)

    \(0\leqslant j_\mathrm{lower}< j_\mathrm{upper}<\infty \);

  2. (ii)

    for \(j\geqslant j_\mathrm{upper}\), the system (1.1) has smooth solutions;

  3. (iii)

    for \(j_\mathrm{lower}<j< j_\mathrm{upper}\), the system (1.1) has only discontinuous solutions;

  4. (iv)

    if \(j_\mathrm{lower}>0\), the system (1.1) has smooth solutions for \(0<j\leqslant j_\mathrm{lower}\).

Proof

The proof is a straightforward application of Propositions 5.1 and 8.1. \(\square \)

Finally, we characterize the regularity at \(j=0\).

Proposition 8.3

The system (5.9) admits smooth solutions if and only if

$$\begin{aligned} \alpha ^+(0)\leqslant 1. \end{aligned}$$

Proof

The proof follows from (iii) in Proposition 8.1 and (i) in Proposition 5.4. \(\square \)

Let \(V(x)=A \sin (2\pi (x+1/4))\). In Fig. 13, we plot \(\alpha ^+\) and \(\alpha ^-\) for \(A=0.5\) and \(A=5\). From Proposition 8.1, \(\alpha ^+(0)=A\). Thus, if \(A=0.5,\) we have \(\alpha ^+(0)<1\) and, for \(A=5,\) we have \(\alpha ^+(0)>1\). Therefore, \(j_\mathrm{lower}>0\) for \(A=0.5\) and \(j_\mathrm{lower}=0\) for \(A=5\). Hence, if \(A=0.5,\) (1.1) has smooth solutions for a low enough current level (\(j\leqslant 0.218\)) or for a high enough current level (\(j\geqslant 1.750\)). In contrast, if \(A=5,\) there are no smooth solutions for low currents, only for large currents (\(j\geqslant 3.203\)).
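The quantities \(\alpha ^{\pm }(j)\) and the thresholds above are easy to approximate numerically. The following sketch (our own code, assuming the branch characterization of Sect. 5.1) finds \(m^{\pm }_j(x)\) as the two positive roots of the cubic \(2m^3-2({\overline{H}}^\mathrm{cr}_j-V(x))m^2+j^2=0\) and locates \(j_\mathrm{lower}\) and \(j_\mathrm{upper}\) by bisection, which is justified by the monotonicity in Proposition 8.1.

```python
import numpy as np
from scipy.optimize import brentq

def alphas(j, A=0.5, N=400):
    # alpha^{+/-}(j) = int_T m^{+/-}_j(x) dx, where m^{+/-}_j(x) are the two
    # positive roots of j^2/(2 m^2) + m = Hcr - V(x),
    # Hcr = (3/2) j^{2/3} + max V, i.e., of 2 m^3 - 2 (Hcr - V(x)) m^2 + j^2 = 0.
    x = (np.arange(N) + 0.5) / N                 # midpoint rule on the torus
    V = A * np.sin(2 * np.pi * (x + 0.25))
    Hcr = 1.5 * j ** (2 / 3) + V.max()
    a_plus = a_minus = 0.0
    for v in V:
        # real parts handle the double root m = j^{2/3} at the maximizer of V
        r = np.sort(np.roots([2.0, -2.0 * (Hcr - v), 0.0, j**2]).real)
        a_minus += r[1] / N                      # m^-: smaller positive root
        a_plus += r[2] / N                       # m^+: larger positive root
    return a_plus, a_minus

# (8.1)-(8.2); alpha^{+/-} are increasing (Proposition 8.1), so bisection works
j_lower = brentq(lambda j: alphas(j)[0] - 1.0, 1e-6, 10.0)
j_upper = brentq(lambda j: alphas(j)[1] - 1.0, 1e-6, 10.0)
print(j_lower, j_upper)                          # approx 0.218 and 1.750 for A = 0.5
```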

We end the section with an a priori estimate on the current level of smooth solutions.

Proposition 8.4

(A priori estimate) Suppose that \(\max \limits _{{\mathbb {T}}} V >1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\) and let \((u,m,{\overline{H}})\) be a smooth solution of (1.1) with \(m>0\). Then, there exists a constant, \(c(V)>0\), such that

$$\begin{aligned} j \geqslant c(V). \end{aligned}$$
(8.3)

Proof

From (iii) in Proposition 8.1 and the assumption on V, we have that \(\alpha ^+(0)>1\). Thus, by Proposition 8.3, (1.1) does not have smooth solutions for \(j=0\); moreover, since \(\alpha ^+\) is increasing, \(j_\mathrm{lower}=0\). Taking \(c(V)=j_\mathrm{upper}\), Proposition 8.2 yields (8.3). \(\square \)

The preceding proposition shows that if the potential, V, has a large oscillation (as in the example with \(A=5\) in Fig. 13), then only high-current solutions are smooth.

Fig. 13 \(\alpha ^+\) and \(\alpha ^-\) for \(V(x)=A \sin (2 \pi (x + 1/4))\). \(j_\mathrm{lower}=0.218,\ j_\mathrm{upper}=1.750\ (A=0.5);\ j_\mathrm{lower}=0,\ j_\mathrm{upper}=3.203\ (A=5)\)

9 Asymptotic Behavior of Solutions as \(j \rightarrow 0\) and \(j \rightarrow +\infty \)

In Sect. 5.1, we studied semiconcave solutions of (1.1) with a current level \(j>0\). Here, we continue the analysis of the decreasing nonlinearity, \(g(m)=-m,\) and examine the asymptotic behavior of semiconcave solutions as \(j \rightarrow 0\) and \(j \rightarrow \infty \).

As before, we assume that V has a single maximum at 0. First, we address the case \(j\rightarrow \infty \).

Proposition 9.1

For \(j>0\), let \((u_j,m_j,{\overline{H}}_j)\) solve (5.1). We have that

  1. (i)

    \(\lim \limits _{j \rightarrow \infty } {\overline{H}}_j=\infty \);

  2. (ii)

    For \(x \in {\mathbb {T}}\), \(\lim \limits _{j \rightarrow \infty } m_j(x)=1\), \(\lim \limits _{j \rightarrow \infty } u_j(x)=0\), and \(\lim \limits _{j\rightarrow \infty }p_j=\infty \).

Proof

  1. (i)

    According to (5.2), we have that \({\overline{H}}_j \geqslant \frac{3j^{2/3}}{2}+\max \limits _{{\mathbb {T}}} V\). Thus, \(\lim \limits _{j \rightarrow \infty } {\overline{H}}_j=\infty \).

  2. (ii)

For \(j\geqslant j_\mathrm{upper}\), solutions of (5.1) are given by (5.5). Hence, \(m_j\) consists only of the \(m^-\) branch. Thus, \(m_j(x) \leqslant j^{2/3}\), which yields \(\frac{j^2}{m_j(x)^2}\geqslant m_j(x)\). Therefore, \(\frac{j^2}{2m_j(x)^2}+m_j(x)\leqslant \frac{3j^2}{2m_j(x)^2}\). Consequently, using this inequality in (5.1), we get

    $$\begin{aligned} \frac{j}{\sqrt{2({\overline{H}}_j-V(x))}} \leqslant m_j(x)\leqslant \frac{\sqrt{3}j}{\sqrt{2({\overline{H}}_j-V(x))}}. \end{aligned}$$
    (9.1)

Integrating the previous inequality and taking into account that \(\int \limits _{{\mathbb {T}}}m_j(x)\hbox {d}x=1\), we get

$$\begin{aligned} \int _{{\mathbb {T}}}\frac{j}{\sqrt{2({\overline{H}}_j-V(x))}} \hbox {d}x \leqslant 1\leqslant \int _{{\mathbb {T}}}\frac{\sqrt{3}j}{\sqrt{2({\overline{H}}_j-V(x))}}\hbox {d}x. \end{aligned}$$
(9.2)

Because \({\overline{H}}_j\) converges to \(\infty \) and V is bounded, for every \(x,y \in {\mathbb {T}}\) we have that

$$\begin{aligned} \lim \limits _{j\rightarrow \infty }\frac{\sqrt{2({\overline{H}}_j-V(y))}}{\sqrt{2({\overline{H}}_j-V(x))}}=1. \end{aligned}$$

Hence, for large enough j, we have

$$\begin{aligned} \sqrt{2({\overline{H}}_j-V(x))} \leqslant 2 \sqrt{2({\overline{H}}_j-V(y))},\quad x,y \in {\mathbb {T}}. \end{aligned}$$
(9.3)

Let \(\bar{x}\) be such that

$$\begin{aligned} \frac{j}{\sqrt{2({\overline{H}}_j-V(\bar{x}))}}=\int _{{\mathbb {T}}}\frac{j}{\sqrt{2({\overline{H}}_j-V(x))}} \hbox {d}x. \end{aligned}$$

Then, by (9.1), (9.3), and (9.2), we get

$$\begin{aligned} m_j(x)\leqslant \frac{\sqrt{3}j}{\sqrt{2({\overline{H}}_j-V(x))}}\leqslant \frac{2\sqrt{3}j}{\sqrt{2({\overline{H}}_j-V(\bar{x}))}}\leqslant 2\sqrt{3}. \end{aligned}$$

Similarly, we have

$$\begin{aligned} m_j(x)\geqslant \frac{j}{\sqrt{2({\overline{H}}_j-V(x))}}\geqslant \frac{j}{2\sqrt{2({\overline{H}}_j-V(\bar{x}))}}\geqslant \frac{1}{2\sqrt{3}}. \end{aligned}$$

Furthermore, factoring \(\frac{j^2}{2m_j^2(x)}\) out of the first equation in (5.1), we have that

$$\begin{aligned} \frac{j^2}{2m_j^2(x)}=\frac{{\overline{H}}_j-V(x)}{1+\frac{2m_j^3(x)}{j^2}}. \end{aligned}$$
(9.4)

Thus,

$$\begin{aligned} m_j(x)=\sqrt{1+\frac{2m_j^3(x)}{j^2}}\,\frac{j}{\sqrt{2({\overline{H}}_j-V(x))}}. \end{aligned}$$
(9.5)

Finally, because \(m_j\) is bounded, \(\frac{2m_j^3(x)}{j^2}\rightarrow 0\) uniformly as \(j\rightarrow \infty \); since the integral of \(m_j\) is 1, we get from (9.5) that

$$\begin{aligned} \lim \limits _{j \rightarrow \infty } \frac{j}{\sqrt{2({\overline{H}}_j-V(x))}}=1 \end{aligned}$$
(9.6)

for all \(x \in {\mathbb {T}}\). The preceding limit implies that \(\lim \limits _{j \rightarrow \infty }m_j(x)=1\) for all \(x \in {\mathbb {T}}\). In fact, (9.6) gives precise asymptotics of \({\overline{H}}_j\), namely

$$\begin{aligned} \lim \limits _{j \rightarrow \infty } \frac{2{\overline{H}}_j}{j^2}=1. \end{aligned}$$
(9.7)

Now, we compute the limit of \(u_j(x)\). We have that \((u_j)_x=\frac{j}{m_j(x)}-p_j,\) where \(p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y\). From (5.1), we have that \(\frac{j}{m_j(x)}=\sqrt{2({\overline{H}}_j-V(x)-m_j(x))}\).

So,

$$\begin{aligned} \left| \frac{j}{m_j(x)}-\frac{j}{m_j(y)}\right|&=\left| \sqrt{2({\overline{H}}_j-V(x)-m_j(x))}-\sqrt{2({\overline{H}}_j-V(y)-m_j(y))}\right| \\&\leqslant \frac{|m_j(x)-m_j(y)|+|V(x)-V(y)|}{\sqrt{2({\overline{H}}_j-\max \limits _{{\mathbb {T}}}V - j^{2/3})}}\\&\leqslant \frac{2\sqrt{3}+\text {osc}V}{j^{1/3}}\rightarrow 0, \end{aligned}$$

as \(j\rightarrow \infty \). Hence,

$$\begin{aligned} |(u_j)_x|&=\left| \frac{j}{m_j(x)}-p_j\right| =\left| \int \limits _{{\mathbb {T}}}\left( \frac{j}{m_j(x)}-\frac{j}{m_j(y)}\right) \hbox {d}y\right| \\&\leqslant \int \limits _{{\mathbb {T}}}\left| \frac{j}{m_j(x)}-\frac{j}{m_j(y)}\right| \hbox {d}y\rightarrow 0, \end{aligned}$$

when \(j \rightarrow \infty \). Consequently, \(\lim \limits _{j \rightarrow \infty }u_j(x)=\lim \limits _{j \rightarrow \infty } \int \limits _{0}^{x} (u_j)_x(y)\hbox {d}y=0\). \(\square \)
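The limits above can be checked numerically. The sketch below (ours, assuming the \(m^-\)-branch structure (5.5)) computes \({\overline{H}}_j\) for a few large currents by enforcing \(\int _{{\mathbb {T}}}m_j=1\) and verifies that \(m_j\) flattens to 1 while \(2{\overline{H}}_j/j^2\rightarrow 1\).

```python
import numpy as np
from scipy.optimize import brentq

# For j >= j_upper, Hbar_j solves int_T m^-(x; Hbar) dx = 1, where m^-(x) is
# the root of j^2/(2 m^2) + m = Hbar - V(x) with m <= j^{2/3}.
x = (np.arange(400) + 0.5) / 400
V = 0.5 * np.sin(2 * np.pi * (x + 0.25))

def m_minus(j, Hbar):
    f = lambda m, c: j**2 / (2 * m**2) + m - c
    return np.array([brentq(f, 1e-12, j ** (2 / 3), args=(Hbar - v,)) for v in V])

for j in [5.0, 20.0, 80.0]:
    Hcr = 1.5 * j ** (2 / 3) + V.max()
    # the mass of the m^- branch decreases in Hbar, so bisection applies
    Hbar = brentq(lambda H: m_minus(j, H).mean() - 1.0, Hcr, Hcr + j**2)
    m = m_minus(j, Hbar)
    print(j, m.min(), m.max(), 2 * Hbar / j**2)  # m_j -> 1, 2*Hbar_j/j^2 -> 1
```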

Next, we study the behavior of solutions as \(j \rightarrow 0\).

Proposition 9.2

We have that

  1. (i)

    \(\lim \limits _{j \rightarrow 0} {\overline{H}}_j=\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) ={\overline{H}}_0\);

  2. (ii)

    if \(1+\int \limits _{{\mathbb {T}}} V > \max \limits _{{\mathbb {T}}} V\), then

    $$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)=1+\int \limits _{{\mathbb {T}}} V-V(x),\quad \lim \limits _{j \rightarrow 0} u_j(x)=0,\quad \text {and}\ \lim \limits _{j\rightarrow 0}p_j=0 \end{aligned}$$

    for all \(x \in {\mathbb {T}}\);

  3. (iii)

    if \(1+\int \limits _{{\mathbb {T}}} V \leqslant \max \limits _{{\mathbb {T}}} V\), then

$$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)=m_0^{d,1}(x),\quad \lim \limits _{j \rightarrow 0} u_j(x)=u_0^{d,1}(x),\quad \text {and}\quad \lim \limits _{j\rightarrow 0}p_j=\int \limits _{0}^{d} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\,\hbox {d}x \end{aligned}$$

for all \(x \in {\mathbb {T}}\), where \(m_0^{d,1}\) and \(u_0^{d,1}\) are given by (5.11) and (5.12).

Proof

  1. (i)

    There are two possible cases: \(j_\mathrm{lower}>0\) and \(j_\mathrm{lower}=0\). If \(j_\mathrm{lower}=0\), then \(\alpha ^+(j)>1\) for all \(j>0\) and \(\alpha ^{-}(j)<1\) for small enough j. Hence, by the results in Sect. 5.1, we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}=\frac{3}{2}j^{2/3}+\max V\). Thus, \(\lim \limits _{j \rightarrow 0}{\overline{H}}_j=\max V\). On the other hand, \(j_\mathrm{lower}=0\) means that \(\lim \limits _{j\rightarrow 0}\alpha ^+(j)\geqslant 1\). Consequently, by Proposition 8.1, \(\max \limits _{{\mathbb {T}}} V-\int \limits _{{\mathbb {T}}}V\geqslant 1\). Thus, \(\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) =\max \limits _{{\mathbb {T}}} V=\lim \limits _{j\rightarrow 0}{\overline{H}}_j\).

    If \(j_\mathrm{lower}>0\), then \(\alpha ^+(j)<1\) for \(j<j_\mathrm{lower}\) and solutions \((m_j,u_j,{\overline{H}}_j)\) are given by (5.4). Hence, \(m_j(x)\geqslant j^{2/3}\) and

    $$\begin{aligned} 0<\frac{j^2}{2m_j(x)^2}\leqslant \frac{j^{2/3}}{2}. \end{aligned}$$
    (9.8)

    Therefore,

    $$\begin{aligned} \lim \limits _{j\rightarrow 0}{\overline{H}}_j=\lim \limits _{j\rightarrow 0}\left( \int \limits _{{\mathbb {T}}} V+\int \limits _{{\mathbb {T}}}m_j+\int \limits _{{\mathbb {T}}}\frac{j^2}{2m_j(x)^2}\right) = \int \limits _{{\mathbb {T}}} V+ 1. \end{aligned}$$

    But \(\max \limits _{{\mathbb {T}}} V-\int \limits _{{\mathbb {T}}}V=\lim \limits _{j\rightarrow 0} \alpha ^+(j)<1\), so \(\max \left( \max \limits _{{\mathbb {T}}} V,1+\int \limits _{{\mathbb {T}}} V\right) =1+\int \limits _{{\mathbb {T}}} V=\lim \limits _{j\rightarrow 0}{\overline{H}}_j\).

  2. (ii)

    Since \(\lim \limits _{j\rightarrow 0}\alpha ^+(j)=\max V -\int V\), we have that the condition \(1+\int \limits _{{\mathbb {T}}} V> \max \limits _{{\mathbb {T}}} V\) is equivalent to the condition \(j_\mathrm{lower}>0\). In this case, we have that \(\lim \limits _{j \rightarrow 0}{\overline{H}}_j=1+\int \limits _{{\mathbb {T}}}V\). Therefore, from (9.8), we have that

    $$\begin{aligned} \lim \limits _{j \rightarrow 0} m_j(x)=\lim \limits _{j \rightarrow 0} \left( {\overline{H}}_j-V(x)-\frac{j^2}{2m_j^2(x)}\right) =1+\int \limits _{{\mathbb {T}}} V-V(x). \end{aligned}$$

    Furthermore,

$$\begin{aligned} \lim \limits _{j \rightarrow 0} u_j(x)=\lim \limits _{j \rightarrow 0}\int \limits _{0}^x \left( \frac{j}{m_j(y)}-\int \limits _{{\mathbb {T}}}\frac{j}{m_j(z)}\hbox {d}z\right) \hbox {d}y=0. \end{aligned}$$
  3. (iii)

    The inequality \(1+\int \limits _{{\mathbb {T}}} V \leqslant \max \limits _{{\mathbb {T}}} V\) is equivalent to \(j_\mathrm{lower}=0\). Hence, for \(0<j<j_\mathrm{upper}\) solutions are given by (5.6).

    Because \(0<m_j^-(x)\leqslant j^{2/3}\), \(\lim \limits _{j \rightarrow 0} m^-_j(x)=0\). Furthermore, \(m_j^+(x)\geqslant j^{2/3}\). Thus,

    $$\begin{aligned} \lim \limits _{j\rightarrow 0}\frac{j^2}{2(m_j^+(x))^2}=0. \end{aligned}$$

    Therefore,

    $$\begin{aligned} \lim \limits _{j\rightarrow 0}m_j^+(x)=\lim \limits _{j \rightarrow 0}\left( {\overline{H}}_j-V(x)-\frac{j^2}{2(m_j^+(x))^2}\right) =\max V-V(x). \end{aligned}$$

    Suppose that the jump points, \(d_j,\) of \(m_j(x)\) (see (5.6)) converge to some \(d \in [0,1]\) through a subsequence. Then, through that subsequence \(\lim \limits _{j\rightarrow 0} m_j(x)=m_0^{d,1}(x),\) where \(m_0^{d,1}\) is defined in (5.11). Hence,

    $$\begin{aligned} 1=\int \limits _{{\mathbb {T}}} m_0^{d,1}(x)\hbox {d}x=\int \limits _d^1 \left( \max V- V(x)\right) \hbox {d}x. \end{aligned}$$

    Because V has a single maximum, d is defined uniquely by the previous equation. Hence, \(\lim \limits _{j\rightarrow 0}d_j=d\) and \(\lim \limits _{j\rightarrow 0} m_j(x)=m_0^{d,1}(x)\), globally (not only through some subsequence). Consequently,

    $$\begin{aligned} \lim \limits _{j \rightarrow 0} u_j(x)=u_0^{d,1}(x),\ \lim \limits _{j\rightarrow 0}p_j=p_0^{d,1}=\int \limits _{0}^{d} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x, \end{aligned}$$

where d is such that \(\int \limits _d^1 \left( \max V- V(x)\right) \hbox {d}x=1\).

\(\square \)
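For a concrete instance of (iii), take \(V(x)=5\sin (2\pi (x+1/4))\) (the \(A=5\) example of Sect. 8, for which \(1+\int _{{\mathbb {T}}}V\leqslant \max _{{\mathbb {T}}}V\)). The jump point d of the limit density solves \(\int _d^1(\max V-V(x))\,\hbox {d}x=1\); the integral is available in closed form, and the root can be found by bisection, as in the following small sketch (ours).

```python
import numpy as np
from scipy.optimize import brentq

# Mass condition int_d^1 (max V - V(x)) dx = 1 for V(x) = 5 sin(2 pi (x+1/4)):
# the integral evaluates to 5(1-d) - (5/(2 pi)) cos(2 pi (d + 1/4)).
F = lambda d: 5.0 * (1.0 - d) - (5.0 / (2 * np.pi)) * np.cos(2 * np.pi * (d + 0.25)) - 1.0
d = brentq(F, 0.0, 1.0)   # F(0) = 4 > 0, F(1) = -1 < 0, and F is decreasing
print(d)                  # jump point of the limit density m_0^{d,1}
```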

From Proposition 9.2, we see that we recover only part of the solutions for \(j=0\) as limits of solutions for \(j>0\). If we consider the solutions of (5.1) for which m takes negative values, we recover all solutions described in Sect. 5.2. Indeed, the first equation in (5.1) is a cubic equation in m(x). Thus, for every \(x \in {\mathbb {T}},\) there are three solutions: two positive and one negative. Because we are interested in the MFG interpretation of (5.1), we neglect solutions with negative m. However, we can construct solutions for (5.1) without the constraint \(m>0\). As j converges to 0, the negative parts of these solutions converge to 0, and, in the limit, we obtain all nonnegative solutions of (5.9) given in Proposition 5.4.

10 Properties of \({\overline{H}}_j\)

In this section, we study various properties of the effective Hamiltonian, \({\overline{H}}_j\), as a function of j. In the following proposition, we collect several properties of \({\overline{H}}_j\).

Proposition 10.1

We have that

  1. (i)

    For every \(j \in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}_j\), such that (1.1) has solutions with a current level j;

  2. (ii)

    \({\overline{H}}_j\) is even; that is, \({\overline{H}}_j={\overline{H}}_{-j}\);

  3. (iii)

    \({\overline{H}}_j\) is continuous;

  4. (iv)

\({\overline{H}}_j\) is increasing on \((0,\infty )\) and decreasing on \((-\infty ,0)\);

  5. (v)

    \(\min \limits _{j \in {\mathbb {R}}}{\overline{H}}_j={\overline{H}}_0=\max \left( \max \limits _{{\mathbb {T}}} V, 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\right) \);

  6. (vi)

    \(\lim \limits _{|j|\rightarrow \infty } \frac{{\overline{H}}_j}{j^2/2}=1\).

Fig. 14 \(\bar{H}_j\) for \(V(x)=\frac{1}{2} \sin \big (2 \pi \big (x + \frac{1}{4}\big )\big )\)

Proof

  1. (i)

    This follows from Propositions 5.1 and 5.4.

  2. (ii)

This follows from the fact that \(j\mapsto \frac{j^2}{2t^2}+t\) is an even function of j for all \(t>0\).

  3. (iii)

    Continuity of \({\overline{H}}_j\) follows from the continuity of the mapping \((j,t) \mapsto j^2/2t^2+t\) for \(j,t>0\).

  4. (iv)

    Since \({\overline{H}}_j\) is even, it suffices to show that it is increasing on \((0,+\infty )\). First, we show that \({\overline{H}}_j\) is increasing on \((j_\mathrm{upper},\infty )\). For that, we fix \(j_0 >j_\mathrm{upper}\). We have that \({\overline{H}}_{j_0}\geqslant {\overline{H}}_{j_0}^\mathrm{cr}\). Hence, for any \(j_\mathrm{upper}<j<j_0\) we have that \({\overline{H}}_{j_0}\geqslant {\overline{H}}_{j_0}^\mathrm{cr}>{\overline{H}}_{j}^\mathrm{cr}\). Therefore, the function, \(\tilde{m}_j\), determined by

    $$\begin{aligned} {\left\{ \begin{array}{ll} \frac{j^2}{2(\tilde{m}_j(x))^2}+\tilde{m}_j(x)={\overline{H}}_{j_0}-V(x),\\ \tilde{m}_j(x) \leqslant j^{2/3}, \end{array}\right. } \end{aligned}$$
    (10.1)

    is well defined for all \(j_\mathrm{upper}<j<j_0\). Next, we show that the mapping

    $$\begin{aligned} j \mapsto \tilde{m}_j(x) \end{aligned}$$

    is increasing in \((j_\mathrm{upper},j_0)\) for all \(x \in {\mathbb {T}}\). Indeed, fix \(x\in {\mathbb {T}}\) and differentiate (10.1) in j to obtain

$$\begin{aligned} \frac{d\tilde{m}_j(x)}{dj}=\frac{j\tilde{m}_j(x)}{j^2-\tilde{m}_j(x)^3}>0. \end{aligned}$$

    Hence, \(\tilde{m}_j(x)<m_{j_0}(x),\ x\in {\mathbb {T}}\). Accordingly,

    $$\begin{aligned} \int \limits _{{\mathbb {T}}} \tilde{m}_j(x)\hbox {d}x<\int \limits _{{\mathbb {T}}} m_{j_0}(x)\hbox {d}x=1. \end{aligned}$$

Finally, because the \(m^-\) branch is decreasing in \({\overline{H}}\), the previous inequality implies \({\overline{H}}_j<{\overline{H}}_{j_0}\).

    The monotonicity of \({\overline{H}}_j\) on \((0,j_\mathrm{lower})\) (in the case \(j_\mathrm{lower}>0\)) can be proven analogously.

Next, for \(j_\mathrm{lower}<j<j_\mathrm{upper}\), we have that \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}=\frac{3}{2}j^{2/3}+\max \limits _{{\mathbb {T}}} V\), which is evidently increasing in j.

  5. (v)

    This follows from the previous properties of \({\overline{H}}_j\) and Proposition 9.2.

  6. (vi)

    We have proven this in (9.7).

\(\square \)

In Fig. 14 we plot \({\overline{H}}_j\) as a function of j for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).
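Fig. 14 can be reproduced by combining the closed-form expression \({\overline{H}}_j={\overline{H}}_j^\mathrm{cr}\) in the critical regime with a one-dimensional root-find outside it. The following sketch (ours, assuming the branch structure of Sect. 5.1 and the thresholds computed in Sect. 8 for this V) is one way to do this.

```python
import numpy as np
from scipy.optimize import brentq

# Hbar_j over the three regimes: closed form for j_lower < j < j_upper,
# otherwise solve int_T m^{+/-}(x; Hbar) dx = 1 on the m^+ branch
# (j < j_lower) or the m^- branch (j > j_upper).
x = (np.arange(400) + 0.5) / 400
V = 0.5 * np.sin(2 * np.pi * (x + 0.25))
j_lower, j_upper = 0.218, 1.750          # thresholds for this V (Sect. 8)

def branch_mass(j, H, minus):
    f = lambda m, c: j**2 / (2 * m**2) + m - c
    if minus:
        m = [brentq(f, 1e-12, j ** (2 / 3), args=(H - v,)) for v in V]
    else:
        m = [brentq(f, j ** (2 / 3), 2 * H + 2, args=(H - v,)) for v in V]
    return np.mean(m)

def Hbar(j):
    Hcr = 1.5 * j ** (2 / 3) + V.max()
    if j_lower < j < j_upper:
        return Hcr                        # critical regime (discontinuous m)
    minus = j >= j_upper
    return brentq(lambda H: branch_mass(j, H, minus) - 1.0, Hcr, Hcr + j**2 + 2)

print([round(Hbar(j), 3) for j in (0.1, 1.0, 3.0)])
```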

11 Analysis in Terms of p

Now, we analyze (1.1) in terms of the variable p. If g(m) is increasing, for every \(p\in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}(p)\), for which (1.1) has a solution. This solution is unique if \(m>0\) (see, e.g., [38]). Here, we show that, if g(m) is not increasing, there may be different values of \({\overline{H}}(p)\) for which (1.1) has a semiconcave solution. The uniqueness of \({\overline{H}}\) depends both on the monotonicity of g and on the properties of V. For example, if \(g(m)=-m,\) \({\overline{H}}\) is uniquely determined by p if and only if V has a single maximum. Moreover, our prior characterization of semiconcave solutions of (1.1) implies that, for V with a single maximum point, (1.1) admits a unique semiconcave solution for every \(p\in {\mathbb {R}}\).

We start with an auxiliary lemma.

Lemma 11.1

Let \(x=0\) be the single maximum point of V.

  1. (i)

    For every \(j\ne 0,\) there exists a unique number, \(p_{j}\), such that (1.1) has a semiconcave solution. Furthermore, the map \(j\mapsto p_{j}\) is increasing on \((0,\infty )\) and \((-\infty ,0)\).

  2. (ii)

    If \(1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x \geqslant \max \limits _{{\mathbb {T}}} V\), then \(p_0=0\) is the unique number for which (1.1) has a semiconcave solution with \(j=0.\) Moreover, \(\lim \limits _{j\rightarrow 0}p_{j}=0\).

  3. (iii)

    If \(1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x < \max \limits _{{\mathbb {T}}} V\), then

    $$\begin{aligned} p_j>\int \limits _{0}^{d_1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x,\quad j>0 \end{aligned}$$

    and

    $$\begin{aligned} p_j<-\int \limits _{d_2}^{1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x,\quad j<0, \end{aligned}$$

    where \(d_1,d_2 \in (0,1)\) are such that

    $$\begin{aligned} \int \limits _{0}^{d_1} (\max \limits _{{\mathbb {T}}} V-V(x))\hbox {d}x=\int \limits _{d_2}^{1} (\max \limits _{{\mathbb {T}}} V-V(x))\hbox {d}x=1. \end{aligned}$$

    Consequently, (1.1) has a semiconcave solution for \(j=0\) if and only if

    $$\begin{aligned} -\int \limits _{d_2}^{1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x\leqslant p\leqslant \int \limits _{0}^{d_1} \sqrt{2(\max \limits _{{\mathbb {T}}} V-V(x))}\hbox {d}x. \end{aligned}$$
    (11.1)

Proof

(i) According to Proposition 5.1, for every \(j>0,\) there exists a unique number, \(p_j\), such that (1.1) has a semiconcave solution with a current level j. Let \((u_j,m_j,{\overline{H}}_j)\) be the solution of (1.1) given by (5.4), (5.5) or (5.6). Because

$$\begin{aligned} p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y, \end{aligned}$$

to prove that \(p_j\) is increasing it suffices to show that \(j\mapsto \frac{j}{m_j(x)}\) is increasing for all \(x\in {\mathbb {T}}\). First, we prove the monotonicity for \(j_\mathrm{lower}<j<j_\mathrm{upper}\). Let \(n_j(x):=\frac{j}{m_j(x)}\). We have that

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{n_j^2(x)}{2}+\frac{j}{n_j(x)}={\overline{H}}_j-V(x),\quad x\in {\mathbb {T}};\\ n_j(x)=\frac{j}{m^{-}_{j}(x)}\chi _{[0,d_j)}+\frac{j}{m^{+}_{j}(x)}\chi _{[d_j,1)}. \end{array}\right. } \end{aligned}$$
(11.2)

Because the maps \(j\mapsto m_j^+(x)\) and \(j\mapsto m_j^-(x)\) are increasing for all \(x \in {\mathbb {T}}\), the map \(j\mapsto d_j\) is also increasing. Assume that j is such that \(d_j\ne x\). We differentiate in j the first equation in (11.2) and take into account that \({\overline{H}}_j=\frac{3}{2}j^{2/3}+\max \limits _{{\mathbb {T}}} V\) for \(j_\mathrm{lower}<j<j_\mathrm{upper}\) to get

$$\begin{aligned} \frac{dn_j(x)}{dj}=\frac{{\overline{H}}'_j-\frac{1}{n_j(x)}}{n_j(x)-\frac{j}{n_j^2(x)}}=\frac{\frac{1}{j^{1/3}}-\frac{1}{n_j(x)}}{n_j(x)-\frac{j}{n_j^2(x)}}. \end{aligned}$$

Let \(j^x\) be such that \(x=d_{j^x}\). For \(j>j^x,\) we have \(d_j>x\). Thus, \(n_j(x)=j/m_j^-(x)>j^{1/3}\), which implies \(\frac{dn_j(x)}{dj}>0\). Similarly, for \(j<j^x,\) we have \(d_j<x\). Therefore, \(n_j(x)=j/m_j^+(x)<j^{1/3}\), which implies \(\frac{dn_j(x)}{dj}>0\).

Next, we analyze the behavior of \(n_j\) at \(j^x\). For \(j>j^x,\) \(n_j(x)=\frac{j}{m_j^-(x)},\) and, for \(j<j^x,\) \(n_j(x)=\frac{j}{m_j^+(x)}\). Thus, \(n_j(x)\) takes a positive jump, \(\frac{j}{m_j^-(x)}-\frac{j}{m_j^+(x)}>0,\) at \(j=j^x\). Therefore, \(j\mapsto n_j(x)\) has positive derivatives whenever \(j\ne j^x\) and a positive jump at \(j=j^x\). It is thus increasing for \(j_\mathrm{lower}<j<j_\mathrm{upper}\).

Next, we show that \(j\mapsto n_j(x)\) is increasing on \((j_\mathrm{upper},\infty )\). As before, we have

$$\begin{aligned} \frac{dn_j(x)}{dj}=\frac{{\overline{H}}'_j-\frac{1}{n_j(x)}}{n_j(x)-\frac{j}{n_j^2(x)}}. \end{aligned}$$

Because \(m_j(x)<j^{2/3}\), we have \(n_j(x)>j^{1/3}\). Therefore, if \({\overline{H}}'_j\geqslant 1/n_j(x),\) the map \(j\mapsto n_j(x)\) is increasing.

Fix \(j_0\) and, for \(j>j_0,\) consider \(\tilde{H}_j:={\overline{H}}_{j_0}+(j-j_0)\frac{\max \limits _{{\mathbb {T}}} m_{j_0}(x)}{j_0}\). Define

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{j^2}{2\tilde{m}_j(x)^2}+\tilde{m}_j(x)=\tilde{H}_j-V(x),\quad x\in {\mathbb {T}};\\ \tilde{m}_j(x)\leqslant j^{2/3}. \end{array}\right. } \end{aligned}$$

Note that \(\tilde{H}_{j_0}={\overline{H}}_{j_0}\) and \(\tilde{m}_{j_0}=m_{j_0}\). Now, we compute the derivative of the map \(j \rightarrow \tilde{m}_j(x)\) at \(j=j_0\). Because \(m_{j_0}<j_0^{2/3}\), we have

$$\begin{aligned} \frac{d\tilde{m}_j(x)}{dj}\Bigg |_{j=j_0}=\frac{\tilde{H}'_j-\frac{j}{\tilde{m}_j^2}}{1-\frac{j^2}{\tilde{m}_j^3}}\Bigg |_{j=j_0}=\frac{\frac{\max \limits _{{\mathbb {T}}} m_{j_0}}{j_0}-\frac{j_0}{m_{j_0}^2}}{1-\frac{j_0^2}{m_{j_0}^3}}>0. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{d}{dj}\int \limits _{{\mathbb {T}}}\tilde{m}_j(x)\hbox {d}x \Bigg |_{j=j_0}=\int \limits _{{\mathbb {T}}}\frac{d\tilde{m}_j(x)}{dj}\hbox {d}x>0. \end{aligned}$$

Hence, for \(j>j_0\) close to \(j_0,\) we get

$$\begin{aligned} \int \limits _{{\mathbb {T}}}\tilde{m}_j(x)\hbox {d}x>1. \end{aligned}$$

Consequently, for those values of the current, we have that \({\overline{H}}_j\geqslant \tilde{H}_j\). Hence,

$$\begin{aligned} {\overline{H}}'_{j_0}\geqslant \tilde{H}'_{j_0}=\frac{\max \limits _{{\mathbb {T}}} m_{j_0}}{j_0}=\max \limits _{{\mathbb {T}}} \frac{1}{n_{j_0}}, \end{aligned}$$

which completes the monotonicity proof for \(j>j_\mathrm{upper}\). The monotonicity for \(j<j_\mathrm{lower}\) is similar.

(ii) & (iii) These claims follow from the monotonicity of \(j \mapsto p_j\) and Propositions 5.4 and 9.2. \(\square \)

Fig. 15 \(p_j\) for \(V(x)=\frac{1}{2} \sin \big (2 \pi \big (x + \frac{1}{4}\big )\big )\)

In Fig. 15, we plot p as a function of j for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).
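A sketch of how the middle branch of Fig. 15 can be computed (our code, assuming the one-jump structure (5.6)): build \(m^{\pm }_j\) on a grid, locate the jump point \(d_j\) from the mass constraint, and evaluate \(p_j=\int _{{\mathbb {T}}}j/m_j(y)\hbox {d}y\).

```python
import numpy as np
from scipy.optimize import brentq

# One-jump solution for j_lower < j < j_upper: m_j = m^- on [0, d_j) and
# m_j = m^+ on [d_j, 1), with d_j fixed by int_T m_j = 1; then
# p_j = int_T j / m_j(y) dy.
N = 1000
x = (np.arange(N) + 0.5) / N
V = 0.5 * np.sin(2 * np.pi * (x + 0.25))

def p_of_j(j):
    Hcr = 1.5 * j ** (2 / 3) + V.max()
    f = lambda m, c: j**2 / (2 * m**2) + m - c
    mm = np.array([brentq(f, 1e-12, j ** (2 / 3), args=(Hcr - v,)) for v in V])
    mp = np.array([brentq(f, j ** (2 / 3), 2 * Hcr + 1, args=(Hcr - v,)) for v in V])
    # mass of m^- on [0, x_k) plus m^+ on [x_k, 1), as a function of k
    mass = (np.concatenate(([0.0], np.cumsum(mm)))
            + np.concatenate((np.cumsum(mp[::-1])[::-1], [0.0]))) / N
    k = np.searchsorted(-mass, -1.0)     # mass decreases in k; find mass ~ 1
    return (np.sum(j / mm[:k]) + np.sum(j / mp[k:])) / N

print(p_of_j(0.5))                       # one point on the middle branch of Fig. 15
```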

Proposition 11.2

Let \(x=0\) be the only maximum point of V. Then,

  1. (i)

    for every \(p \in {\mathbb {R}},\) there exists a unique number, \({\overline{H}}(p),\) for which (1.1) has a semiconcave solution;

  2. (ii)

    for every \(p\in {\mathbb {R}}, \) (1.1) has a unique semiconcave solution;

  3. (iii)

if \(\max \limits _{{\mathbb {T}}} V> 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\), then \({\overline{H}}(p)\) is flat (constant) in a neighborhood of the origin;

  4. (iv)

    \({\overline{H}}(p)\) is increasing on \((0,\infty )\) and decreasing on \((-\infty ,0)\). Thus,

    $$\begin{aligned} \min \limits _{p\in {\mathbb {R}}} {\overline{H}}(p)={\overline{H}}(0)=\max \left( \max \limits _{{\mathbb {T}}} V, 1+\int \limits _{{\mathbb {T}}}V(x)\hbox {d}x\right) ; \end{aligned}$$
  5. (v)

    \(\lim \limits _{|p| \rightarrow \infty } \frac{{\overline{H}}(p)}{p^2/2}=1\).

Proof

(i) & (ii) From Lemma 11.1 and (ii) in Proposition 9.1, we have that for every p,  there exists a unique j such that (1.1) has semiconcave solutions. From Proposition 10.1, we have that for every j there exists a unique number, \({\overline{H}}\), such that (1.1) has a semiconcave solution. Therefore, for every p,  the constant \({\overline{H}}\) is determined uniquely. Moreover, if \(j=j(p)\ne 0\), Proposition 5.1 gives that (1.1) has a unique semiconcave solution.

Furthermore, if \(j=j(p)=0,\) we have that (1.1) has semiconcave solutions by Proposition 5.4. Moreover, the observations in the proof of Proposition 5.3 yield that the solutions given in Proposition 5.4 are the only semiconcave solutions when V admits a single maximum at \(x=0\). Therefore, in Case (i) of Proposition 5.4, we get that \(p=0\), and the only semiconcave solution is given by (5.10). Next, in Case (ii) of Proposition 5.4, for a given \(d_1 \in (0,1)\) there exists a unique \(d_2\in (0,1)\) such that (5.13) holds, and the mapping \(d_1\mapsto d_2\) is increasing. Consequently, the mapping \(d_1\mapsto p_0^{d_1,d_2}\) is also increasing. Therefore, for a given p there exists a unique pair \((d_1,d_2)\) such that \(p=p_0^{d_1,d_2}\). Hence, there is a unique semiconcave solution corresponding to p.

(iii) From (iii) in Lemma 11.1, we have that if p satisfies (11.1), then \({\overline{H}}(p)={\overline{H}}_0\).

(iv) This follows from (i) in Lemma 11.1 and (iv) and (v) in Proposition 10.1.

(v) This follows from (vi) in Proposition 10.1, (ii) in Proposition 9.1, and the formula \(p_j=\int \limits _{{\mathbb {T}}}\frac{j}{m_j(y)}\hbox {d}y\).

\(\square \)

In Fig. 16, we show \({\overline{H}}(p)\) for \(V(x)=\frac{1}{2}\sin \big (2\pi \big (x+\frac{1}{4}\big )\big )\).

Fig. 16 \(\bar{H}(p)\) for \(V(x)=\frac{1}{2} \sin \big (2 \pi \big (x + \frac{1}{4}\big )\big )\)

Remark 11.3

We conjecture that, if V has only one maximum point, \({\overline{H}}(p)\) is convex. Let \(p_1<p_2\) and let \((u_1,m_1,{\overline{H}}(p_1))\) and \((u_2,m_2,{\overline{H}}(p_2))\) solve (1.1) for \(p=p_1\) and \(p=p_2\), respectively. Consider the trajectories, \(y_1,y_2,\) determined by

$$\begin{aligned} \dot{y}_i(t)=-(u_i)_x(y_i(t))-p_i,\quad y_i(0)=0,\quad i=1,2. \end{aligned}$$

As in weak Kolmogorov–Arnold–Moser (KAM) or classical KAM theory, we would like to show that

$$\begin{aligned} \lim \limits _{t \rightarrow \infty } \frac{y_i(t)}{t}=\lim \limits _{t\rightarrow \infty } \dot{y}_i(t)=-D_p{\overline{H}}(p_i),\quad i=1,2. \end{aligned}$$
(11.3)

Furthermore, let \(j_1\) and \(j_2\) be the current values corresponding to \(p_1\) and \(p_2\). From the proof of (i) in Lemma 11.1, we have that

$$\begin{aligned} -(u_1)_x(y)-p_1=-\frac{j_1}{m_1(y)} \geqslant -\frac{j_2}{m_2(y)}=-(u_2)_x(y)-p_2. \end{aligned}$$

Hence, if (11.3) holds, we get

$$\begin{aligned} y_1(t)\geqslant y_2(t),\quad \dot{y}_1(t)\geqslant \dot{y}_2(t),\quad t>0, \end{aligned}$$

which implies \(D_p{\overline{H}}(p_1) \leqslant D_p{\overline{H}}(p_2)\). Thus, \({\overline{H}}(p)\) is convex.

Remark 11.4

Assume that V is non-constant and has at least two maximum points. Then, from Proposition 5.2, we know that there are infinitely many two-jump semiconcave solutions for a fixed current level \(j\in (j_\mathrm{lower},j_\mathrm{upper})\). Therefore, for every \(j\in (j_\mathrm{lower},j_\mathrm{upper})\) there are infinitely many values of p such that (1.1) has semiconcave solutions corresponding to the same constant, \({\overline{H}}={\overline{H}}_j^\mathrm{cr}\). This observation hints that there may be \(p\in {\mathbb {R}}\) such that (1.1) has semiconcave solutions for two different values of \({\overline{H}}\).

Indeed, for every \(j\in (j_\mathrm{lower},j_\mathrm{upper})\) there is still a unique one-jump semiconcave solution described in (iii) of Proposition 5.1. Suppose that \(p_j\) is the value of p corresponding to this one-jump solution. Then, one can prove that the mapping \(j\mapsto p_j\) is increasing and continuous by the same proof as in (i) of Lemma 11.1. Suppose that \(j_0\in (j_\mathrm{lower},j_\mathrm{upper})\) and \(p^0\) corresponds to some two-jump semiconcave solution \(m_{j_0}^{d_1,d_2}\) given in Proposition 5.2. If for some V and \(j_0\)

$$\begin{aligned} \lim \limits _{j\rightarrow j_\mathrm{lower}}p_j<p^0 <\lim \limits _{j\rightarrow j_\mathrm{upper}}p_j, \end{aligned}$$

then there exists \(j_1 \in (j_\mathrm{lower},j_\mathrm{upper})\) such that \(p_{j_1}=p^0\). Thus, for \(p=p^0=p_{j_1}\), (1.1) has semiconcave solutions for \({\overline{H}}={\overline{H}}_{j_0}^\mathrm{cr}\) and \({\overline{H}}={\overline{H}}_{j_1}^\mathrm{cr}\).

We confirm these observations numerically. We consider \(V(x)=\frac{1}{2}\sin \left( 4\pi \left( x+\frac{1}{8}\right) \right) \). For this V, \(j_\mathrm{lower}=0.218242\) and \(j_\mathrm{upper}=1.74875\). Furthermore, we fix \(j_0=0.5\) and find the two-jump solution \(m_{j_0}^{d_1,d_2}\) described in Proposition 5.2 with \(d_1=0.20626\) and \(d_2=0.70626\). The corresponding value of p is \(p_{j_0}^{d_1,d_2}=0.787246\).

Furthermore, we consider the one-jump solutions, \(m_j\), described in Proposition 5.1 and denote by \(p_j\) the corresponding values of p. Then, we find that \(p_{j_1}=p_{j_0}^{d_1,d_2}=0.787246\) for \(j_1=0.5132\). Thus, for \(p=0.787246,\) (1.1) has at least two semiconcave solutions, with \({\overline{H}}={\overline{H}}_{j_0}^\mathrm{cr}=1.44494\) and \({\overline{H}}={\overline{H}}_{j_1}^\mathrm{cr}=1.4615\). In Fig. 17, we plot the one-jump solution, \(m_{j_1}\), and the two-jump solution, \(m_{j_0}^{d_1,d_2}\).

Fig. 17 One-jump semiconcave solution for \(j_1=0.5132\) (left) and two-jump semiconcave solution for \(j_0=0.5\) (right) for \(V(x)=\frac{1}{2} \sin (4 \pi (x + 1/8))\). In both cases \(p=0.787246\)