1 Introduction

A family of Lagrangian submanifolds \(X(x,t):{\mathbb {R}}^n\times {\mathbb {R}}\rightarrow \mathbb C^n\) evolves by Lagrangian mean curvature flow if it solves

$$\begin{aligned} \begin{aligned} (X_t)^\bot =\Delta _gX=\textbf{H}, \end{aligned} \end{aligned}$$
(1.1)

where \(\textbf{H} \) denotes the mean curvature vector of the Lagrangian submanifold. The mean curvature vector of the Lagrangian submanifold \((x,Du(x))\subset \mathbb C^n\) is determined by the Lagrangian angle or phase \(\Theta \), by Harvey-Lawson [1, Proposition 2.17]. The Lagrangian angle is given by

$$\begin{aligned} \Theta =\sum _{i=1}^n \arctan \lambda _i, \end{aligned}$$
(1.2)

where \(\lambda _i\) are the eigenvalues of the Hessian \(D^2u\). This angle acts as the potential of the mean curvature vector

$$\begin{aligned} \begin{aligned} \textbf{H}=J\nabla _g\Theta , \end{aligned} \end{aligned}$$
(1.3)

where \(g=I_n+(D^2u)^2\) is the induced metric on \((x,Du(x))\), and J is the almost complex structure on \(\mathbb C^n\). Thus, Eq. (1.2) is the potential equation for prescribed Lagrangian mean curvature. When the Lagrangian phase \(\Theta \) is constant, u solves the special Lagrangian equation of Harvey-Lawson [1]. In this case, \(\textbf{H}=0\), and \((x,Du(x))\) is a volume-minimizing Lagrangian submanifold.
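
In one dimension, the Lagrangian angle has a simple geometric reading, which we record as an illustrative sketch (not taken from [1]): for the curve \((x,u'(x))\subset \mathbb C\), the phase is the inclination of the tangent line.

```latex
% n = 1 sketch: the tangent vector of the curve (x, u'(x)) is (1, u''(x)),
% so its inclination angle is
\Theta(x) = \arctan u''(x),
% which is (1.2) with the single Hessian eigenvalue \lambda_1 = u''(x).
% \Theta is constant exactly when u'' is constant, i.e. when (x, u'(x))
% is a straight line -- the one-dimensional minimal (special Lagrangian) case.
```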

After a change of coordinates, one can locally write \(X(x,t)=(x,Du(x,t))\), such that \(\Delta _gX=(J{{\bar{\nabla }}}\Theta (x,t))^\bot \), where \({{\bar{\nabla }}}=(\partial _x,\partial _y)\) is the ambient gradient. This means a local potential u(x,t) evolves by the parabolic equation

$$\begin{aligned} \begin{aligned}&u_t=\sum _{i=1}^n\arctan \lambda _i,\\&u(x,0):=u(x). \end{aligned} \end{aligned}$$
(1.4)

Symmetry reductions of (1.1) reduce (1.4) to an elliptic equation for u(x). This is illustrated, for instance, in the work of Chau-Chen-He [2]. These solutions model singularities of the mean curvature flow.

If u(x) solves

$$\begin{aligned} \begin{aligned} \sum _{i=1}^n\arctan \lambda _i=s_1+s_2(x\cdot Du(x)-2u(x)), \end{aligned} \end{aligned}$$
(1.5)

then \(X(x,t)=\sqrt{1-2s_2t}\,(x,Du(x))\) is a shrinker or expander solution of (1.1), if \(s_2>0\) or \(s_2<0\), respectively. The mean curvature of the initial submanifold \((x,Du(x))\) is given by \(H=-s_2X^\bot \). Entire smooth solutions to (1.5) for \(s_2>0\) are quadratic polynomials, by Chau-Chen-Yuan [3]; see also Huang-Wang [4] for the smooth convex case. In one dimension, the circle \(x^2+u'(x)^2=1\) is a closed shrinker example, with \(s_2=1\), \(s_1=0\). We refer the reader to the work of Joyce-Lee-Tsui [5] for other non-graphical examples.
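
The one-dimensional circle example can be verified directly; the following sketch is ours, with the potential u below an explicit illustrative choice whose gradient graph is the upper half of the circle:

```latex
% Upper unit semicircle as a gradient graph on (-1,1) (illustrative choice):
u(x) = \tfrac{1}{2}\big( x\sqrt{1-x^{2}} + \arcsin x \big),
\qquad u'(x) = \sqrt{1-x^{2}},
\qquad u''(x) = -\frac{x}{\sqrt{1-x^{2}}}.
% Left-hand side of (1.5): since \tan(\arcsin x) = x/\sqrt{1-x^{2}},
\arctan u''(x) = -\arcsin x.
% Right-hand side of (1.5) with s_1 = 0, s_2 = 1:
x\,u'(x) - 2u(x) = x\sqrt{1-x^{2}} - x\sqrt{1-x^{2}} - \arcsin x = -\arcsin x.
% The two sides agree, so u solves (1.5) and the circle shrinks homothetically.
```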

If u(x) solves

$$\begin{aligned} \begin{aligned} \sum _{i=1}^n\arctan \lambda _i=t_1+t_2\cdot x+t_3\cdot Du(x), \end{aligned} \end{aligned}$$
(1.6)

then \(X(x,t)=(x,Du(x))+t(-t_3,t_2)\) is a translator solution of (1.1), with constant mean curvature \(H=(-t_3,t_2)^\bot \). For example, in one dimension, the grim reaper curve \((x,u'(x))=(x,-\ln \cos (x))\) is a translator with \(t_2=1\), \(t_3=t_1=0\). Entire solutions to (1.6) with Hessian bounds are quadratic polynomials, by Chau-Chen-He [2]; see also Nguyen-Yuan [6] for entire ancient solutions to (1.6) with Hessian conditions.
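
The grim reaper example can be checked in one line; the verification below is our own sketch, using the graphical convention \((x,u'(x))\) above:

```latex
% Grim reaper check (sketch): take a potential u with
u'(x) = -\ln \cos x, \qquad u''(x) = \tan x,
\qquad x \in \big(-\tfrac{\pi}{2}, \tfrac{\pi}{2}\big),
% so the left-hand side of (1.6) is
\arctan u''(x) = \arctan(\tan x) = x = t_1 + t_2\,x + t_3\,u'(x)
% with t_1 = t_3 = 0, t_2 = 1. Accordingly, the translator
% X(x,t) = (x, u'(x)) + t(0,1) slides vertically at unit speed.
```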

The Hamiltonian vector field \(A\cdot z=J{{\bar{\nabla }}}\Theta \) has a real potential given by \(\Theta (x,y)=\frac{1}{2i}\langle z,A\cdot z\rangle _{\mathbb C^n}\) if A is skew-adjoint, i.e. \(A\in \mathfrak{u}(n)\). Since \(\exp (tA)\in U(n)\) preserves the symplectic form \(dz\wedge d{{\bar{z}}}=\sum dz^i\wedge d{{\bar{z}}}^i\), the Hamiltonian flow \(X(x,t)=\exp (tA)(x,Du(x))\) is a Lagrangian immersion with \(X_t=AX=J{{\bar{\nabla }}}\Theta \). For \(A=r_2J\) and \(\Theta (x,y)=r_1+\frac{r_2}{2}|z|^2\), if u(x) solves

$$\begin{aligned} \begin{aligned} \sum _{i=1}^n\arctan \lambda _i=r_1+\frac{r_2}{2}(|x|^2+|Du(x)|^2), \end{aligned} \end{aligned}$$
(1.7)

then \(X(x,t)=\exp (r_2tJ)(x,Du(x))\) is a rotator solution of (1.1), with mean curvature \(H=r_2(JX)^\bot \). The Yin-Yang curve of Altschuler [7] is one such example in one dimension. We also refer the reader to the notes of Yuan [8, p. 3].

A broader class of equations of interest that generalize Eqs. (1.5), (1.6), (1.7), among others, are the Lagrangian mean curvature type equations

$$\begin{aligned} \begin{aligned} \sum _{i=1}^n\arctan \lambda _i=\Theta (x,u(x),Du(x)). \end{aligned} \end{aligned}$$
(1.8)

The study of Lagrangian mean curvature-type equations is driven by a geometric interest, particularly because of the notable special cases illustrated above; see [9, 10] for a detailed discussion.

In this paper, we prove interior Hessian estimates for shrinkers, expanders, translators, and rotators of the Lagrangian mean curvature flow and further extend these results to the broader class of Lagrangian mean curvature-type equations. We assume the Lagrangian phase to be hypercritical, i.e. \(|\Theta |\ge (n-1)\frac{\pi }{2}\). This results in the convexity of the potential of the initial Lagrangian submanifold. For certain \(\Theta =\Theta (x)\), smooth convex solutions were constructed by Wang-Huang-Bao [11] satisfying \(Du(\Omega _1)=\Omega _2\) for prescribed uniformly convex smooth domains \(\Omega _i\), following Brendle-Warren [12] for the constant \(\Theta \) case; see also Huang [13] for a construction using Lagrangian mean curvature flow.

Notations. Before we present our main results, we clarify some terminology.

  I. By \(B_R\) we denote the ball of radius R centered at the origin.

  II. We denote the oscillation of u in \(B_R\) by \(\textrm{osc}_{B_R}(u)\).

  III. Let \(\Gamma _R = B_R\times u(B_R)\times Du(B_R)\subset B_R\times {\mathbb {R}}\times {\mathbb {R}}^n\). Let \(\nu _1,\nu _2\) be constants such that \(\Theta (x,z,p)\) satisfies the following structure conditions:

    $$\begin{aligned} |\Theta _x|,|\Theta _z|,|\Theta _p|&\le \nu _1,\\ |\Theta _{xx}|,|\Theta _{xz}|,|\Theta _{xp}|,|\Theta _{zz}|,|\Theta _{zp}|&\le \nu _2 \nonumber \end{aligned}$$
    (1.9)

    for all \((x,z,p)\in \Gamma _R\). In the above partial derivatives, the variables x, z, p are treated as independent of each other; in particular, these partial derivatives contain no \(D^2u\) or \(D^3u\) terms.

Our main results are the following:

Theorem 1.1

If u is a \(C^4\) solution of any one of the equations (1.5), (1.6), or (1.7) on \(B_{R}(0)\subset {\mathbb {R}}^{n}\), where \(|\Theta |\ge (n-1)\frac{\pi }{2}\), then we have

$$\begin{aligned} |D^2u(0)|\le C_1\exp [C_2(\textrm{osc}_{B_R}(u)/R^2)^{4n-2}] \end{aligned}$$

where \(C_1\) and \(C_2\) are positive constants depending on n and the following:

  (1) \(s_2\) for (1.5)

  (2) \(t_2,t_3\) for (1.6)

  (3) \(r_2\) for (1.7).

Remark 1.1

In the case of Eq. (1.6), since there is no gradient dependence in the derivative of the phase, the precise estimate obtained is

$$\begin{aligned} |D^2u(0)|\le C_1\exp [C_2(\textrm{osc}_{B_R}(u)/R^2)^{3n-2}]. \end{aligned}$$

Theorem 1.2

Suppose that u is a \(C^4\) solution of (1.8) on \(B_{R}(0)\subset {\mathbb {R}}^n\), where \(|\Theta |\ge (n-1)\frac{\pi }{2}\), \(\Theta (x,z,p)\in C^2(\Gamma _R)\) is partially convex in the p variable, and satisfies the structure conditions given by (1.9). Then we have

$$\begin{aligned} |D^2u(0)|\le C_1\exp [C_2(\textrm{osc}_{B_{R}}(u)/R^2)^{4n-2}] \end{aligned}$$

where \(C_1\) and \(C_2\) are positive constants depending on n, \(\nu _1\), \(\nu _2\).

Remark 1.2

From the singular solutions constructed in [10, (1.13)], it is evident that the Hessian estimates in Theorem 1.2 will not hold without partial convexity of \(\Theta \) in the gradient variable Du.

One application of the above results is that \(C^0\) viscosity solutions to (1.5), (1.6), and (1.7) with \(|\Theta |\ge (n-1)\frac{\pi }{2}\) are analytic inside the domain of the solution, as explained in Remark 5.1.

The concavity of the arctangent operator in (1.2) is closely associated with the range of the Lagrangian phase. The phase \((n-2)\frac{\pi }{2}\) is called critical because the level set \(\{\lambda \in {\mathbb {R}}^n \mid \lambda \text { satisfies } (1.2)\}\) is convex only when \(|\Theta |\ge (n-2)\frac{\pi }{2}\) [14, Lemma 2.2]. The arctangent operator \(F(\lambda )=\sum _{i=1}^n\arctan \lambda _i\) is concave if u is convex. The convexity of the level set is evident for \(|\Theta |\ge (n-1)\frac{\pi }{2}\), since that range implies \(\lambda >0\), making F concave. The phase \(|\Theta |\ge (n-1)\frac{\pi }{2}\) is called hypercritical, the phase \(|\Theta |\ge (n-2)\frac{\pi }{2}+\delta \) (for some \(\delta >0\)) is called supercritical, and the phase \(|\Theta |\ge (n-2)\frac{\pi }{2}\) is called critical and supercritical. For solutions of the special Lagrangian equation with critical and supercritical phase \(|\Theta |\ge (n-2)\frac{\pi }{2}\), Hessian estimates have been obtained by Warren-Yuan [15, 16] and Wang-Yuan [17]; see also Li [18] for a compactness approach and Zhou [19] for estimates requiring Hessian constraints which generalize criticality. The singular \(C^{1,\alpha }\) solutions to (1.2) constructed by Nadirashvili-Vlăduţ [20] and Wang-Yuan [21] show that interior regularity is not possible for subcritical phases \(|\Theta |<(n-2)\frac{\pi }{2}\) without an additional convexity condition, as in Bao-Chen [22], Chen-Warren-Yuan [23], and Chen-Shankar-Yuan [24], and that the Dirichlet problem is not classically solvable for arbitrary smooth boundary data. In [25], viscosity solutions to (1.2) that are Lipschitz but not \(C^1\) were constructed.

When the Lagrangian phase is variable, \(\Theta =\Theta (x)\), much less is known. Hessian estimates for convex smooth solutions with \(C^{1,1}\) phase \(\Theta =\Theta (x)\) were obtained by Warren in [26, Theorem 8]. For \(C^{1,1}\) supercritical phase, interior Hessian and gradient estimates were established by Bhattacharya in [27]. For \(C^{1,1}\) critical and supercritical phase, interior Hessian and gradient estimates were established by Bhattacharya [27, 28] and, for \(C^2\) phase, by Bhattacharya-Mooney-Shankar [29]. See also Lu [30]. Recently, in [31], Zhou established interior Hessian estimates for supercritical \(C^{0,1}\) phase. For convex viscosity solutions, interior regularity was established for \(C^2\) phase by Bhattacharya-Shankar in [10, 32]. If \(\Theta \) is merely \(C^{\alpha }\) and supercritical, counterexamples to Hessian estimates exist, as shown in [33].

While our knowledge is still limited when it comes to the variable Lagrangian phase \(\Theta (x)\), it narrows even further when the Lagrangian phase depends on both the potential and the gradient of the potential of the Lagrangian submanifold, i.e., \(\Theta (x,u,Du)\). Applying the integral method of [27] to the current problem poses numerous challenges. For instance, establishing the Jacobi-type inequality becomes significantly more intricate due to the presence of the gradient term Du in \(\Theta \). Consequently, it is by no means straightforward to combine the derivatives of \(\Theta \) into a single constant term as in [27]. Next, due to the presence of the gradient term in the phase, the Michael-Simon Sobolev inequality cannot be used to estimate the integral of the volume form by a weighted volume of the non-minimal Lagrangian graph. We circumvent this issue by using the Lewy-Yuan rotation [14, p. 122], which is reminiscent of the technique used in [23]. This rotation results in a uniformly elliptic Jacobi inequality on the rotated Lagrangian graph, which allows the use of a local maximum principle [34, Theorem 9.20]. However, the constants appearing in our Jacobi inequality depend on the oscillation of the potential, so we need an explicit dependence on \(\textrm{osc}(u)\) of the constants arising in the local maximum principle. To address this, we state and prove a version of the local maximum principle [34, Theorem 9.20] applied to our specific equation (see Appendix). Finally, rotating back to the original coordinates and keeping track of the constants appearing at each step, we bound the slope of the gradient graph \((x,Du(x))\) at the origin by an exponential function of the oscillation of u. Note that since the Michael-Simon mean value [35, Theorem 3.4] and Sobolev [35, Theorem 2.1] inequalities are not employed, there is no explicit dependence on the mean curvature bound in our final estimate.

The critical and supercritical phase case \(|\Theta |\ge (n-2)\frac{\pi }{2}\) introduces new challenges requiring new techniques, which we present along with the supercritical phase case \(|\Theta |\ge (n-2)\frac{\pi }{2}+\delta \) in forthcoming work [36].

2 Preliminaries

For the convenience of the readers, we recall some preliminary results. We first introduce some notations that will be used in this paper. The induced Riemannian metric on the Lagrangian submanifold \(X=(x,Du(x))\subset {\mathbb {R}}^n\times {\mathbb {R}}^n\) is given by

$$\begin{aligned} g=I_n+(D^2u)^2. \end{aligned}$$

We denote

$$\begin{aligned} \partial _i=\frac{\partial }{\partial x_i} \text { , } \partial _{ij}=\frac{\partial ^2}{\partial x_i\partial x_j} \text { , } u_i=\partial _iu \text { , } u_{ij}=\partial _{ij}u. \end{aligned}$$

Note that for the functions defined below, the subscripts on the left do not represent partial derivatives

$$\begin{aligned} h_{ijk}=\sqrt{g^{ii}}\sqrt{g^{jj}}\sqrt{g^{kk}}u_{ijk},\quad g^{ii}=\frac{1}{1+\lambda _i^2}. \end{aligned}$$

Here \((g^{ij})\) is the inverse of the matrix g and \(h_{ijk}\) denotes the second fundamental form when the Hessian of u is diagonalized. The volume form, gradient, and inner product with respect to the metric g are given by

$$\begin{aligned}&dv_g=\sqrt{\det g}dx = Vdx \text { , }\qquad \nabla _g v=g^{ij}v_iX_j,\\&\langle \nabla _gv,\nabla _g w\rangle _g =g^{ij}v_iw_j \text { , }\quad |\nabla _gv|^2=\langle \nabla _gv,\nabla _g v\rangle _g. \end{aligned}$$

Next, we derive the Laplace-Beltrami operator on the non-minimal submanifold \((x,Du(x))\). Taking variations of the energy functional \(\int |\nabla _g v|^2 dv_g\) with respect to v, one gets the Laplace-Beltrami operator of the metric g:

$$\begin{aligned} \Delta _g =\frac{1}{\sqrt{ g}}\partial _i(\sqrt{ g}g^{ij}\partial _j )&=g^{ij}\partial _{ij}+\frac{1}{\sqrt{g}}\partial _i(\sqrt{g}g^{ij})\partial _j \\&=g^{ij}\partial _{ij}-g^{jp}u_{pq}(\partial _q\Theta ) \partial _j. \nonumber \end{aligned}$$
(2.1)

The last equation follows from the following computation:

$$\begin{aligned} \frac{1}{\sqrt{g}}\partial _i(\sqrt{g}g^{ij})&=\frac{1}{\sqrt{g}} \partial _i(\sqrt{g})g^{ij}+\partial _ig^{ij} \nonumber \\&=\frac{1}{2}(\partial _i \ln g)g^{ij}+\partial _kg^{kj}\nonumber \\&=\frac{1}{2}g^{kl}\partial _i g_{kl}g^{ij}-g^{kl}\partial _k g_{lb}g^{bj}\nonumber \\&=-g^{jp}g^{ab}u_{abq}u_{pq}=-g^{jp} u_{pq}\partial _q\Theta \end{aligned}$$
(2.2)

where the last equation follows from (2.3) and (2.4) below. The first derivative of the metric g is given by

$$\begin{aligned} \partial _i g_{ab}=\partial _i(\delta _{ab}+u_{ak}u_{kb})=u_{aik}u_{kb} +u_{bik}u_{ka}\overset{\text {at } x_0}{=}u_{abi}(\lambda _a+\lambda _b), \end{aligned}$$
(2.3)

assuming the Hessian of u is diagonalized at \(x_0\). On taking the gradient of both sides of the Lagrangian mean curvature type Eq. (1.8), we get

$$\begin{aligned} \sum _{a,b=1}^{n}g^{ab}u_{jab}=\partial _j\Theta (x,u(x),Du(x)). \end{aligned}$$
(2.4)
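
The computations (2.1)-(2.4) take a transparent form in one dimension; the following is a sketch we include for orientation:

```latex
% n = 1 sketch: g = 1 + (u'')^2 is scalar and \Theta = \arctan u'', so
% (2.4) reads  g^{11} u''' = \Theta'(x),  i.e.  u'''/(1+(u'')^2) = \Theta'(x),
% and (2.1) becomes
\Delta_g = \frac{1}{1+(u'')^{2}}\,\frac{d^{2}}{dx^{2}}
         - \frac{u''}{1+(u'')^{2}}\,\Theta'(x)\,\frac{d}{dx}.
% One checks directly that
% \tfrac{1}{\sqrt{g}}\,\big(\sqrt{g}\, g^{11}\big)'
%   = -(1+(u'')^{2})^{-2}\, u''\, u''' = -g^{11} u''\,\Theta'(x),
% which is the one-dimensional case of (2.2).
```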

For the general phase \(\Theta (x,u(x),Du(x))\), assuming the Hessian \(D^2u\) is diagonalized at \(x_0\), we get

$$\begin{aligned} \partial _i \Theta (x,u(x),Du(x))&= \Theta _{x_i} + \Theta _u u_i + \sum _{k=1}^n \Theta _{u_k}u_{ki} \end{aligned}$$
(2.5)
$$\begin{aligned}&\overset{x_0}{=}\ \Theta _{x_i} + \Theta _u u_i + \Theta _{u_i}\lambda _i. \end{aligned}$$
(2.6)

So from (2.6) and (1.3), we get, at the point \(x_0\in B_R\),

$$\begin{aligned} |\textbf{H}|_g^2=g^{ii}(\partial _i\Theta )^2&=g^{ii}\bigg (\Theta _{x_i}^2 + \Theta _u^2u_i^2 + \Theta _{u_i}^2\lambda _i^2 + 2\Theta _{x_i}\Theta _{u}u_i + 2\Theta _{x_i}\Theta _{u_i}\lambda _i +2\Theta _{u}\Theta _{u_i}u_i\lambda _i\bigg )\\&\le 3g^{ii}\bigg (\Theta _{x_i}^2 + \Theta _u^2u_i^2 + \Theta _{u_i}^2\lambda _i^2\bigg )\\&\le C(\nu _1, n, \textrm{osc}_{B_{R+1}}(u)). \end{aligned}$$

Taking the j-th partial derivative of (2.5), we get

$$\begin{aligned} \partial _{ij}\Theta (x,u(x),Du(x))&= \Theta _{x_ix_j} + \Theta _{x_i u}u_j + \sum _{r=1}^n \Theta _{x_iu_r}u_{rj}\nonumber \\&\quad +\left( \Theta _{ux_j} + \Theta _{uu}u_j + \sum _{s=1}^n \Theta _{u u_s}u_{sj} \right) u_i + \Theta _u u_{ij}\nonumber \\&\quad +\sum _{k=1}^n \left( \Theta _{u_kx_j} + \Theta _{u_ku}u_j + \sum _{\ell =1}^n \Theta _{u_ku_\ell }u_{\ell j}\right) u_{ki}+\sum _{k=1}^n \Theta _{u_k}u_{kij}\nonumber \\&\overset{x_0}{=}\ \Theta _{x_ix_j} + \Theta _{x_i u}u_j + \Theta _{x_iu_j}\lambda _j \\&\quad +\left( \Theta _{ux_j} + \Theta _{uu}u_j + \Theta _{u u_j}\lambda _j \right) u_i + \Theta _u \lambda _i\delta _{ij}\nonumber \\&\quad +\left( \Theta _{u_ix_j} + \Theta _{u_iu}u_j + \Theta _{u_iu_j}\lambda _j\right) \lambda _i + \sum _{k=1}^n \Theta _{u_k}u_{kij}.\nonumber \end{aligned}$$
(2.7)

Observe that when \(\Theta \) is constant, one can choose harmonic coordinates \(\Delta _g x=0\), which reduces the Laplace-Beltrami operator on the minimal submanifold \(\{(x,Du(x))|x\in B_R(0)\}\) to the linearized operator of (1.2) at u.

3 The slope as a subsolution to a fully nonlinear PDE

In this section, we prove a Jacobi-type inequality for the slope of the gradient graph \((x,Du(x))\), i.e., we show that a certain function of the slope of the gradient graph \((x,Du(x))\) is almost strongly subharmonic.

Proposition 3.1

Let u be a \(C^4\) convex solution of (1.8) in \({\mathbb {R}}^{n}\). Suppose that the Hessian \(D^{2}u\) is diagonalized at the point \(x_0\). Then we have the following at \(x_0\):

$$\begin{aligned} \frac{1}{n}|\nabla _g \log \sqrt{\det g}|^2_g\le \sum _{i=1}^n\lambda _i^2h_{iii}^2 + \sum _{i\ne j} \lambda _j^2h_{jji}^2 \end{aligned}$$

and

$$\begin{aligned} \Delta _g \log \sqrt{\det g}&\overset{x_0}{=}\sum _{i=1}^n(1 + \lambda _i^2)h_{iii}^2 + \sum _{j\ne i}(3 + \lambda _j^2 + 2\lambda _i\lambda _j)h_{jji}^2\nonumber \\&\quad + 2\sum _{i<j<k}(3 + \lambda _i\lambda _j + \lambda _j\lambda _k + \lambda _k\lambda _i)h_{ijk}^2\nonumber \\&\quad +\sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta ) \partial _i\log \sqrt{\det g}.\nonumber \end{aligned}$$
(3.1)

Proof

We compute some derivatives of the metric g. We have

$$\begin{aligned} \partial _j g_{ab}&= \sum _{k=1}^n(u_{akj}u_{kb} + u_{ak}u_{kbj})\nonumber \\&\overset{x_0}{=}\ u_{abj}(\lambda _a + \lambda _b ) \end{aligned}$$
(3.2)

and

$$\begin{aligned} \partial _ig^{ab}&= -g^{ak}\partial _ig_{kl}g^{lb}\nonumber \\&\overset{x_0}{=}\ -g^{aa}\partial _ig_{ab}g^{bb}\nonumber \\&\overset{x_0}{=}\ -g^{aa}g^{bb}u_{abi}(\lambda _a + \lambda _b ). \end{aligned}$$
(3.3)

Hence

$$\begin{aligned} \partial _{ij}g_{ab}&= \sum _{k=1}^n(u_{akji}u_{kb} + u_{akj}u_{kbi} + u_{aki}u_{kbj} + u_{ak}u_{kbij})\\&\overset{x_0}{=}\ u_{abji}(\lambda _a + \lambda _b ) +\sum _{k=1}^n(u_{akj}u_{kbi} + u_{aki}u_{kbj}). \end{aligned}$$

In order to substitute the 4th order derivatives, we take the partial derivative of (2.4) and get

$$\begin{aligned} \sum _{i,j=1}^ng^{ij}u_{ijk\ell }&= \partial _{k\ell }\Theta - \sum _{i,j=1}^n\partial _\ell g^{ij} u_{ijk}\\&\overset{x_0}{=}\ \partial _{k\ell }\Theta + \sum _{i,j=1}^ng^{ii}g^{jj}u_{ij\ell }u_{ijk}(\lambda _i + \lambda _j). \end{aligned}$$

Thus, we have

$$\begin{aligned} \sum _{i,j=1}^ng^{ij}\partial _{ij}g_{ab}&\overset{x_0}{=}\ (\lambda _a + \lambda _b)\partial _{ab}\Theta + \sum _{i,j=1}^ng^{ii}g^{jj}u_{ija}u_{ijb}(\lambda _i + \lambda _j )(\lambda _a + \lambda _b) \nonumber \\&\quad + \sum _{i,k=1}^n2g^{ii}u_{aki}u_{bki}. \end{aligned}$$
(3.4)

Next, we compute the norm of the gradient:

$$\begin{aligned} \frac{1}{n}|\nabla _g \log \sqrt{\det g}|^2_g&\overset{x_0}{=}\sum _{i=1}^n\frac{1}{n}g^{ii}\left( \partial _i\log \sqrt{\det g}\right) ^2\nonumber \\&\overset{x_0}{=}\sum _{i=1}^n\frac{1}{n}g^{ii}\left( \sum _{a,b=1}^n \frac{1}{2}g^{ab}\partial _ig_{ab}\right) ^2\nonumber \\&\overset{x_0}{=}\sum _{i=1}^n\frac{1}{n}g^{ii}\left( \sum _{a,b=1}^n \frac{1}{2}g^{ab}u_{abi}(\lambda _a + \lambda _b)\right) ^2 \quad \text {from }(3.2)\nonumber \\&\overset{x_0}{=}\sum _{i=1}^n\frac{1}{n}g^{ii}\left( \sum _{a=1}^n g^{aa}u_{aai}\lambda _a\right) ^2\\&\le \sum _{i,a=1}^n g^{ii}(g^{aa})^2u_{aai}^2\lambda _a^2\nonumber \\&\overset{x_0}{=}\sum _{i,a=1}^n h_{aai}^2\lambda _a^2\nonumber \\&\overset{x_0}{=}\ \sum _{i=1}^n\lambda _i^2h_{iii}^2 + \sum _{i\ne j} \lambda _j^2h_{jji}^2.\nonumber \end{aligned}$$
(3.5)

From here, we need to calculate \(\Delta _g \log \sqrt{\det g}\), where again, the Laplace-Beltrami operator takes the form of (2.1). From the above calculations, we observe that

$$\begin{aligned} \sum _{i,j=1}^ng^{ij}\partial _{ij}\log \sqrt{\det g}&= \sum _{i,j=1}^ng^{ij}\partial _j\left( \frac{1}{\sqrt{\det g}}\frac{1}{2\sqrt{\det g}}\partial _i\det g \right) \nonumber \\&=\sum _{i,j,a,b=1}^ng^{ij}\partial _j\left( \frac{1}{2\det g}\det g\; g^{ab}\partial _i g_{ab} \right) \nonumber \\&=\sum _{i,j,a,b=1}^ng^{ij} \frac{1}{2}\partial _j\left( g^{ab}\partial _i g_{ab}\right) \nonumber \\&=\sum _{i,j,a,b=1}^n g^{ij}\frac{1}{2}\left( (\partial _j g^{ab})\partial _ig_{ab} + g^{ab}\partial _{ij}g_{ab} \right) . \end{aligned}$$
(3.6)

Using (3.2) and (3.3), we see that the first term on the right-hand side of (3.6) becomes

$$\begin{aligned} \sum _{i,j,a,b=1}^n\frac{1}{2}g^{ij}(\partial _j g^{ab})\partial _ig_{ab} \overset{x_0}{=}-\frac{1}{2}\sum _{i,a,b=1}^n g^{ii}g^{aa}g^{bb}u_{abi}^2(\lambda _a + \lambda _b )^2. \end{aligned}$$
(3.7)

Using (3.4), the second term on the right-hand side of (3.6) becomes

$$\begin{aligned} \sum _{i,j,a,b=1}^n\frac{1}{2}g^{ij}g^{ab}\partial _{ij}g_{ab}&\overset{x_0}{=}\ \sum _{a=1}^n g^{aa}\lambda _a\partial _{aa}\Theta + \sum _{i,j,a=1}^n g^{aa}g^{ii}g^{jj}u_{ija}^2(\lambda _i + \lambda _j)\lambda _a \nonumber \\&\quad + \sum _{i,k,a=1}^n g^{aa}g^{ii}u_{aki}^2. \end{aligned}$$
(3.8)

Combining (3.7) and (3.8), we get

$$\begin{aligned} \sum _{i,j=1}^n g^{ij}\partial _{ij}\log \sqrt{\det g}&\overset{x_0}{=} \sum _{a=1}^n g^{aa}\lambda _a\partial _{aa}\Theta +\sum _{i,j,a=1}^n g^{aa}g^{ii}g^{jj}u_{ija}^2(\lambda _i + \lambda _j)\lambda _a \\&\quad + \sum _{i,k,a=1}^n g^{aa}g^{ii}u_{aki}^2 -\frac{1}{2} \sum _{i,a,b=1}^n g^{ii}g^{aa}g^{bb}u_{abi}^2 (\lambda _a + \lambda _b )^2\\&\overset{x_0}{=}\ \sum _{a=1}^n g^{aa}\lambda _a\partial _{aa}\Theta + \sum _{a,b,c=1}^n g^{aa}g^{bb}g^{cc}u_{abc}^2(\lambda _b + \lambda _c)\lambda _a \\&\quad + \sum _{a,b,c=1}^n g^{aa}g^{bb}g^{cc}u_{abc}^2(1 + \lambda _c^2) -\frac{1}{2} \sum _{a,b,c=1}^n g^{aa}g^{bb}g^{cc}u_{abc}^2 (\lambda _a + \lambda _b )^2\\&\overset{x_0}{=}\sum _{a=1}^n g^{aa}\lambda _a\partial _{aa}\Theta + \sum _{a,b,c=1}^n h_{abc}^2(1+\lambda _b\lambda _c)\\&\overset{x_0}{=}\sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta + \sum _{i=1}^n(1 + \lambda _i^2)h_{iii}^2 + \sum _{j\ne i}(3 + \lambda _j^2 + 2\lambda _i\lambda _j)h_{jji}^2\\&\quad + 2\sum _{i<j<k}(3 + \lambda _i\lambda _j + \lambda _j\lambda _k + \lambda _k\lambda _i)h_{ijk}^2. \end{aligned}$$

Altogether, we get

$$\begin{aligned} \Delta _g \log \sqrt{\det g}&\overset{x_0}{=}\sum _{i=1}^n(1 + \lambda _i^2)h_{iii}^2 + \sum _{j\ne i}(3 + \lambda _j^2 + 2\lambda _i\lambda _j)h_{jji}^2\\&\quad + 2\sum _{i<j<k}(3 + \lambda _i\lambda _j + \lambda _j\lambda _k + \lambda _k\lambda _i)h_{ijk}^2\\&\quad +\sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g}. \end{aligned}$$

\(\square \)

Lemma 3.1

Let u be a \(C^4\) convex solution of (1.8) in \(B_2(0)\subset {\mathbb {R}}^{n}\) where \(\Theta (x,z,p)\in C^2(\Gamma _2)\) is partially convex in the p variable and satisfies (1.9). Suppose that the Hessian \(D^{2}u\) is diagonalized at \(x_0\in B_1(0)\). Then at \(x_0\), the function \(\log \sqrt{\det g}\) satisfies

$$\begin{aligned} \Delta _g \log \sqrt{\det g}\ge c(n)|\nabla _g\log \sqrt{\det g}|^2-C \end{aligned}$$
(3.9)

where \(C=C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_2}(u))^2)\).

Proof

  • Step 1. From Proposition 3.1, we get, at \(x_0\in B_1(0)\),

    $$\begin{aligned} \Delta _g \log \sqrt{\det g} - \frac{1}{n}|\nabla _g \log \sqrt{\det g}|^2_g&\ge \sum _{i=1}^n(1 + \lambda _i^2)h_{iii}^2 + \sum _{j\ne i}(3 + \lambda _j^2 + 2\lambda _i\lambda _j)h_{jji}^2 \nonumber \\&\quad + 2\sum _{i<j<k}(3 + \lambda _i\lambda _j + \lambda _j\lambda _k + \lambda _k\lambda _i)h_{ijk}^2\nonumber \\&\quad - \sum _{i=1}^n\lambda _i^2h_{iii}^2 - \sum _{i\ne j}\lambda _j^2h_{jji}^2\nonumber \\&\quad +\sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g} \nonumber \\&=\sum _{i=1}^nh_{iii}^2 + \sum _{j\ne i}(3 + 2\lambda _i\lambda _j)h_{jji}^2 \nonumber \\&\quad + 2\sum _{i<j<k}(3 + \lambda _i\lambda _j + \lambda _j\lambda _k + \lambda _k\lambda _i)h_{ijk}^2\nonumber \\&\quad +\sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g}\nonumber \\&\ge \sum _{i=1}^n g^{ii}\lambda _i\partial _{ii}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g} \end{aligned}$$
    (3.10)

    where the last inequality follows from the convexity of u.

    From here, we use (2.7) to get

    $$\begin{aligned} \sum _{a=1}^n g^{aa}\lambda _a \partial _{aa}\Theta&\overset{x_0}{=}\sum _{a=1}^n \frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + \Theta _{x_a u}u_a + \Theta _{x_au_a}\lambda _a\nonumber \\&\quad +\left( \Theta _{ux_a} + \Theta _{uu}u_a + \Theta _{u u_a}\lambda _a \right) u_a + \Theta _u \lambda _a\nonumber \\&\quad +\left( \Theta _{u_ax_a} + \Theta _{u_au}u_a + \Theta _{u_au_a}\lambda _a\right) \lambda _a\nonumber \\&\quad +\sum _{k=1}^n \Theta _{u_k}u_{kaa}\bigg ]\nonumber \\&\overset{x_0}{=}\sum _{a=1}^n \frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a + 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a\\&\quad + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2 + \sum _{k=1}^n \Theta _{u_k}u_{kaa}\bigg ] \nonumber \end{aligned}$$
    (3.11)
    $$\begin{aligned}&\overset{x_0}{=}\sum _{a=1}^n \frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a+ 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a \\&\quad + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2\bigg ] + \sum _{k=1}^n \Theta _{u_k}\partial _k\log \sqrt{\det g}\quad \text {using the computation in } (3.5). \nonumber \end{aligned}$$
    (3.12)

    Similarly, using (2.6), we get

    $$\begin{aligned} \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g}&\overset{x_0}{=}\ \sum _{i=1}^n \frac{\lambda _i}{1+ \lambda _i^2}\left( \Theta _{x_i} + \Theta _u u_i + \Theta _{u_i}\lambda _i\right) \partial _i\log \sqrt{\det g}. \end{aligned}$$
    (3.13)

    Hence, the right-hand side of (3.10) becomes

    $$\begin{aligned}&\sum _{a=1}^n g^{aa}\lambda _a\partial _{aa}\Theta - \sum _{i=1}^n g^{ii}\lambda _i(\partial _i\Theta )\partial _i\log \sqrt{\det g}\nonumber \\&\quad \overset{x_0}{=}\sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2} \bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a+ 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2\bigg ]\nonumber \\&\qquad + \sum _{k=1}^n\Theta _{u_k}\partial _k\log \sqrt{\det g}-\sum _{k=1}^n\frac{\lambda _k}{1+ \lambda _k^2}\left( \Theta _{x_k} + \Theta _u u_k + \Theta _{u_k}\lambda _k\right) \partial _k\log \sqrt{\det g} \nonumber \\&\overset{x_0}{=}\sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a+ 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2\bigg ]\\&\qquad +\sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( \Theta _{u_k} - \Theta _{x_k}\lambda _k - \Theta _u u_k\lambda _k\right) \partial _k\log \sqrt{\det g}. \end{aligned}$$
    (3.14)
  • Step 2.1. Using Young’s inequality, the last term of (3.14) can be bounded below by

    $$\begin{aligned}&\sum _{k=1}^n\frac{1}{1+ \lambda _k^2} \left( \Theta _{u_k} - \Theta _{x_k}\lambda _k - \Theta _u u_k\lambda _k\right) \partial _k\log \sqrt{\det g} \nonumber \\&\quad \ge -\sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( |\Theta _{u_k}| + |\Theta _{x_k}|\lambda _k + |\Theta _u u_k|\lambda _k\right) |\partial _k\log \sqrt{\det g}|\nonumber \\&\quad \ge -\frac{1}{2\epsilon }\sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( \Theta _{u_k}^2 + \Theta _{x_k}^2\lambda _k^2 + \Theta _u^2 u_k^2\lambda _k^2\right) - \frac{\epsilon }{2}|\nabla _g\log \sqrt{\det g}|_g^2. \end{aligned}$$
    (3.15)

    Altogether, from (3.10), (3.14), and (3.15), we have

    $$\begin{aligned}&\Delta _g \log \sqrt{\det g} - \left( \frac{1}{n}-\frac{\epsilon }{2}\right) |\nabla _g \log \sqrt{\det g}|^2_g\\&\quad \ge \sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a + 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2\bigg ]\\&\qquad -\frac{1}{2\epsilon }\sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( \Theta _{u_k}^2 + \Theta _{x_k}^2\lambda _k^2 + \Theta _u^2 u_k^2\lambda _k^2\right) . \end{aligned}$$

    Let \(\epsilon = \frac{1}{n}\), so that we achieve

    $$\begin{aligned} \Delta _g&\log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g\nonumber \\&\ge \sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2} \nonumber \\&\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a + 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2\bigg ] \end{aligned}$$
    (3.16)
    $$\begin{aligned}&\quad -\frac{n}{2}\sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( \Theta _{u_k}^2 + \Theta _{x_k}^2\lambda _k^2 + \Theta _u^2 u_k^2\lambda _k^2\right) . \end{aligned}$$
    (3.17)
  • Step 2.2. Here, we use the assumption that \(\Theta (x,z,p)\) is partially convex in the p variable, that is, \(\Theta _{u_a u_a} \ge 0\). This follows from the fact that \(D^2_{Du}\Theta \) is a symmetric positive semidefinite matrix. Combined with the fact that u is a convex function, we get

    $$\begin{aligned} \frac{\lambda _a^3}{1 + \lambda _a^2}\Theta _{u_a u_a} \ge 0. \end{aligned}$$

    Thus, the sum in (3.16) satisfies

    $$\begin{aligned}&\sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a+ 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2 + \Theta _{u_au_a}\lambda _a^2 \bigg ]\nonumber \\&\quad \ge \sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [\Theta _{x_ax_a} + 2\Theta _{x_a u}u_a+ 2\Theta _{x_au_a}\lambda _a + 2\Theta _{uu_a}u_a\lambda _a + \Theta _{u}\lambda _a + \Theta _{uu}u_a^2\bigg ]\nonumber \\&\quad \ge -\sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [|\Theta _{x_ax_a}| + 2|\Theta _{x_a u}u_a|+ 2|\Theta _{x_au_a}|\lambda _a + 2|\Theta _{uu_a}u_a|\lambda _a + |\Theta _{u}|\lambda _a + |\Theta _{uu}|u_a^2\bigg ]. \end{aligned}$$
    (3.18)

    Now, for all \(\lambda _a \in [0,\infty )\), we have that

    $$\begin{aligned} 0 \le \frac{\lambda _a}{1 + \lambda _a^2}\le 1 \quad \text {and} \quad 0 \le \frac{\lambda _a^2}{1 + \lambda _a^2}\le 1. \end{aligned}$$

    Hence, (3.17) and (3.18) yield

    $$\begin{aligned}&\Delta _g \log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g\nonumber \\&\quad \ge - \sum _{a=1}^n\bigg [|\Theta _{x_ax_a}| + 2|\Theta _{x_a u}u_a| + 2|\Theta _{x_au_a}| + 2|\Theta _{uu_a}u_a| + |\Theta _{u}| + |\Theta _{uu}|u_a^2\bigg ]\\&\qquad - \frac{n}{2}\sum _{a=1}^n\left( \Theta _{u_a}^2 + \Theta _{x_a}^2 + \Theta _u^2 u_a^2\right) .\nonumber \end{aligned}$$
    (3.19)

    We observe that the right-hand side of (3.19) is bounded in magnitude as follows:

    $$\begin{aligned}&\sum _{a=1}^n \bigg [|\Theta _{x_ax_a}| + 2|\Theta _{x_a u}u_a|+ 2|\Theta _{x_au_a}| + 2|\Theta _{uu_a}u_a| + |\Theta _{u}| + |\Theta _{uu}|u_a^2\bigg ]\\&\quad + \frac{n}{2}\sum _{a=1}^n\left( \Theta _{u_a}^2 + \Theta _{x_a}^2 + \Theta _u^2 u_a^2\right) \\&\quad \le C(n,\nu _1,\nu _2)\left( 1 + \sum _{a=1}^n(|u_a| + u_a^2)\right) \\&\quad \le C(n,\nu _1,\nu _2)(1 + |Du(x_0)| + |Du(x_0)|^2)\\&\quad \le C(n,\nu _1,\nu _2)(1 + ||Du||_{L^\infty (B_1)} + ||Du||_{L^\infty (B_1)}^2)\\&\quad \le C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_2}(u))^2) \end{aligned}$$

    where the last inequality comes from the convexity of u and Young’s inequality.

    Therefore,

    $$\begin{aligned} \Delta _g \log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g \ge - C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_2}(u))^2) \end{aligned}$$

    as desired.

\(\square \)
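
The elementary bounds \(0 \le \frac{\lambda _a}{1+\lambda _a^2}\le 1\) and \(0 \le \frac{\lambda _a^2}{1+\lambda _a^2}\le 1\) used in Step 2.2 can be spot-checked numerically. The following is a minimal sketch (the sample values are our own choice) and is not part of the proof:

```python
# Spot-check the elementary ratio bounds used in Step 2.2:
# for lambda >= 0, both lambda/(1+lambda^2) and lambda^2/(1+lambda^2)
# lie in [0, 1].  (The first ratio in fact peaks at 1/2 when lambda = 1.)
samples = [0.0, 1e-6, 0.5, 1.0, 2.0, 10.0, 1e3, 1e8]

for lam in samples:
    r1 = lam / (1.0 + lam ** 2)
    r2 = lam ** 2 / (1.0 + lam ** 2)
    assert 0.0 <= r1 <= 1.0
    assert 0.0 <= r2 <= 1.0

# The maximum of the first ratio over the samples is attained at lambda = 1.
assert max(l / (1.0 + l ** 2) for l in samples) == 0.5
```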

Corollary 3.1

Let u be a \(C^4\) convex solution to (1.5) in \(B_2(0)\subset {\mathbb {R}}^n\). Assuming the Hessian \(D^2u\) is diagonalized at \(x_0\in B_1(0)\), (3.8) holds with \(C= C(n,s_2)(1+(\textrm{osc}_{B_2}(u))^2)\).

Proof

Let \(x_0\in B_1\). As \(\Theta (x,u(x),Du(x)) = s_1 + s_2(x\cdot Du(x) - 2u(x))\), we get that

$$\begin{aligned} \begin{matrix} \Theta _{x_i}= s_2u_i &{} \Theta _{x_ix_j}= 0 &{} \Theta _{x_iu}= 0&{}\Theta _{x_iu_j}=s_2\delta _{ij}\\ \Theta _u = -2s_2 &{} \Theta _{ux_j} = 0&{} \Theta _{uu} = 0 &{}\Theta _{uu_j}=0\\ \Theta _{u_i} = s_2x_i &{} \Theta _{u_ix_j}= s_2\delta _{ij} &{} \Theta _{u_iu}= 0 &{} \Theta _{u_iu_j}= 0. \end{matrix} \end{aligned}$$

Hence (3.13) becomes zero and (3.14) becomes

$$\begin{aligned} \sum _{k=1}^n\frac{s_2}{1+ \lambda _k^2}\left( x_k + u_k\lambda _k\right) \partial _k\log \sqrt{\det g}. \end{aligned}$$

Applying Young’s inequality and simplifying, we get

$$\begin{aligned} \Delta _g \log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g \ge -\frac{ns_2^2}{2}\left( |x_0|^2 + |Du(x_0)|^2\right) \ge -C. \end{aligned}$$

\(\square \)

Corollary 3.2

Let u be a \(C^4\) convex solution to (1.6) in \(B_2(0)\subset {\mathbb {R}}^n\). Assuming the Hessian \(D^2u\) is diagonalized at \(x_0\in B_1(0)\), (3.8) holds with \(C= C(n,t_2,t_3)\).

Proof

As \(\Theta (x,u(x),Du(x)) = t_1 + t_2\cdot x + t_3\cdot Du(x)\), we get

$$\begin{aligned} \Theta _{x_i} = t_{2,i} \quad \text { and }\quad \Theta _{u_i} = t_{3,i} \end{aligned}$$

where all the remaining derivatives are zero. Hence (3.13) is zero and (3.14) becomes

$$\begin{aligned} \sum _{k=1}^n\frac{1}{1+ \lambda _k^2}\left( t_{3,k} + t_{2,k}\lambda _k\right) \partial _k\log \sqrt{\det g}. \end{aligned}$$

Applying Young’s inequality and simplifying, we get

$$\begin{aligned} \Delta _g \log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g \ge -\frac{n}{2}\left( |t_2|^2 + |t_3|^2\right) = -C. \end{aligned}$$

\(\square \)
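
The Young's inequality step above can be checked pointwise: for each k, \(\frac{(t_{3,k}+t_{2,k}\lambda _k)b_k}{1+\lambda _k^2}\ge -\frac{b_k^2}{2n(1+\lambda _k^2)}-\frac{n}{2}(t_{2,k}^2+t_{3,k}^2)\), where \(b_k = \partial _k\log \sqrt{\det g}\); summing over k then gives the stated bound. The following sketch (the dimension and sampling ranges are our own choices) verifies this scalar inequality by random sampling:

```python
import random

random.seed(0)
n = 3  # illustrative dimension

def young_term_ok(t2k, t3k, lam, bk, n):
    # Pointwise Young step: the drift term is bounded below by
    # -b_k^2 / (2n(1+lam^2)) - (n/2)(t2k^2 + t3k^2).
    lhs = (t3k + t2k * lam) * bk / (1.0 + lam ** 2)
    rhs = -bk ** 2 / (2.0 * n * (1.0 + lam ** 2)) \
          - (n / 2.0) * (t2k ** 2 + t3k ** 2)
    return lhs >= rhs - 1e-9  # tolerance for floating point

for _ in range(10000):
    t2k = random.uniform(-5, 5)
    t3k = random.uniform(-5, 5)
    lam = random.uniform(0, 100)   # lambda_k >= 0 since u is convex
    bk = random.uniform(-50, 50)
    assert young_term_ok(t2k, t3k, lam, bk, n)
```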

Corollary 3.3

Let u be a \(C^4\) convex solution to (1.7) in \(B_2(0)\subset {\mathbb {R}}^n\). Assuming the Hessian \(D^2u\) is diagonalized at \(x_0\in B_1(0)\), (3.8) holds with \(C= C(n,r_2)(1 + (\textrm{osc}_{B_2}(u))^2)\).

Proof

Let \(x_0\in B_1\). As \(\Theta (x,u(x),Du(x)) = r_1 + \frac{r_2}{2}(|x|^2 + |Du(x)|^2)\), we get

$$\begin{aligned} \begin{matrix} \Theta _{x_i}= r_2x_i &{} \Theta _{x_ix_j}= r_2\delta _{ij} &{}\Theta _{x_iu_j}=0\\ \Theta _{u_i} = r_2u_i &{} \Theta _{u_ix_j}= 0 &{} \Theta _{u_iu_j}= r_2\delta _{ij}. \end{matrix} \end{aligned}$$

Then (3.13) and (3.14) are bounded below by

$$\begin{aligned}&\sum _{a=1}^n\frac{\lambda _a}{1+\lambda _a^2}\bigg [r_2 +r_2\lambda _a^2\bigg ]+\sum _{k=1}^n\frac{r_2}{1+ \lambda _k^2}\left( u_k - x_k\lambda _k\right) \partial _k\log \sqrt{\det g}\\&\quad \ge \sum _{k=1}^n\frac{r_2}{1+ \lambda _k^2}\left( u_k - x_k\lambda _k\right) \partial _k\log \sqrt{\det g} \end{aligned}$$

since \(r_2\ge 0\) and \(\lambda _a\ge 0\) for all \(1\le a\le n\). Thus, using Young’s inequality and simplifying, we get

$$\begin{aligned} \Delta _g \log \sqrt{\det g} - \frac{1}{2n}|\nabla _g \log \sqrt{\det g}|^2_g \ge -\frac{nr_2^2}{2}\left( |x_0|^2 + |Du(x_0)|^2\right) \ge -C. \end{aligned}$$

\(\square \)

Lemma 3.2

Let u be a \(C^4\) convex solution of (1.5), (1.6), (1.7), or (1.8) on \(B_{2}(0)\subset {\mathbb {R}}^n\). Let

$$\begin{aligned} b= \log V = \log \sqrt{\det g}. \end{aligned}$$

Then b is \(C^2\), and hence, for all nonnegative \(\phi \in C_0^\infty (B_1)\), b satisfies the integral Jacobi inequality, in each case with the respective constant C:

$$\begin{aligned} \int _{B_1}-\langle \nabla _g \phi ,\nabla _g b\rangle _g dv_g\ge c(n)\int _{B_1}\phi |\nabla _gb|^2dv_g-\int _{B_1}C\phi \; dv_g. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \int _{B_r}|\nabla _g b|^2 dv_g \le C(n)\left( \frac{1}{1-r} + C\right) \int _{B_1}d v_g \end{aligned}$$

for \(0< r < 1\).

Proof

Since u is \(C^4\), it follows that \(g = I + (D^2u)^2\) is \(C^2\). Note that \(\det g\) is \(C^2\) since the determinant is a smooth function, and furthermore, at each point we have \(\det g(x) =\prod _{i=1}^n(1 + \lambda _i^2(x)) \ge 1\). From this, it follows that \(\log \sqrt{\det g}\) is well defined and \(C^2\) as a composition of smooth and \(C^2\) functions. Using (3.8) and integration by parts, it follows immediately that

$$\begin{aligned} \int _{B_1}-\langle \nabla _g \phi ,\nabla _g b\rangle _g dv_g = \int _{B_1}\phi \Delta _g b\; dv_g \ge c(n)\int _{B_1}\phi |\nabla _gb|^2dv_g-\int _{B_1}C\phi \; dv_g. \end{aligned}$$

Rearranging, we see that for any cutoff \(\phi \in C_0^\infty (B_1)\),

$$\begin{aligned} \int _{B_1} \phi ^2|\nabla _g b|^2\;dv_g&\le \frac{1}{c(n)}\int _{B_1}\phi ^2 \Delta _g b\; dv_g +\frac{1}{c(n)}\int _{B_1}\phi ^2C\;dv_g \\&= -\frac{1}{c(n)}\int _{B_1}\langle 2\phi \nabla _g \phi ,\nabla _g b\rangle _g dv_g + \frac{1}{c(n)}\int _{B_1}\phi ^2C\;dv_g\\&\le \frac{1}{2}\int _{B_1}\phi ^2|\nabla _g b|^2dv_g + \frac{2}{c(n)^2}\int _{B_1}|\nabla _g \phi |^2 dv_g + \frac{1}{c(n)}\int _{B_1}\phi ^2C\;dv_g. \end{aligned}$$

Let \(0< r< 1\). Choose \(0\le \phi \le 1\) with \(\phi =1\) on \(B_r\) and \(|D\phi |\le \frac{2}{1-r}\) in \(B_1\) to get

$$\begin{aligned} \int _{B_r}|\nabla _g b|^2dv_g&\le \int _{B_1}\phi ^2|\nabla _g b|^2dv_g\\&\le \frac{4}{c(n)^2}\int _{B_1}|\nabla _g\phi |^2dv_g + \frac{2}{c(n)}\int _{B_1} \phi ^2 C\; dv_g\\&\le C(n)\left( \frac{1}{1-r} + C\right) \int _{B_1}dv_g. \end{aligned}$$

\(\square \)

4 Sobolev inequalities and the Lewy–Yuan rotation

We first recall the Lewy-Yuan rotation developed in [14, p. 122] for the convex potential u of the Lagrangian graph \(X = (x,Du(x))\): we rotate it to \(X = ({\bar{x}},D{\bar{u}}({\bar{x}}))\) in a new coordinate system of \({\mathbb {R}}^n\times {\mathbb {R}}^n\cong {\mathbb {C}}^n\) via \({\bar{z}} = e^{-i\frac{\pi }{4}}z\), where \(z = x + iy\) and \({\bar{z}} = {\bar{x}} + i{\bar{y}}\). That is,

$$\begin{aligned} {\left\{ \begin{array}{ll} {\bar{x}} = \frac{\sqrt{2}}{2}x + \frac{\sqrt{2}}{2}Du(x)\\ {\bar{y}} = D{\bar{u}} = -\frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2}Du(x). \end{array}\right. } \end{aligned}$$
(4.1)

We state the following proposition from [23, Prop 3.1] and [14, p. 122].

Proposition 4.1

Let u be a \(C^4\) convex function on \(B_R(0)\subset {\mathbb {R}}^n\). Then the Lagrangian submanifold \(X = (x,Du(x))\subset {\mathbb {R}}^n\times {\mathbb {R}}^n\) can be represented as a gradient graph \(X = ({\bar{x}},D{\bar{u}}({\bar{x}}))\) of the new potential \({\bar{u}}\) in a domain containing a ball of radius

$$\begin{aligned} {\bar{R}}\ge \frac{\sqrt{2}}{2}R \end{aligned}$$
(4.2)

such that in these coordinates the new Hessian satisfies

$$\begin{aligned} -I \le D^2{\bar{u}} \le I. \end{aligned}$$
(4.3)
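
As a one-dimensional numerical illustration of Proposition 4.1 (a hedged sketch; the convex potential \(u(x)=x^4\) is our own choice, not from the text): under (4.1) the map \(x\mapsto {\bar{x}}\) is strictly increasing, so \({\bar{u}}\) is well defined, and the rotated Hessian is \(\frac{d{\bar{y}}}{d{\bar{x}}}=\frac{\lambda -1}{\lambda +1}\in [-1,1)\) for \(\lambda = u''(x)\ge 0\), consistent with (4.3).

```python
import math

# 1-D Lewy-Yuan rotation (4.1) for the convex potential u(x) = x^4
# (an illustrative choice, not from the text).
def u_p(x):  return 4.0 * x ** 3    # Du
def u_pp(x): return 12.0 * x ** 2   # D^2 u = lambda >= 0

s = math.sqrt(2.0) / 2.0
def xbar(x): return s * (x + u_p(x))
def ybar(x): return s * (-x + u_p(x))

xs = [i / 100.0 for i in range(-300, 300)]

# The map x -> xbar is strictly increasing, so ubar(xbar) is well defined.
assert all(xbar(a) < xbar(b) for a, b in zip(xs, xs[1:]))

# The rotated Hessian dybar/dxbar equals (lambda - 1)/(lambda + 1), which
# lies in [-1, 1), matching the bound (4.3).  Check via secant slopes.
for a, b in zip(xs, xs[1:]):
    slope = (ybar(b) - ybar(a)) / (xbar(b) - xbar(a))
    lam = u_pp((a + b) / 2.0)
    assert abs(slope - (lam - 1.0) / (lam + 1.0)) < 5e-2
    assert -1.0 <= slope < 1.0
```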

We define

$$\begin{aligned} {\bar{\Omega }}_r = {\bar{x}}(B_r(0)). \end{aligned}$$

From (4.1), for \({\bar{x}}\in {\bar{\Omega }}_r\), we have that

$$\begin{aligned} |{\bar{x}}|\le r\frac{\sqrt{2}}{2} + ||Du||_{L^\infty (B_r)}\frac{\sqrt{2}}{2}=: \rho (r), \end{aligned}$$
(4.4)

and from (4.2), we have

$$\begin{aligned} \text {dist}({\bar{\Omega }}_1,\partial {\bar{\Omega }}_{2n})\ge \frac{2n-1}{\sqrt{2}}\ge \frac{3}{\sqrt{2}}>2. \end{aligned}$$

From (4.3), it follows that the induced metric on \(X= ({\bar{x}},D{\bar{u}}({\bar{x}}))\) in \({\bar{x}}-\)coordinates is bounded by

$$\begin{aligned} d{\bar{x}}^2 \le g({\bar{x}})\le 2 d{\bar{x}}^2. \end{aligned}$$
(4.5)

Next, we state the following Sobolev inequality, which generalizes Proposition 3.2 of [23]. For the sake of completeness, we include a proof below.

Proposition 4.2

Let u be a \(C^4\) convex function on \(B_{R'}(0)\subset {\mathbb {R}}^n\). Let f be a \(C^2\) positive function on the Lagrangian surface \(X=(x,Du(x))\). Let \(0< r< R < R'\) be such that \(R-r> 2\sqrt{2}\epsilon \). Then

$$\begin{aligned} \left[ \int _{B_r}|(f-{\tilde{f}})^+|^\frac{n}{n-1}dv_g \right] ^\frac{n-1}{n} \le C(n)\left( \frac{\rho ^2}{r \epsilon }\right) ^{(n-1)}\int _{B_R}|\nabla _g(f - {\tilde{f}})^+|dv_g \end{aligned}$$

where \(\rho = \rho (R')\) is as defined in (4.4), and

$$\begin{aligned} {\tilde{f}} = \frac{2}{|B_r|}\int _{B_R(0)} fdx. \end{aligned}$$

We first state and prove a generalization of Lemma 3.2 from [23].

Lemma 4.1

Let \(\Omega _1\subset \Omega _2\subset B_{\rho }\subset {\mathbb {R}}^n\) and \(\epsilon >0\). Suppose that dist\((\Omega _1,\partial \Omega _2)\ge 2\epsilon \), and that A and \(A^c\) are disjoint measurable sets such that \(A\cup A^c = \Omega _2\). Then

$$\begin{aligned} \min \{|A\cap \Omega _1|,|A^c\cap \Omega _1|\}\le C(n)\frac{\rho ^n}{\epsilon ^n}|\partial A\cap \partial A^c|^\frac{n}{n-1}. \end{aligned}$$

Proof

Define the following continuous function on \(\Omega _1\):

$$\begin{aligned} \xi (x) = \frac{|A\cap B_{\epsilon }(x)|}{|B_\epsilon |}. \end{aligned}$$

Case 1. \(\xi (x_0)=\frac{1}{2}\) for some \(x_0\in \Omega _1\). Since dist\((\Omega _1,\partial \Omega _2)\ge 2\epsilon \), we have \(B_\epsilon (x_0)\subset \Omega _2\). From the classical relative isoperimetric inequality for balls [37, Theorem 5.3.2], we have

$$\begin{aligned} \frac{|B_\epsilon |}{2}&= |A\cap B_\epsilon (x_0)|\\&\le C(n)|\partial (A\cap B_\epsilon (x_0))\cap \partial (A^c\cap B_\epsilon (x_0))|^\frac{n}{n-1}\\&\le C(n)|\partial A\cap \partial A^c|^\frac{n}{n-1}. \end{aligned}$$

Hence,

$$\begin{aligned} \min \{|A\cap \Omega _1|,|A^c\cap \Omega _1|\}\le |\Omega _1|\le |B_\rho |=\frac{\rho ^n}{\epsilon ^n}|B_\epsilon |\le C(n)\frac{\rho ^n}{\epsilon ^n}|\partial A\cap \partial A^c|^\frac{n}{n-1}. \end{aligned}$$

Case 2. \(\xi (x)>\frac{1}{2}\) for all \(x\in \Omega _1\). Since \(\Omega _1\) is bounded, we may cover it by \(N\le C(n)\frac{\rho ^n}{\epsilon ^n}\) balls \(B_\epsilon (x_i)\) of radius \(\epsilon \), for some uniform constant C(n). Note that all of these balls lie in \(\Omega _2\) since dist\((\Omega _1,\partial \Omega _2)\ge 2\epsilon \). Thus,

$$\begin{aligned} |A^c\cap B_\epsilon (x_i)|= \min \{|A\cap B_\epsilon (x_i)|,|A^c\cap B_\epsilon (x_i)|\} \le C(n)|\partial A\cap \partial A^c|^\frac{n}{n-1}. \end{aligned}$$

Summing over the cover, we get

$$\begin{aligned} |A^c\cap \Omega _1|\le \sum _{i=1}^N|A^c\cap B_\epsilon (x_i)|\le C(n)\frac{\rho ^n}{\epsilon ^n}|\partial A\cap \partial A^c|^\frac{n}{n-1}. \end{aligned}$$

Case 3. \(\xi (x)<\frac{1}{2}\) for all \(x\in \Omega _1\). Repeating the proof of Case 2, with A in place of \(A^c\), yields the same result. \(\square \)

Proof of Proposition 4.2

Let \(M=||f||_{L^\infty (B_r)}\). If \(M\le \tilde{f}\), then \((f-\tilde{f})^+=0\) on \(B_r\), so the left-hand side is zero and the result follows immediately. We therefore assume \(\tilde{f}< M\). By the Morse-Sard Lemma [38, Lemma 13.15], [39], the level set \(\{x | f(x) = t\}\) is \(C^1\) for almost every \(t \in (\tilde{f},M)\). We first show that for such t,

$$\begin{aligned} |\{x | f(x)> t\}\cap B_r|_g\le C(n)\frac{\rho ^{2n}}{r^n\epsilon ^{n}}|\{x | f(x) = t\}\cap B_R|_g^\frac{n}{n-1}. \end{aligned}$$
(4.6)

Note that \(|\cdot |_g\) denotes the measure with respect to the induced metric g, while \(|\cdot |\) denotes the Euclidean measure.

Let \(t>\tilde{f}\). It must be that

$$\begin{aligned} \frac{|B_r|}{2}> |\{x | f(x) > t\}\cap B_r| \end{aligned}$$

since otherwise

$$\begin{aligned} M = \frac{2}{|B_r|}\int _0^M\frac{|B_r|}{2}dt\le \frac{2}{|B_r|}\int _0^M|\{x|f(x)>t\}\cap B_r|dt \le \frac{2}{|B_r|}\int _{B_R}fdx=\tilde{f} < M. \end{aligned}$$

From this, it follows

$$\begin{aligned} |\{x | f(x) \le t\}\cap B_r| > \frac{|B_r|}{2}. \end{aligned}$$
(4.7)

Let \(A_t = \{{\bar{x}}|f({\bar{x}}) > t\}\cap {\bar{\Omega }}_R\). From Lemma 4.1, we have that

$$\begin{aligned} \min \{|A_t\cap {\bar{\Omega }}_r|,|A_t^c\cap {\bar{\Omega }}_r|\}\le C(n)\frac{\rho ^n}{\epsilon ^n}|\partial A_t\cap \partial A_t^c|^\frac{n}{n-1}. \end{aligned}$$

If \(|A_t\cap {\bar{\Omega }}_r|\le |A_t^c\cap {\bar{\Omega }}_r|\), then

$$\begin{aligned} |A_t\cap {\bar{\Omega }}_r|_{g({\bar{x}})}&\le 2^\frac{n}{2}|A_t\cap {\bar{\Omega }}_r|\\&\le C(n)\frac{\rho ^n}{\epsilon ^n}|\partial A_t\cap \partial A_t^c|^\frac{n}{n-1}_{g({\bar{x}})}. \end{aligned}$$

On the other hand, if \(|A_t\cap {\bar{\Omega }}_r|> |A_t^c\cap {\bar{\Omega }}_r|\), from (4.7), we have

$$\begin{aligned} |A_t^c\cap {\bar{\Omega }}_r|>\frac{|B_r|}{2^{n+1}}, \end{aligned}$$

and so

$$\begin{aligned} |A_t\cap {\bar{\Omega }}_r|\le \frac{\rho ^n}{r^n}|B_r|\le 2^{n+1}\frac{\rho ^n}{r^n}|A_t^c\cap {\bar{\Omega }}_r|. \end{aligned}$$

Therefore

$$\begin{aligned} |A_t\cap {\bar{\Omega }}_r|_{g({\bar{x}})}\le C(n)\frac{\rho ^n}{r^n}|A_t^c\cap {\bar{\Omega }}_r|\le C(n)\frac{\rho ^{2n}}{r^n\epsilon ^n}|\partial A_t\cap \partial A_t^c|^\frac{n}{n-1}_{g({\bar{x}})}. \end{aligned}$$

In either case, we have

$$\begin{aligned} |A_t\cap {\bar{\Omega }}_r|_{g({\bar{x}})}\le C(n)\frac{\rho ^{2n}}{r^n\epsilon ^n}|\partial A_t\cap \partial A_t^c|^\frac{n}{n-1}_{g({\bar{x}})}, \end{aligned}$$

which in our original coordinates is (4.6).

We get

$$\begin{aligned}&\bigg [\int _{B_r}|(f-\tilde{f})^+|^\frac{n}{n-1}dv_g\bigg ]^\frac{n-1}{n}\\&\quad =\left[ \int _0^{M-\tilde{f}}|\{x | f(x) - \tilde{f}> t\}\cap B_r|_g dt^\frac{n}{n-1}\right] ^\frac{n-1}{n} \text { via the layer cake representation }[38, Ex 1.13]\\&\quad \le \int _0^{M-\tilde{f}}|\{ x | f(x) - \tilde{f}> t\}\cap B_r|^\frac{n-1}{n}_g dt \text { via the Hardy-Littlewood-Pólya inequality } [37, (5.3.3)]\\&\quad \le C(n)\left( \frac{\rho ^2}{r\epsilon }\right) ^{n-1}\int _{\tilde{f}}^M|\{x| f(x) = t\}\cap B_R|_g dt \text { via }(4.6)\\&\quad \le C(n)\left( \frac{\rho ^2}{r\epsilon }\right) ^{n-1}\int _{B_R}|\nabla _g(f - \tilde{f})^+|dv_g \text { via the co-area formula } [37, Thm 4.2.1] \end{aligned}$$

which completes the proof. \(\square \)
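
The layer cake representation invoked above can be illustrated on a simple discretized example (a self-contained sketch, unrelated to the specific f in the proof): the integral of a nonnegative function equals the integral in t of the measure of its superlevel sets \(\{f>t\}\).

```python
# Layer cake on [0, 1]: the integral of f equals the integral in t of the
# measure of the superlevel sets {f > t}.  We discretize both sides for
# the sample function f(x) = x^2, whose exact integral is 1/3.
n_grid = 1000
h = 1.0 / n_grid
xs = [(i + 0.5) * h for i in range(n_grid)]
f = [x * x for x in xs]

direct = sum(v * h for v in f)   # midpoint rule for the integral of f

M = max(f)
n_t = 1000
ht = M / n_t
# measure of {f > t} at midpoint levels t, integrated in t
layer = sum(sum(h for v in f if v > (j + 0.5) * ht) * ht
            for j in range(n_t))

assert abs(direct - 1.0 / 3.0) < 1e-3
assert abs(direct - layer) < 1e-2
```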

5 Proof of the main theorems

We now prove Theorem 1.2 from which Theorem 1.1 follows.

Proof of Theorem 1.2

To simplify notation in the remainder of the proof, we assume \(R=2n+2\), so that u is a solution on \(B_{2n+2}\subset {\mathbb {R}}^n\); the estimate in Theorem 1.2 for general R then follows by applying the result to the rescaled solution \(v(x)=\frac{u(\frac{R}{2n+2}x)}{(\frac{R}{2n+2})^2}\). The proof follows in the spirit of [23, Section 3]. Under our assumption \(|\Theta |\ge (n-1)\frac{\pi }{2}\), we have that u is convex. Note that \(C=C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_{2n+2}}(u))^2)\) is the positive constant from (3.8).

  • Step 1. We use the rotated Lagrangian graph \(X = ({\bar{x}},D{\bar{u}}({\bar{x}}))\) via the Lewy-Yuan rotation, as described in Sect. 4. Consider \(b = \log V\) on the manifold \(X= (x,Du(x))\), where V is the volume element in the original coordinates. In the rotated coordinates, \(b({\bar{x}})= \log V({\bar{x}})\) satisfies

    $$\begin{aligned}&\bigg (g^{ij}({\bar{x}})\frac{\partial ^2}{\partial {\bar{x}}_i\partial {\bar{x}}_j}- g^{jp}({\bar{x}})\frac{\partial \Theta (x({\bar{x}}),u(x({\bar{x}})), \frac{\sqrt{2}}{2}{\bar{x}} + \frac{\sqrt{2}}{2}D{\bar{u}}({\bar{x}}))}{\partial {\bar{x}}_q}\frac{\partial ^2 {\bar{u}}({\bar{x}})}{\partial {\bar{x}}_q\partial {\bar{x}}_p} \frac{\partial }{\partial {\bar{x}}_j}\bigg ) b({\bar{x}})\nonumber \\&\quad = \Delta _{g({\bar{x}})}b({\bar{x}}) \ge - C. \end{aligned}$$
    (5.1)

    The elliptic operators in nondivergence and divergence form above are both uniformly elliptic due to (4.3).

    From (4.1), we have

    $$\begin{aligned} {\left\{ \begin{array}{ll} x({\bar{x}}) = \frac{\sqrt{2}}{2}{\bar{x}} - \frac{\sqrt{2}}{2}D{\bar{u}}({\bar{x}})\\ Du(x({\bar{x}})) = \frac{\sqrt{2}}{2} {\bar{x}} + \frac{\sqrt{2}}{2}D{\bar{u}}({\bar{x}}) \end{array}\right. } \end{aligned}$$

    from which it follows that

    $$\begin{aligned}&\frac{\partial \Theta (x({\bar{x}}),u(x({\bar{x}})), \frac{\sqrt{2}}{2}{\bar{x}} + \frac{\sqrt{2}}{2}D{\bar{u}}({\bar{x}}))}{\partial {\bar{x}}_q}\nonumber \\&\quad = \sum _{j=1}^n\Theta _{x_j}\frac{\partial x_j}{\partial {\bar{x}}_q} + \Theta _u\sum _{j=1}^n u_j\frac{\partial x_j}{\partial {\bar{x}}_q} + \sum _{j=1}^n\Theta _{u_j}\frac{\partial }{\partial {\bar{x}}_q}\left( \frac{\sqrt{2}}{2}{\bar{x}}_j + \frac{\sqrt{2}}{2}{\bar{u}}_j\right) \nonumber \\&\quad =\frac{\sqrt{2}}{2}(\Theta _{x_q} + \Theta _u u_q)( 1- {\bar{\lambda }}_q) + \frac{\sqrt{2}}{2}\Theta _{u_q}(1 + {\bar{\lambda }}_q)\nonumber \\&\quad \le \sqrt{2}\nu _1(1 + \textrm{osc}_{B_{2n+2}}(u)). \end{aligned}$$
    (5.2)

    Denote

    $$\begin{aligned} {\tilde{b}} = \frac{2}{|B_1(0)|}\int _{B_{2n}(0)}\log V dx. \end{aligned}$$

    Via the local mean value property of nonhomogeneous subsolutions [34, Theorem 9.20] (see Appendix Theorem 6.1), we get the following, from (5.1) and (5.2):

    $$\begin{aligned}&(b - {\tilde{b}})^+(0) = (b - {\tilde{b}})^+({\bar{0}})\\&\quad \le C(n)\left[ \tilde{C}^{\;n-1}\left( \int _{B_{1/\sqrt{2}}({\bar{0}})}| (b - {\tilde{b}})^+({\bar{x}})|^\frac{n}{n-1}d{\bar{x}} \right) ^\frac{n-1}{n} + C\left( \int _{B_{1/\sqrt{2}}({\bar{0}})} d{\bar{x}}\right) ^\frac{1}{n}\right] \\&\quad \le C(n)\left[ \tilde{C}^{\;n-1}\left( \int _{B_{1/\sqrt{2}}({\bar{0}})}| (b - {\tilde{b}})^+({\bar{x}})|^\frac{n}{n-1}dv_{g({\bar{x}})} \right) ^\frac{n-1}{n} + C\left( \int _{B_{1/\sqrt{2}}({\bar{0}})} dv_{g({\bar{x}})}\right) ^\frac{1}{n}\right] \\&\quad \le C(n)\left[ \tilde{C}^{\;n-1}\left( \int _{B_1(0)}|(b - {\tilde{b}})^ +(x)|^\frac{n}{n-1}dv_{g(x)} \right) ^\frac{n-1}{n} + C\left( \int _{B_1(0)}dv_g\right) ^\frac{1}{n}\right] \end{aligned}$$

    where \(\tilde{C}=(1 + \nu _1 + \nu _1\textrm{osc}_{B_{2n+2}}(u))\) and \(C=C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_{2n+2}}(u))^2)\) is the positive constant from (3.8).

    The above mean value inequality can also be derived using the De Giorgi-Moser iteration [34, Theorem 8.16].

  • Step 2. By Proposition 4.2 with \(\rho = \rho (2n+1)\) and Lemma 3.2, we have

    $$\begin{aligned} b(0)&\le C(n)\tilde{C}^{\;n-1}\rho ^{2(n-1)}\int _{B_{2n}}|\nabla _g(b - {\tilde{b}})^+|dv_g + CC(n)\left( \int _{B_{2n}}Vdx\right) ^\frac{1}{n}\nonumber \\&\quad + C(n)\int _{B_{2n}}\log V dx\nonumber \\&\le C(n)\tilde{C}^{\;n-1}\rho ^{2(n-1)}\left( \int _{B_{2n}}|\nabla _gb |^2dv_g\right) ^\frac{1}{2}\left( \int _{B_{2n}}Vdx\right) ^\frac{1}{2} \nonumber \\&\quad +CC(n)\left( \int _{B_{2n}}Vdx\right) ^\frac{1}{n}+ C(n)\int _{B_{2n}} V dx\nonumber \\&\le C(n)(1 + \tilde{C}^{\;n-1}(1 + C)^\frac{1}{2})\rho ^{2(n-1)}\int _{B_{2n+1}}V dx+ CC(n)\left( \int _{B_{2n+1}}Vdx\right) ^\frac{1}{n}. \end{aligned}$$
    (5.3)
  • Step 3. We bound the volume element using the rotated coordinates. From (4.5), we have

    $$\begin{aligned} Vdx = {\bar{V}}d{\bar{x}} \le 2^\frac{n}{2}d{\bar{x}}. \end{aligned}$$

    Since \({\bar{\Omega }}_{2n+1} = {\bar{x}}(B_{2n+1}(0))\), we get

    $$\begin{aligned} \int _{B_{2n+1}}Vdx= \int _{{\bar{\Omega }}_{2n+1}}{\bar{V}}d{\bar{x}} \le 2^\frac{n}{2}\int _{{\bar{\Omega }}_{2n+1}}d{\bar{x}}\le C(n)\rho ^n. \end{aligned}$$

    Hence, from (5.3), we get

    $$\begin{aligned} b(0)\le C(n)(1 + \tilde{C}^{\;n-1}(1 + C)^\frac{1}{2})\rho ^{3n-2} + CC(n)\rho \le C(n)(1 + \tilde{C}^{\;n-1}(1 + C)^\frac{1}{2} + C)\rho ^{3n-2}.\nonumber \\ \end{aligned}$$
    (5.4)

    By plugging in (4.4), \(\tilde{C}\), and C, and using

    $$\begin{aligned} (a + b)^p \le 2^p(a^p + b^p),\quad \text {for } a,b\ge 0, p>0, \end{aligned}$$

    as well as Young’s inequality, we have

    $$\begin{aligned}&C(n)(1 +\tilde{C}^{\;n-1}(1 + C)^\frac{1}{2} + C)\rho ^{3n-2}\nonumber \\&\quad \le C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_{2n+2}}(u))^{n-1}+ (\textrm{osc}_{B_{2n+2}}(u))^{n}\nonumber \\&\qquad +(\textrm{osc}_{B_{2n+2}}(u))^2)(1 + (\textrm{osc}_{B_{2n+2}}(u))^{3n-2})\nonumber \\&\quad \le C(n,\nu _1,\nu _2)(1 + (\textrm{osc}_{B_{2n+2}}(u))^{4n-2}). \end{aligned}$$
    (5.5)

    By combining (5.4) and (5.5) and exponentiating, we get

    $$\begin{aligned} |D^2 u(0)|\le C_1\exp [C_2(\textrm{osc}_{B_{2n+2}}(u))^{4n-2}] \end{aligned}$$

    where \(C_1\) and \(C_2\) are positive constants depending on \(\nu _1,\nu _2\), and n.

\(\square \)
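
The elementary inequality \((a+b)^p\le 2^p(a^p+b^p)\) used in Step 3 follows from \(a+b\le 2\max (a,b)\) by raising both sides to the p-th power; a quick numerical spot-check (the sample values are our own choice):

```python
import itertools

# Check (a + b)^p <= 2^p (a^p + b^p) for a, b >= 0 and p > 0.
# This follows from a + b <= 2*max(a, b), raised to the p-th power.
vals = [0.0, 0.1, 1.0, 3.7, 10.0]
ps = [0.5, 1.0, 2.0, 3.5]
for a, b in itertools.product(vals, vals):
    for p in ps:
        assert (a + b) ** p <= 2 ** p * (a ** p + b ** p) + 1e-12
```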

Proof of Theorem 1.1

Repeating the above proof, but with the constant C for Eqs. (1.5) and (1.7) from Corollaries 3.1 and 3.3 respectively, we get the desired estimate. Note, in the case of (1.6), we get \(C = \tilde{C}= C(n,t_2,t_3)\), and so (5.4) becomes

$$\begin{aligned} b(0)\le C(n,t_2,t_3)\rho ^{3n-2} \end{aligned}$$

resulting in the estimate

$$\begin{aligned} |D^2 u(0)|\le C_1\exp [C_2(\textrm{osc}_{B_{2n+2}}(u))^{3n-2}] \end{aligned}$$

where \(C_1\) and \(C_2\) depend on \(n,t_2,t_3\). \(\square \)

Remark 5.1

We prove analyticity of a \(C^0\) viscosity solution within its domain by outlining a modification of the approach in [23, Section 4]. Note, we obtain smooth approximations via [40, Theorem 4], [41]. Let

$$\begin{aligned} F(x,u,Du,D^2u) = G(D^2u) - \Theta (x,u,Du) = \sum _{j=1}^n \arctan \lambda _j - \Theta (x,u,Du). \end{aligned}$$

We wish to apply Evans-Krylov-Safonov theory ([34, Theorem 17.15]), which requires \(F(x,z,p,r)\) to be concave in \((z,p,r)\) and the following structure conditions to hold:

$$\begin{aligned}&0< \ell |\xi |^2 \le F_{ij}(x,z,p,r)\xi _i\xi _j\le \Lambda |\xi |^2,\\&|F_p|,|F_z|,|F_{rx}|,|F_{px}|,|F_{zx}|\le \mu \ell ,\\&|F_x|,|F_{xx}|\le \mu \ell (1 + |p| + |r|), \end{aligned}$$

for all nonzero \(\xi \in {\mathbb {R}}^n\), where \(\ell \) is a nonincreasing function of |z|, and \(\Lambda \) and \(\mu \) are nondecreasing functions of |z|. Note, for our operator F defined above, \(F_{rx}=0\).

We have that \(G(D^2u)\) is concave, and by our assumption, \(\Theta (x,z,p)\) is partially convex in p. By additionally assuming partial convexity of \(\Theta \) in z, we get that F is concave in \((z,p,r)\), as desired. Note that for equations (1.5), (1.6), and (1.7), this condition is naturally satisfied.
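
The concavity of \(G(D^2u)=\sum _j\arctan \lambda _j\) on convex potentials rests on the concavity of \(t\mapsto \arctan t\) for \(t\ge 0\), since its second derivative \(-2t/(1+t^2)^2\) is nonpositive there. A quick midpoint-concavity check (the sample points are our own choice, not part of the argument):

```python
import math

# Midpoint concavity of arctan on [0, infinity):
# arctan((a + b)/2) >= (arctan(a) + arctan(b)) / 2 for a, b >= 0.
pts = [0.0, 0.2, 0.7, 1.0, 2.5, 10.0, 50.0]
for a in pts:
    for b in pts:
        mid = 0.5 * (a + b)
        assert math.atan(mid) >= 0.5 * (math.atan(a) + math.atan(b)) - 1e-12
```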

Theorems 1.1 and 1.2 give us that

$$\begin{aligned} 0 < \frac{1}{1 + [C(\textrm{osc}_{B_R}(u))]^2}|\xi |^2 \le F_{ij}(x,z,p,r)\xi _i\xi _j\le |\xi |^2. \end{aligned}$$

Taking \(\ell = \frac{1}{1 + C^2}\) and \(\mu = \frac{\nu _1 + \nu _2}{\ell }\), we see that the remaining conditions are satisfied. Hence, we achieve a \(C^{2,\alpha }\) bound. Applying classical elliptic theory [34, Lemma 17.16] and [42, p. 202] to solutions of (1.5), (1.6), (1.7), we obtain the analyticity of u.