Abstract
We prove longtime existence and estimates for smooth solutions to a fully nonlinear Lagrangian parabolic equation with locally \(C^{1,1}\) initial data \(u_0\) satisfying either (1) \(-(1+\eta ) I_n\le D^2u_0 \le (1+\eta )I_n\) for some positive dimensional constant \(\eta \), (2) \(u_0\) is weakly convex everywhere, or (3) \(u_0\) verifies a large supercritical Lagrangian phase condition.
1 Introduction
In this paper, we settle the longtime existence problem of smooth solutions with “almost convex” initial data for the fully nonlinear Lagrangian parabolic equation
$$\partial _t u=\sum _{i=1}^n \arctan \lambda _i, \qquad \qquad (1)$$
where the \(\lambda _i\)’s are the eigenvalues of \(D^2u\), and we also obtain global estimates for the longtime solutions. With “uniformly convex” initial data for (1), the longtime existence and estimate problem of smooth solutions has been solved in [1]. With general continuous initial data for (1), the longtime existence and uniqueness problem of continuous viscosity solutions was obtained in [2].
When a family of smooth entire Lagrangian graphs in \(\mathbb{C }^n\) evolve by the mean curvature flow their potentials \(u:\mathbb{R }^n\times [0, T)\rightarrow \mathbb{R }\) will evolve by (1), up to a time dependent constant. Conversely, if \(u(x, t)\) solves (1), then the graphs \((x, Du(x, t))\) in \(\mathbb{R }^{2n}\) will evolve by the mean curvature flow up to tangential diffeomorphisms.
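This correspondence rests on a classical fact from the Lagrangian mean curvature flow literature, sketched here for orientation (the notation \(\Theta \) for the Lagrangian angle is ours):

```latex
% For the Lagrangian graph L_t = \{(x, Du(x,t))\} \subset \mathbb{C}^n, with
% induced metric g = I_n + (D^2u)^2, the Lagrangian angle is
\Theta(x, t) \;=\; \sum_{i=1}^{n} \arctan \lambda_i\bigl(D^2u(x, t)\bigr),
% and the mean curvature vector of L_t is H = J \nabla_g \Theta, where J is
% the complex structure on \mathbb{C}^n \cong \mathbb{R}^{2n}.  Hence the
% graphs move by mean curvature, up to tangential diffeomorphisms, exactly
% when the potential satisfies
\partial_t u \;=\; \Theta \;=\; \sum_{i=1}^{n} \arctan \lambda_i ,
% up to a time-dependent constant in u.
```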
In the hypersurface case, the global and local behavior of mean curvature flow of Lipschitz continuous initial graphs has been studied in [3, 4]; the regularity for mean curvature flow of continuous graphs has been obtained in [5]. In the high codimensional case, the mean curvature flow of Lipschitz continuous initial graphs with small Lipschitz norm has been studied in [7].
The main result of the paper is the following
Theorem 1.1
There exists a small positive dimensional constant \(\eta =\eta (n)\) such that if \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) is a \(C^{1,1}\) function satisfying
$$-(1+\eta ) I_n\le D^2u_0 \le (1+\eta )I_n, \qquad \qquad (2)$$
then (1) has a unique longtime smooth solution \(u(x, t)\) for all \(t>0\) with initial condition \(u_0\) such that the following estimates hold:
(i) \(- \sqrt{3} I_n\le D^2u\le \sqrt{3} I_n\) for all \(t>0\).
(ii) \(\sup _{x\in \mathbb{R }^n}|D^l u(x,t)|^2 \le C_{l}/t^{l-2} \) for all \(l\ge 3,\,t>0\) and some \(C_{l}\) depending only on \(l\).
(iii) \(Du(x, t)\) is uniformly Hölder continuous in time at \(t=0\) with Hölder exponent \(1/2\).
After a coordinate rotation described in Sect. 2 [see (15)], the “convex” condition \(-I_n< D^2u < I_n\) corresponds to a convex potential, in which case the right hand side of (1) is a concave operator. This however is not the case under the weaker “almost convex” assumption (2) in Theorem 1.1. This is interesting from a PDE standpoint, as Krylov’s theory for parabolic equations applies to concave operators [8]. Another subtle point is that the “almost convex” initial condition (2) is not preserved under the flow (it is preserved only to an extent sufficient for further estimates of solutions), as the weak maximum principle fails in this setting.
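Concretely, the concavity on the convex range comes from the concavity of \(\arctan \) on \([0, \infty )\); a one-line check:

```latex
\frac{d^2}{d\lambda^2}\arctan\lambda \;=\; \frac{-2\lambda}{(1+\lambda^2)^2}
\;\le\; 0 \qquad \text{for } \lambda \ge 0,
% so the spectral function F(D^2u) = \sum_i \arctan\lambda_i(D^2u) is concave
% on the cone of nonnegative definite Hessians; for indefinite D^2u it is
% neither concave nor convex, which is the situation under assumption (2).
```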
In [1] Theorem 1.1 was proved for any negative constant \(\eta \), in which case it was shown that the “uniformly convex” condition (2) is preserved for all \(t>0\). In particular, a priori estimates were established for any solution to (1) with \(D^2u\) so bounded. The estimates combined maximum principle arguments for tensors and a Bernstein theorem for entire special Lagrangians [15] via a blow up argument. The estimates depended on the negativity of \(\eta \) and could not be applied to the more general case of Theorem 1.1, even for \(\eta =0\). We overcome this through recent estimates in [9] for solutions to (1) satisfying certain Hessian conditions (cf. Theorem 2.1, which is Theorem 1.1 in [9]).
A particular case of Theorem 1.1 (similarly for Theorems 1.2 and 1.3) is where \(Du_0:\mathbb{R }^n \rightarrow \mathbb{R }^n\) is a lift of a map \(f:\mathbb{T }^n \rightarrow \mathbb{T }^n\) and \(\mathbb{T }^n\) is the standard \(n\)-dimensional flat torus. In this “periodic case”, our estimates together with the results in [10] imply that the graphs \((x, Du(x, t))\) immediately become smooth after initial time and converge smoothly to a flat plane in \(\mathbb{R }^{2n}\) (cf. [1, 10–12]).
In light of the above coordinate rotation, we apply Theorem 1.1 directly to the “borderline convex” case in the following.
Theorem 1.2
Let \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) be a locally \(C^{1,1}\) weakly convex function, namely \(D^2u_0 \ge 0\). Then (1) has a unique longtime smooth and weakly convex solution \(u(x, t)\) with initial condition \(u_0\) such that
(i) either \(D^2u(x, t) >0\) for all \(x\) and \(t>0\), or there exist coordinates \(x_1,\ldots ,x_n\) on \(\mathbb{R }^n\) in which \(u(x, t)=w(x_{k},\ldots ,x_n,t)\) on \(\mathbb{R }^n\times [0, \infty )\), where \(k>1\) and \(w\) is convex with respect to \(x_k,\ldots ,x_n\) for all \(t>0\),
(ii) \(\sup _{x\in \mathbb{R }^n}|\nabla ^l_t A(x,t)|^2 \le C_{l}/t^{l+1} \) for all \(l\ge 0,\,t>0\) and some constant \(C_{l}\) depending only on \(l\), where \(\nabla ^l_t A(x,t)\) is the \(l\)th covariant derivative of the second fundamental form of the embedding \(F_t:\mathbb{R }^n\rightarrow \mathbb{R }^{2n}\) given by \(x\rightarrow (x, Du(x, t))\),
(iii) the Euclidean distance from each point of \(F_t(\mathbb{R }^n)\) to \(F_0(\mathbb{R }^n)\) in \(\mathbb{R }^{2n}\) is Hölder continuous in time at \(t=0\) with Hölder exponent \(1/2\).
We also prove the following
Theorem 1.3
Let \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) be a locally \(C^{1,1}\) function satisfying
$$\sum _{i=1}^n \arctan \lambda _i (D^2u_0)\ge (n-1)\frac{\pi }{2}. \qquad \qquad (3)$$
Then (1) has a unique longtime smooth solution \(u(x, t)\) with initial condition \(u_0\) such that (3) is satisfied with either strict inequality for all \(t>0\), or equality for all \(t\ge 0\), in which case \(u_0\) must be quadratic. Moreover, \(u(x, t)\) also satisfies (ii) and (iii) in Theorem 1.2.
Remark 1.1
Note that if \(u_0\) satisfies (3) then \(u_0\) must be convex.
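Assuming (3) is the large supercritical phase bound \(\sum _{i=1}^n \arctan \lambda _i(D^2u_0)\ge (n-1)\frac{\pi }{2}\) (our reading of the hypothesis, consistent with the abstract), Remark 1.1 follows from a one-line estimate:

```latex
% If some eigenvalue satisfied \lambda_j < 0 at a point, then
% \arctan\lambda_j < 0 together with \arctan\lambda_i < \pi/2 for each
% i \neq j would give
\sum_{i=1}^{n}\arctan\lambda_i(D^2u_0) \;<\; 0 + (n-1)\frac{\pi}{2}\,,
% contradicting (3).  Hence \lambda_i(D^2u_0) \ge 0 everywhere, i.e. u_0 is
% convex.
```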
As discussed above, after a coordinate rotation we may assume \(D^2 u_0\) in Theorem 1.2 satisfies the strict inequality \(-I_n\le D^2u_0 < I_n\) in which case Theorem 1.1 immediately provides a longtime solution \(u(x, t)\) to (1). In order for this to correspond to the desired longtime solution in the original coordinates we must first show \(-I_n\le D^2u < I_n\) is preserved for all \(t>0\) and this is the first main difficulty in proving (i) in Theorem 1.2. This in particular will rule out the possibility of \(\lambda _i (D^2u(x, t))=1\) for some \((x, t)\) which would correspond to a non-graphical (vertical) Lagrangian in the original coordinates. The second main difficulty comes from showing that either \(-I_n < D^2u\) for all \(t>0\), or the solution splits off a quadratic term as in Lemma 4.2 and this will give (i) in Theorem 1.2 after rotating back to the original coordinates.
As for Theorem 1.3, by Remark 1.1, if \(u_0\) satisfies (3) then it is automatically convex, hence Theorem 1.2 guarantees a longtime convex solution \(u(x, t)\) to (1). The difficulty in showing that (3) is preserved for all \(t>0\) comes from the fact that a maximum principle may not directly apply, as \(u_0\) is only locally \(C^{1,1}\) with possibly unbounded Hessian. Performing a similar coordinate rotation through a small angle \(\sigma _0\), we can assume that \(-K(\sigma _0)I_n< D^2u_0 < 1/K(\sigma _0) I_n\) for some constant \(K(\sigma _0)\) which approaches zero as \(\sigma _0\rightarrow 0\), and that \(u_0\) satisfies
We then observe that the set of real symmetric \(n\times n\) matrices satisfying (4) is a convex set \(S\), and we approximate \(u_0\) by convolution with the standard heat kernel, which has the effect of averaging elements of \(S\), thus producing smooth approximations with bounded derivatives (of order \(2\) and higher) and Hessians belonging to \(S\). We perform a further \(\pi /4\) coordinate rotation, after which the smooth approximated initial data satisfies (ii) in Theorem 1.1 and
Using Theorem 1.1, we then apply a maximum principle argument to show that (5) is preserved starting from each approximate initial datum.
The outline of the rest of the paper is as follows. In Sect. 2 we provide preliminary results which will be used in the proofs of the theorems. In particular, we state the a priori estimates in [9]. Theorem 1.1 is proved in Sect. 3 and Theorems 1.2 and 1.3 are proved in Sect. 4.
2 Preliminaries
In this section we establish some preliminary results.
Proposition 2.1
([1], Proposition 5.1) Suppose \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) is a smooth function such that \(\sup |D^l u_0| <\infty \) for each \(l\ge 2\). Then (1) has a smooth solution \(u(x, t)\) on \(\mathbb{R }^n \times [0, T)\) for some \(T>0\) such that \(\sup _{x\in \mathbb{R }^n}|D^l u(x, t)|<\infty \) for every \(l\ge 2\) and \(t\in [0, T)\).
Remark 2.1
In Proposition 5.1 in [1] it was shown that the non-parametric mean curvature flow equation
where \(g^{ij}(f)\) is the matrix inverse of \(g_{ij}(f):= \delta _{ij}+\sum _{a=1}^n f^a_i f^a_j\), has a short time solution \(f(x, t)\) provided \(u_0\) satisfies the conditions in Proposition 2.1. As explained in [1] (see Lemma 2.1), this in fact provides a short time solution \(u(x, t)\) to (1) as in Proposition 2.1 such that \(f(x, t)=Du(x, t)\) and the proof of Proposition 5.1 in [1] can also be adapted directly to (1) to establish Proposition 2.1. For convenience of the reader and completeness, we provide the details of this argument below.
Proof
Let \(C^{k+\alpha , k/2+\alpha /2}\) denote the standard parabolic Hölder spaces on \(\mathbb{R }^n\times [0, 1)\). Define
and define a map \( F: \mathcal{B }\rightarrow C^{\alpha ,\frac{{\alpha }}{2}} \) by
$$F(v)=\partial _t v-\Theta (v),$$
where \(\Theta (v):=\sum _{i=1}^n \arctan \lambda _i(D^2(u_0+v))\). Then the differential \(DF_{v}\) at any \(v\in \mathcal{B }\) is given by
$$DF_{v}(h)=\partial _t h-\sum _{i, j=1}^n g^{ij}(u_0+v)\, h_{ij},$$
where \(g^{ij}(u_0+v)\) is the matrix inverse of \(I_n+ [D^2(u_0+v)]^2\).
Claim 1 \(DF_{v}\) is a bijection from \(T_{v} \mathcal{B }\) onto \(T_{F(v)} C^{ \alpha ,\frac{{\alpha }}{2}}\).
This follows from the general theory of linear parabolic equations on \(\mathbb{R }^n\times [0, 1)\) with Hölder continuous coefficients.
Now define functions \(f_1\) and \(f_2\) on \(\mathbb{R }^n\) recursively by
Then we see that \(\sup _{\mathbb{R }^n}|D^l f_i|<\infty \) for every \(i\) and \(l\), and if we let \(w_0=F(v_0)\) where \(v_0=t f_1 + t^2 f_2/2\), then a straightforward computation gives
for \(l\le 1\) and
for every \(l, m\ge 0\). In particular \(w_0 \in C^{\alpha ,\frac{{\alpha }}{2}}\). By the inverse function theorem there exists \(\epsilon >0\) such that \(\Vert w-w_0\Vert _{\alpha , \frac{\alpha }{2}}<\epsilon \) implies \(F(v)=w\) for some \(v\in \mathcal{B }\).
For any \(0<\tau <1\), define \(w_\tau \) by
Claim 2 \(\Vert w_\tau -w_0\Vert _{\alpha , \frac{\alpha }{2}}<\epsilon \) for sufficiently small \(\tau >0\).
By (8) and (9), it follows that \(w_{\tau }\in C^{ \alpha ,\frac{{\alpha }}{2}}\) and \(\Vert w_{\tau }\Vert _{\alpha ,\frac{{\alpha }}{2}}\) is bounded uniformly and independently of \(\tau \). From this and the fact that \(w_\tau -w_0\) converges uniformly to \(0\) in \(C^0\) as \(\tau \rightarrow 0\), the claim follows.
Hence by the inverse function theorem we have \(F(v)=w_{\tau }\) for some \(0<\tau <1\) and \(v\in C^{2+\alpha , 1+\alpha /2}\). In particular \(u:=u_0+v\) solves (1) on \(\mathbb{R }^n\times [0, \tau ]\). The higher regularity of \(u\) can be shown as follows. For any \(x_0\in \mathbb{R }^n\), consider the function
$$\tilde{u}(x, t):=u(x+x_0, t)-u(x_0, 0)-Du(x_0, 0)\cdot x.$$
Then \(\tilde{u}(x, t)\in \mathcal{B }\) and still solves (1) on \(\mathbb{R }^n\times [0, \tau ]\). Writing \(\sum _{i=1}^n \arctan \lambda _i(D^2\tilde{u})=\int _0^1 \frac{d}{ds}\bigl [\sum _{i=1}^n \arctan \lambda _i(sD^2\tilde{u})\bigr ]\, ds\), we can write (1) as
$$\partial _t \tilde{u}=\sum _{i, j=1}^n\left( \int _0^1 g^{ij}(s\tilde{u})\, ds\right) \tilde{u}_{ij}. \qquad \qquad (11)$$
Notice that \(D\tilde{u}(0,0)=\tilde{u}(0,0)=0\) and that \(D^2\tilde{u}(x,t)=D^2u(x+x_0,t)\) is uniformly bounded on \(\mathbb{R }^n\times [0, \tau ]\). Now if we let \(B(1)\) be the unit ball in \(\mathbb{R }^n\), it follows from (1) that \(\tilde{u}(x, t)\), and thus \(D\tilde{u}(x,t)\), is uniformly bounded on \(B(1)\times [0, \tau ]\), giving \(\tilde{u}(x, t)\in C^{2+ \alpha ,1+\frac{1}{2}+ \frac{{\alpha }}{2}}(B(1)\times [0, \tau ])\). In particular, by freezing the symbol \(a^{ij}:=\int _0^1 g^{ij}(s\tilde{u})ds\) in (11), we can view (11) as a linear parabolic equation for \(\tilde{u}\) with coefficients uniformly bounded in \(C^{\alpha ,\frac{1}{2}+ \frac{{\alpha }}{2}}(B(1)\times [0, \tau ])\). Now applying the local parabolic Schauder estimates (Theorem 5, Chapter 3, [6]) and a standard bootstrapping argument to (11), we may then bound the \(C^{l+ \alpha }\) norm of \(\tilde{u}(x, t)\) on \(B(1)\) by a constant depending only on \(t\) and \(l\).
Now the fact that \(v\) is smooth with bounded derivatives as in the proposition follows by repeating the above argument for any \(x_0 \in \mathbb{R }^n\). \(\square \)
Lemma 2.1
([1], Lemma 5.1) Let \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) be a \(C^{1,1}\) function satisfying \(-C_0 I_n \le D^2 u_0 \le C_0 I_n\) for some constant \(C_0 >0\). Then there exists a sequence of smooth functions \(u^k_0:\mathbb{R }^n\rightarrow \mathbb{R }\) such that
(i) \(u^k_0 \rightarrow u_0\) in \(C^{1+\alpha }(B_R(0))\) for any \(R\) and \(0<\alpha <1 \),
(ii) \(-C_0 I_n \le D^2 u^k_0 \le C_0 I_n\) for every \(k\),
(iii) \(\sup _{x\in \mathbb{R }^n}|D^l u^k_0| <\infty \) for every \(l \ge 2\) and \(k\).
Proof
Let
$$u^k_0(x):=\int _{\mathbb{R }^n} K(x, y, 1/k)\, u_0(y)\, dy,$$
where \(K(x, y, t)\) is the standard heat kernel on \(\mathbb{R }^n\times (0, \infty )\). Condition (i) and the smoothness of \(u^k_0\) are easily verified. By assumption, \(D_y^2 u_0(y)\) is a well defined and uniformly bounded function almost everywhere on \(\mathbb{R }^n\), and we may write
$$D^l u^k_0(x)=\int _{\mathbb{R }^n} D^{l-2}_x K(x, y, 1/k)\, D^2_y u_0(y)\, dy$$
for every \(l\ge 2\), from which it is easy to see that conditions (ii) and (iii) are also true. \(\square \)
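The averaging mechanism in this proof can be illustrated discretely. The sketch below (all grid parameters and the sample function are illustrative choices, not taken from the text) smooths a \(C^{1,1}\) function whose second derivative lies in \([0, 1]\) by a normalized Gaussian, and checks that the discrete second differences stay in the same interval, the analogue of conclusion (ii):

```python
import math

# Smoothing by a (discretized, normalized) heat kernel is a weighted average,
# so pointwise bounds on second differences -- the discrete analogue of the
# Hessian bounds -- are preserved.

h = 0.01                                    # grid spacing
xs = [i * h for i in range(-1000, 1001)]    # grid on [-10, 10]

def u0(x):
    # C^{1,1} function whose second derivative is 0 on [-1,1] and 1 outside,
    # so that 0 <= u0'' <= 1 almost everywhere
    return 0.5 * max(abs(x) - 1.0, 0.0) ** 2

u = [u0(x) for x in xs]

# normalized Gaussian weights for kernel time t = 1/k with k = 25
t, m = 1.0 / 25.0, 200                      # m = kernel half-width in grid points
w = [math.exp(-((j * h) ** 2) / (4.0 * t)) for j in range(-m, m + 1)]
total = sum(w)
w = [wi / total for wi in w]

# smoothed function on the interior of the grid (discrete convolution)
uk = [sum(w[j + m] * u[i + j] for j in range(-m, m + 1))
      for i in range(m, len(u) - m)]

# second differences of the smoothed function stay within the original bounds
d2 = [(uk[i + 1] - 2.0 * uk[i] + uk[i - 1]) / h ** 2 for i in range(1, len(uk) - 1)]
lo, hi = min(d2), max(d2)
```

The key point is that the discrete second difference of a convolution equals the convolution of the second differences, so the bounds pass through the average unchanged.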
Theorem 2.1
([9], Theorem 1.1) Let \(u(x, t)\) be a smooth solution to (1) in \(Q_1 \subset \mathbb{R }^n\times (-\infty , 0]\). When \(n\ge 4\) we also assume that at least one of the following conditions holds in \(Q_1\)
(i) \(\sum _{i=1}^{n} \arctan \lambda _i \ge (n-2)\frac{\pi }{2},\)
(ii) \(3+\lambda _i^2+2\lambda _i\lambda _j\ge 0\) for all \(1\le i, j \le n\).
Then we have
Here \(Q_r(x, t)=B_r(x)\times [t-r^2, t] \subset \mathbb{R }^n\times (-\infty , 0]\), and \(Q_r:=Q_r(0, 0)\). We refer to [9] for further notation and definitions used in Theorem 2.1.
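Condition (ii) is what makes Theorem 2.1 applicable under the \(\sqrt{3}\) Hessian bound appearing in Proposition 2.2 below: if \(|\lambda _i|, |\lambda _j| \le \sqrt{3}\), then \(3+\lambda _i^2+2\lambda _i\lambda _j \ge (\sqrt{3}-|\lambda _i|)^2 \ge 0\). A brute-force numerical check of this inequality (the grid resolution is an arbitrary choice):

```python
import math

# Check: |l_i| <= sqrt(3) and |l_j| <= sqrt(3) imply 3 + l_i^2 + 2*l_i*l_j >= 0.
# The worst case l_j = -sqrt(3)*sign(l_i) gives (sqrt(3) - |l_i|)^2 >= 0,
# with equality exactly at |l_i| = sqrt(3).
s3 = math.sqrt(3.0)
grid = [-s3 + 2.0 * s3 * k / 400.0 for k in range(401)]  # 401 points in [-sqrt(3), sqrt(3)]

min_val = min(3.0 + li * li + 2.0 * li * lj for li in grid for lj in grid)
```

The minimum over the grid is (numerically) zero, showing the bound \(\sqrt{3}\) is sharp for condition (ii).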
Lemma 2.2
Suppose \(u_0:\mathbb{R }^n \rightarrow \mathbb{R }\) is a \(C^{1,1}\) function satisfying
$$-I_n\le D^2u_0 \le I_n, \qquad \qquad (14)$$
and that \(u(x, t)\in C^{\infty }(\mathbb{R }^{n}\times (0, T))\bigcap C^{0}(\mathbb{R }^{n}\times [0, T))\) is a solution to (1) with \(u(x, 0)=u_0\). Then (14) is preserved for all \(t\in [0, T)\).
Proof
We begin by establishing the following special case
Claim If \(u(x, t)\) is a smooth solution of (1) on \(\mathbb{R }^{n}\times [0, T)\) satisfying
(i) \(\sup _{\mathbb{R }^n}|D^lu(x, t)|<\infty \) for every \(t\in [0, T)\) and \(l\ge 2,\)
(ii) \(\left( -1+\delta \right) I_n \le D^{2}u\left( x,0\right) \le \left(1-\delta \right) I_n\) for some \(\delta >0,\)
then \(u(x, t)\) satisfies \(\left(-1+\delta \right) I_n \le D^{2}u\left( x,t\right) \le \left(1-\delta \right) I_n\) for each \(t\in (0, T)\).
This was established in Lemma 4.1 in [1]; we provide a different proof here. We begin by describing a change of coordinates which we will use at various places throughout the paper. Let \(z^j=x^j+\sqrt{-1}y^j\) and \(w^j=r^j+\sqrt{-1}s^j\) (\(j=1,\ldots ,n\)) be two holomorphic coordinate systems on \(\mathbb{C }^n\) related by
for some constant \(\sigma \). Then as described in [16] (see p.1356), if \(L=\{(x, Du_0(x)) | x\in \mathbb{R }^n\}\) in \(\mathbb{C }^n\) is represented as \(L=\{(r, Dv_0(r)) | r\in \mathbb{R }^n\}\) in the coordinates \(w^j\), then \(v_0\) satisfies
Now by (ii) in the claim, as described in [15] we may choose \(\sigma =-\pi /4\) and obtain such a new graphical representation of \(L\) and the new potential function will satisfy
The claim will be established once we show (17) is preserved for any \(\delta >0\). Differentiating (1) twice with respect to any coordinate direction \(r_k\) yields
where the subscripts of \(v\) denote partial differentiation. Now fix any unit vector \(V\in \mathbb{R }^n\) and note that \(V^T D^2v(r, t) V=v_{VV}(r, t)\), where \(v_{VV}(r, t)\) is the second derivative of \(v(r, t)\) in the direction \(V\). It follows from (18) that the function
satisfies
at any \((r, t)\) in \(\mathbb{R }^{n}\times [0, t_1)\) where \(t_1\) is the maximal time for which
holds in \(\mathbb{R }^{n}\times [0, t_1)\). Note that the existence and positivity of \(t_1\) are guaranteed by assumption (i) in the claim. Now by our assumption on the derivatives of \(u\), we have that \(g_{ij}(r, t)\) is uniformly equivalent to the Euclidean metric on \(\mathbb{R }^n\) for \(t \in [0,t^{\prime }]\) with \(t^{\prime } < t_1\), while \(g_{ij}(r, t)\) and \(f(r, t)\) are also continuous on \(\mathbb{R }^{n}\times [0, t_1)\). The maximum principle (Theorem 9, p. 43, [6]) then implies \(f(r, t)\le 0\), and thus \(D^{2}v(r,t) \le \frac{2-\delta }{\delta }I_n\) for all \(t\in [0, t_1)\). Now if \(t_1<T\), then by continuity there exists some \(t_2>t_1\) for which (19) holds for all \(t\in [0, t_2)\), contradicting the maximality of \(t_1\). We must therefore have \(t_1=T\), from which the upper bound in the claim follows. Applying the above argument to \(-u\), we obtain the lower bound in the claim as well.
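The Hessian bound \(\frac{2-\delta }{\delta }\) obtained above comes from the rotation shifting each phase angle by \(\pi /4\). Assuming, as in [15], that an eigenvalue \(\lambda <1\) transforms to \(\tan (\arctan \lambda +\pi /4)=(1+\lambda )/(1-\lambda )\), the endpoints \(\pm (1-\delta )\) map exactly to the bounds used in the maximum principle argument. A numerical sanity check (\(\delta =0.1\) is an arbitrary test value):

```python
import math

def rotate(lam):
    # eigenvalue after a pi/4 phase rotation, valid for lam < 1
    return math.tan(math.atan(lam) + math.pi / 4.0)

delta = 0.1
upper = rotate(1.0 - delta)     # should equal (2 - delta)/delta
lower = rotate(-1.0 + delta)    # should equal delta/(2 - delta)
```

In particular the rotated Hessian is pinched between two positive multiples of the identity, which is what makes the rotated potential uniformly convex.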
Now let \(u_0\) and \(u(x, t)\) be as in the lemma, and let \(u_0^k\) be a sequence as in Lemma 2.1. Fix some sequence \(\delta _k \rightarrow 0\) and consider the sequence \(v_0^k = (1-\delta _k) u_0^k\). Then by Proposition 2.1 there exists a positive sequence \(T_k\) such that for each \(k\) there is a smooth solution \(v_k(x, t)\) of (1) on \(\mathbb{R }^{n}\times [0, T_k)\) with initial condition \(v_0^k\) and \(\sup _{x\in \mathbb{R }^n}|D^l v_k(x, t)| <\infty \) for every \(l \ge 2\) and \(t\in [0, T_k)\). For each \(k\), assume that \(T_k\) is the maximal time on which the solution \(v_k\) exists. By the above claim we also have \(\left( -1+\delta _k\right) I_n \le D^{2}v_k\left( x,t\right) \le \left(1-\delta _k\right)I_n \) for each \(t\in [0, T_k)\), and the main theorem in [1] then implies \(\sup _{x\in \mathbb{R }^n}|D^l v_k(x,t)|^2 \le C_{l, k}/t^{l-2}\) for all \(l\ge 3\) and some constant \(C_{l, k}\) depending only on \(l\) and \(\delta _k\); it follows that \(T_k =\infty \). In fact, the local estimates in Theorem 2.1 can be used to remove the dependence on \(\delta _k\) in these bounds. Indeed, fix some \(k,\,T\in (0,\infty )\) and \(x^{\prime }\in \mathbb{R }^n\) and let
$$w_k(y, s):=\frac{1}{T}\left[ v_k(x^{\prime }+\sqrt{T}\, y,\; T+Ts)-v_k(x^{\prime }, T)\right] -\frac{1}{\sqrt{T}}\, Dv_k(x^{\prime }, T)\cdot y.$$
Then we have \(w_k(0, 0)=Dw_k(0,0)=0\), and \(w_k(y, s)\) solves (1) on \(\mathbb{R }^n\times [-1,0]\) and satisfies \(-(1-\delta _k)I_n \le D^2 w_k \le (1-\delta _k)I_n\) for all \((y, s)\in \mathbb{R }^n\times [-1,0]\). Applying Theorem 2.1 then gives
where \(B_{\sqrt{T}/2}(x^{\prime })\) is the ball of radius \(\sqrt{T}/2\) centered at \(x^{\prime }\in \mathbb{R }^n\) and \(C\) is some constant independent of \(k\). Noting that \(x^{\prime }\in \mathbb{R }^n\) and \(T\in (0,\infty )\) were arbitrary, we obtain
$$\sup _{x\in \mathbb{R }^n}|D^3 v_k(x,t)|^2 \le \frac{C}{t} \qquad \qquad (22)$$
for all \(t\in (0, \infty )\), and it follows from a scaling argument, described in the proof of Lemma 5.2 in [1], that for every \(t\in (0, \infty )\) and \(l\ge 3\) we have
$$\sup _{x\in \mathbb{R }^n}|D^l v_k(x,t)|^2 \le \frac{C_l}{t^{l-2}} \qquad \qquad (23)$$
for some constant \(C_l\) depending only on \(l\).
From (23) we conclude that the \(v_k(x, t)\)’s have a subsequence converging to a function \(v(x, t)\) on \(\mathbb{R }^n\times [0, \infty )\), where the convergence is smooth on compact subsets of \(\mathbb{R }^n\times (0, \infty )\). In particular, by construction \(v(x, t)\) is smooth and solves (1) on \(\mathbb{R }^n\times (0, \infty )\), satisfies (14) for every \(t\in [0, \infty )\), and \(v(x, 0)=u_0(x)\). Moreover, by (1) we have \(|\partial _t v (x, t)|\le \frac{n\pi }{2}\) for all \((x, t) \in \mathbb{R }^n\times [0, \infty )\), from which we conclude that \(v\in C^{0}(\mathbb{R }^{n}\times [0, \infty ))\).
It now follows by the uniqueness result in [2] that \(u(x, t)=v(x, t)\) for all \(t\in [0, T)\), and thus \(u(x, t)\) also satisfies (14) for every \(t\in [0, T)\). This completes the proof of the lemma. \(\square \)
We now apply the above results to prove the following proposition.
Proposition 2.2
There exists a dimensional constant \(\eta =\eta (n)>0\) such that for every \(T>0\) the following holds: if \(u(x, t)\) is a smooth solution to (1) on \(\mathbb{R }^n\times [0, T)\) such that \(-(1+\eta )I_n \le D^2u \le (1+\eta )I_n\) at \(t=0\) and \(\sup _{x\in \mathbb{R }^n}|D^l u| <\infty \) for each \(t\in [0, T)\) and \(l\ge 2\), then \(u(x, t)\) satisfies \(-\sqrt{3}I_n \le D^2u \le \sqrt{3}I_n\) for all \(t\in [0, T)\).
Proof
Suppose otherwise. Then there exists a sequence \(\eta _k \rightarrow 0\) and a sequence of smooth \(u_k(x, t)\) each solving (1) on \(\mathbb{R }^n \times [0, T_k)\) where \(T_k >0\), and each satisfying
(a) \(-(1+\eta _k)I_n \le D^2u_k \le (1+\eta _k)I_n\) at \(t=0\),
(b) \(\sup _{x\in \mathbb{R }^n}\left|D^l u_k \right| <\infty \) for each \(t\in [0, T_k)\) and \(l\ge 2\),
(c) \(\left|\lambda _i(D^2 u_k(x_k, t_k))\right| > \displaystyle \sqrt{3}\) for some \((x_k, t_k) \in \mathbb{R }^n\times [0, T_k)\) and some \(i\).
Then by (a) and (b) it is not hard to show that there exists a sequence \(R_k\) with \(R^2_k \in (0, T_k)\) satisfying
(A) \(-\sqrt{3}I_n \le D^2u_k \le \sqrt{3}I_n\) for all \(t\in [0, R_k^2)\),
(B) \(\left|\lambda _i(D^2 u_k(x_k, t_k))\right| = \sqrt{3/2}\) for some \((x_k, t_k) \in \mathbb{R }^n\times [0, R_k^2)\) and some \(i\).
Now consider the sequence
each solving (1) on \(\mathbb{R }^n\times [-1, 0]\) and each satisfying
(i) \(-(1+\eta _k)I_n \le D^2v_k \le (1+\eta _k)I_n\) at \(t=-1\) and \(-\sqrt{3}I_n \le D^2v_k \le \sqrt{3}I_n\) for all \(t\in (-1, 0]\),
(ii) \(\displaystyle \left|\lambda _i (D^2 v_k)(0, 0)\right|= \sqrt{3/2}\) for some \(i\),
(iii) \(v_k(0, 0)=Dv_k(0, 0)=0\).
Then as assumption (ii) in Theorem 2.1 is satisfied, we may apply the estimates there, as in the proof of Lemma 2.2, to show that the \(v_k(x, t)\)’s have a subsequence converging to a function \(v(x, t)\in C^{\infty }(\mathbb{R }^n\times (-1, 0)) \bigcap C^{0}(\mathbb{R }^n\times [-1, 0])\) such that in addition \(\sup _{x\in \mathbb{R }^n} |D^3 v(x, t)|\) is bounded independently of \(t\in [-1/2, 0]\). Moreover, by construction \(v(x, t)\) solves (1), satisfies \(-I_n \le D^2v(x, -1) \le I_n\) in the \(L_{\infty }\) sense, and \(|\lambda _i(D^2v(0,0))| =\sqrt{3/2}\) for some \(i\). Together, these facts contradict Lemma 2.2. \(\square \)
Remark 2.2
Noting that (13) in Theorem 2.1 holds in general when \(n\le 3\), we observe that when \(n\le 3\) we can replace \(\sqrt{3}\) in Proposition 2.2 with any larger constant \(C>0\) (for a slightly larger \(\eta \)).
3 Proof of Theorem 1.1
Proof of Theorem 1.1
Let \(u_0\) be as in Theorem 1.1, where \(\eta >0\) is as in Proposition 2.2. Let \(u^k_0\) be a sequence of approximations as in Lemma 2.1. By Proposition 2.1 we have smooth short time solutions \(u_k(x, t)\) to (1) with initial condition \(u_k(x, 0)=u_0^k(x)\). Moreover, by Proposition 2.2 we have \(-\sqrt{3}I_n \le D^2u_k \le \sqrt{3}I_n\) for all \((x, t)\). We will let \(\mathbb{R }^n\times [0, T_k)\) be the maximal space-time domain on which \(u_k(x, t)\) is defined.
Then by a rescaling argument and applying Theorem 2.1 as in the proof of Lemma 2.2, we can show that for each \(k\), \(T_k=\infty \) and \(u_k(x, t)\) satisfies the estimates in (23) for all \(l\ge 3\) and \(t>0\). In particular, arguing as in the last two paragraphs of the proof of Lemma 2.2, some subsequence of the \(u_k(x, t)\)’s converges to a function \(u(x, t)\) solving (1) on \(\mathbb{R }^n\times [0, \infty )\) and satisfying (i) and (ii) in the conclusions of Theorem 1.1.
We now show that \(Du(x, t)\) satisfies conclusion (iii) in Theorem 1.1. By differentiating (1) once in space and using (i) and the estimates in (ii) for \(l=3\), we may estimate as follows for any \(x\in \mathbb{R }^n\) and \(t>t^{\prime } >0\):
$$|Du(x, t)-Du(x, t^{\prime })|\le \int _{t^{\prime }}^{t} |\partial _s Du(x, s)|\, ds \le \int _{t^{\prime }}^{t} \frac{C}{\sqrt{s}}\, ds \le 2C\sqrt{t-t^{\prime }}$$
for some constant \(C\) independent of \(x,\,t\) and \(t^{\prime }\). The uniqueness of \(u(x,t)\) follows from the uniqueness result in [2]. \(\square \)
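Conclusion (iii) thus reduces, via the bound \(|\partial _t Du|\le C/\sqrt{t}\) coming from (i) and the \(l=3\) case of (ii), to the elementary inequality \(\int _{t^{\prime }}^{t} s^{-1/2}\, ds = 2(\sqrt{t}-\sqrt{t^{\prime }}) \le 2\sqrt{t-t^{\prime }}\), which produces the Hölder exponent \(1/2\). A quick numerical verification over sample times (the values are arbitrary):

```python
import math

def gap(t, tp):
    # exact value of the integral of 1/sqrt(s) over [tp, t]
    return 2.0 * (math.sqrt(t) - math.sqrt(tp))

# check 2*(sqrt(t) - sqrt(t')) <= 2*sqrt(t - t') on a few sample pairs
pairs = [(1.0, 0.5), (2.0, 1.99), (10.0, 0.01), (0.3, 0.2), (5.0, 4.999999)]
ok = all(gap(t, tp) <= 2.0 * math.sqrt(t - tp) + 1e-12 for t, tp in pairs)
```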
4 Proofs of Theorem 1.2 and Theorem 1.3
We begin by establishing the following lemmas.
Lemma 4.1
Let \(v(r, t)\) be a solution to (1) as in Theorem 1.1 and assume \(-I_n\le D^2 v(r, t)\le I_n\) for all \(r\) and \(t\). If \(\lambda _1(r^{\prime }, t^{\prime })=1\) at some point with \(t^{\prime }>0\), then \(\lambda _1(r, t)=1\) for all \((r,t) \in \mathbb{R }^n \times (0, t^{\prime }]\). Similarly, if \(\lambda _1(r^{\prime }, t^{\prime })=-1\) at some point with \(t^{\prime }>0\), then \(\lambda _1(r, t)=-1\) for all \((r,t) \in \mathbb{R }^n \times (0, t^{\prime }]\).
Proof
Consider a solution \(v\) to the elliptic equation corresponding to (1):
$$\sum _{i=1}^n \arctan \lambda _i (D^2 v)=C, \qquad \qquad (25)$$
where \(C\) is some constant. By twice differentiating (25) and the characteristic equation \(\det (D^2v -\lambda _i I_n )=0\), a formula for \(\sum _{a,b=1}^n g^{ab}\partial ^2_{ab} \ln \sqrt{1+\lambda _i^2}\) at any point where \(\lambda _i\) is a non-repeated eigenvalue of \(D^2v\) was obtained in [14, Lemma 2.1]. By essentially the same calculations, we may differentiate the parabolic equation (1) and the characteristic equation \(\det (D^2v -\lambda _i I_n )=0\) twice in space, and also differentiate the characteristic equation once in time, to obtain the exact same formula for \((\sum _{a,b=1}^n g^{ab}\partial ^2_{ab}-\partial _t )\ln \sqrt{1+\lambda _i^2}\) at any point where \(\lambda _i\) is a non-repeated eigenvalue of \(D^2v\). Namely, if \(\lambda _i\) is a non-repeated eigenvalue of \(D^2v\) at a point \((r_0, t_0)\), then the following holds at \((r_0, t_0)\) (after making a linear change of coordinates on \(\mathbb{R }^n\) so that \(D^2v(r_0, t_0)\) is diagonal):
where \(h_{\alpha \beta \gamma }(r, t)\) is the second fundamental form of the embedding \(F(r, t)=(r, Dv(r, t))\) of \(\mathbb{R }^n\) into \(\mathbb{R }^{2n}\).
Claim If \(\lambda _1(r^{\prime }, t^{\prime })=1\) at some point where \(t^{\prime }>0\), then \(\lambda _1(r, t)=1\) for all \((r,t) \in \mathbb{R }^n \times (0, t^{\prime }]\).
We will always assume that \(1\ge \lambda _1\ge \lambda _{2}\ge \cdots \ge \lambda _n \ge -1\), where the upper and lower bounds are given by Lemma 2.2. Now suppose that \(1\) is an eigenvalue of multiplicity \(k\) and consider the function \(f=\sum _{i=1}^k \ln \sqrt{1+\lambda _{ i}^2}\). Then \(f\) is a smooth function in a space-time neighborhood \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) of \((r^{\prime },t^{\prime })\) (see [13]) and attains its maximum value in \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) at \((r^{\prime },t^{\prime })\). Now we want to compute the evolution of \(f\) in \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\). We illustrate how to do this first at a point where \(\lambda _1,\ldots ,\lambda _k\) are all distinct. In this case we may apply (26) separately to each term in \(f\), and after some computation we obtain
where
where \(I\) corresponds to summing the second and third terms on the right hand side of (26) for \(i=1,\ldots ,k\), and \(I\!I\) corresponds to summing the fourth term on the right hand side of (26) for \(i=1,\ldots ,k\). Our derivation above only applies at a point where \(\lambda _1,\ldots ,\lambda _k\) are all distinct, and thus cannot be used directly to calculate the evolution of \(f=\sum _{i=1}^k \ln \sqrt{1+\lambda _{ i}^2}\) at \((r^{\prime }, t^{\prime })\). We now remove this assumption on the distinctness of the eigenvalues by the approximation argument below.
Consider the function
Then for sufficiently large \(m\), in some space-time neighborhood of \((r^{\prime }, t^{\prime })\), which we still denote by \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\), the eigenvalues \(\lambda _{i, m}\) of \(D^2 v_m\) will be between \(-1\) and \(1\), while the \(k\) largest eigenvalues will be non-repeated. Thus the function \(\ln \sqrt{1+\lambda _{i,m}^2}\) is smooth in \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) for each \(i\). On the other hand, by (1) and the definition of \(v_{m}\) we have
where
Note that \(w_m\) approaches zero smoothly and uniformly on compact subsets of \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) as \(m \rightarrow \infty \). By (29), the derivation of (26) referenced above, and (28), we have
in \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\), where \(I_m\) is obtained by replacing \(\lambda _{\alpha }\) and \(\lambda _{\beta }\) in \(I\) by \(\lambda _{\alpha ,m}\) and \(\lambda _{\beta ,m}\) respectively, and \(I\!I_m\) is obtained similarly. We have also used the fact that \(I_m\) and \(I\!I_m\) are nonnegative. Letting \(m \rightarrow \infty \), we conclude that \((\sum _{a,b=1}^n g^{ab}\partial ^2_{ab}-\partial _t ) f \ge 0\) on \(U\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\), and thus \(f=k \ln \sqrt{2}\) in \(U\times (t^{\prime }-\epsilon , t^{\prime }]\) by the strong maximum principle (Theorem 1, p. 34, [6]).
Now for any \((r^{\prime \prime }, t^{\prime \prime })\in \mathbb{R }^n\times (0, t^{\prime }]\), let \(\gamma (s),\,s\in [0, 1]\), be a line segment in space-time such that \(\gamma (0)=(r^{\prime },t^{\prime })\) and \(\gamma (1)=(r^{\prime \prime },t^{\prime \prime })\). Let \(A\) be the set of \(\bar{s} \in [0, 1]\) for which \(\lambda _1(\gamma (s))=1\) for all \(s\in [0, \bar{s}]\). Then the above argument shows that \(A\) is open and non-empty. Moreover, \(A\) is clearly closed by continuity, and we conclude that \(A=[0, 1]\); in particular, \(\lambda _1 (r^{\prime \prime },t^{\prime \prime })=1\). This establishes the claim and thus the first statement in the conclusion of the lemma.
By considering the solution \(-v(r, t)\) to (1), we likewise conclude the second statement in the conclusion of the lemma is true. \(\square \)
Lemma 4.2
Let \(v(r, t)\) be a solution to (1) as in Theorem 1.1 and assume that \(-I_n<D^2 v(r, t)\le I_n\) for all \(r\in \mathbb{R }^n\) and \(t\in [0, \infty )\). Then either \(D^2 v(r, t)< I_n\) for all \(r\) and \(t>0\), or there exist coordinates \(r_1,\ldots ,r_n\) on \(\mathbb{R }^n\) in which we have \(v(r, t)=\frac{r_1^2}{2}+\cdots + \frac{r_k^2}{2}+w(r_{k+1},\ldots ,r_n,t)\) on \(\mathbb{R }^n\times [0, \infty )\), where \(-I_n<D^2 w(r, t)< I_n\) for all \(r,\,t> 0\) and \(k>1\).
Proof
We begin by establishing the following claims.
Claim 1 If \(v_{11}(r^{\prime }, t^{\prime })=1\) at some point \((r^{\prime }, t^{\prime })\) with \(t^{\prime }>0\), then \(v_{11}=1\) on \(\mathbb{R }^n\times (0, t^{\prime }]\).
By a rotation of coordinates on \(\mathbb{R }^n\), we may assume that \(D^2 v(r^{\prime },t^{\prime })\) is diagonal. Since \(D^2v>-I_n\), there exists some space-time neighborhood \(U_1\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) of \((r^{\prime }, t^{\prime })\) in which \(-(1-\delta )I_n \le D^2 v(r, t)\le I_n\) for some \(\epsilon , \delta >0\). By (16) it follows that for some choice of \(\sigma \in (0, \pi /4)\), we may change coordinates on \(\mathbb{C }^n\) (from \(w^j\) to \(z^j\)) using (15) so that the local family of Lagrangian graphs \(L=\{(r, D v(r, t)) \,|\, (r, t)\in U_1\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\}\) is represented in the new coordinates as \(L=\{(x, D u(x, t)) \,|\, (x, t)\in U_2\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\}\) for some space-time neighborhood \(U_2\times (t^{\prime }-\epsilon , t^{\prime }+\epsilon )\) in which \(0 \le D^2 u(x, t)\le M I_n\), with \(u_{11}(x^{\prime }, t^{\prime })=M\) at some interior point \((x^{\prime }, t^{\prime })\), with respect to the coordinates \(x_1,\ldots ,x_n\) given by (15). It follows from (18) and the strong maximum principle (Theorem 1, p. 34, [6]) that \(u_{11}=M\) in \(U_2\times [t^{\prime }-\epsilon , t^{\prime }]\), and thus \(v_{11}=1\) in \(U_1\times [t^{\prime }-\epsilon , t^{\prime }]\).
Now fix any \((r^{\prime \prime }, t^{\prime \prime })\in \mathbb{R}^n\times (0, t^{\prime }]\) and let \(\gamma (s)\), \(s\in [0, 1]\), be a line segment in space-time such that \(\gamma (0)=(r^{\prime },t^{\prime })\) and \(\gamma (1)=(r^{\prime \prime },t^{\prime \prime })\). Let \(A\) be the set of \(\bar{s} \in [0, 1]\) for which \(v_{11}(\gamma (s))=1\) for all \(s\in [0, \bar{s}]\). The above argument shows that \(A\) is open and non-empty. Moreover, \(A\) is clearly closed by continuity, and we then conclude that \(A=[0, 1]\) and, in particular, \(v_{11}(r^{\prime \prime }, t^{\prime \prime })=1\). This establishes the claim.
Claim 2 In Claim 1, we in fact have \(v(r, t)=\frac{r_1^2}{2}+w(r_{2},...,r_n,t)\) on \(\mathbb{R }^n\times [0, \infty )\).
Integrating \(v_{11}\) twice with respect to \(r_1\) gives
on \(\mathbb{R }^n\times (0, t^{\prime }]\) for some functions \(w_1\) and \(w_2\). It follows that \(w_1\) must in fact be linear with respect to \(r_2,...,r_n\) as otherwise \(D^2 v\) would be unbounded on \(\mathbb{R }^n\times (0, t^{\prime }]\) thus contradicting our assumption on that \(-I_n<D^2 v(r, t)\le I_n\) for all \(r\) and \(t\). Our assumption that \(D^2v(r^{\prime },t^{\prime })\) is diagonal then implies that \(w_1\) must in fact be constant in space. Finally, as the right hand side of (1) is uniformly bounded in absolute value from which we further conclude that is in fact constant in time as well and thus after a possible translation of the coordinate \(r_1\) we have
on \(\mathbb{R }^n\times (0, t^{\prime }]\) for some function \(w_3\). Now observe that up to the addition of a time dependent constant, \(w_3(r_{2},...,r_n,t)\) solves (1) on \(\mathbb{R }^{n-1}\times [0, t^{\prime }]\) and by Theorem 1.1 this extends to a smooth longtime solution which we still denote as \(w_3(r_{2},...,r_n,t)\). In particular \(\frac{r_1^2}{2}+w_3(r_{2},...,r_n,t)\) is also a longtime solution to (1) and it follows from the uniqueness result in [2] that the above representation of \(v(r, t)\) holds on \(\mathbb{R }^n\times [0, \infty )\).
The lemma follows by iterating the arguments above starting with the function \(w\) in Claim 2. \(\square \)
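The splitting in Claim 2 can be sanity-checked numerically. The sketch below (a hypothetical illustration; it assumes the standard formula \(\Theta (A)=\sum _i \arctan \lambda _i\) for the Lagrangian phase) verifies that a Hessian of the block-diagonal form \(\mathrm{diag}(1, D^2 w)\) contributes a constant phase \(\arctan 1 = \pi /4\) from the flat factor, which is why \(w_3\) solves (1) in one fewer variable up to a time-dependent constant:

```python
import numpy as np

def theta(A):
    # Lagrangian phase: sum of the arctangents of the eigenvalues of A
    return float(np.sum(np.arctan(np.linalg.eigvalsh(A))))

rng = np.random.default_rng(0)
n = 4

# Hessian of w in the remaining n-1 variables, scaled so -I < D^2 w < I
B = rng.standard_normal((n - 1, n - 1))
B = (B + B.T) / 2
B /= 1.5 * np.abs(np.linalg.eigvalsh(B)).max()

# Hessian of v(r) = r_1^2/2 + w(r_2, ..., r_n): block diagonal with a 1
A = np.zeros((n, n))
A[0, 0] = 1.0
A[1:, 1:] = B

# The flat factor splits off the constant phase arctan(1) = pi/4
assert np.isclose(theta(A), np.pi / 4 + theta(B))
```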
Proof of Theorem 1.2
Now let \(u_0\) be a \(C^{1,1}\) locally weakly convex function as in Theorem 1.2. Using \(\sigma =\pi /4\) in (15) to change coordinates on \(\mathbb{C}^n\) and noting (16) (see also [15]), we represent the Lagrangian graph \(L=\{(x, Du_0(x)) \,|\, x\in \mathbb{R}^n\}\) in the coordinates \(z^j\) as \(L=\{(r, Dv_0(r)) \,|\, r\in \mathbb{R}^n\}\) in the coordinates \(w^j\), where \(v_0\) satisfies \(-I_n\le D^2 v_0< I_n\). Let \(v(r, t)\) be the longtime solution to (1) with initial condition \(v_0\) given by Theorem 1.1. Then from Lemmas 2.2 and 4.1 we have \(-I_n\le D^2 v(r,t)< I_n\) for all \(r,\,t\ge 0\). Moreover, applying Lemma 4.2 to \(-v(r, t)\) we further conclude that either \(-I_n< D^2 v(r,t)< I_n\) for all \(r\in \mathbb{R}^n\) and \(t> 0\) or
on \(\mathbb{R}^n\times [0, T)\), where \(k>0\) and \(-I_n<D^2 w(r, t)< I_n\) for all \(r\in \mathbb{R}^n,\,t\ge 0\). Let \(L_t=\{(r, Dv(r, t)) \,|\, r\in \mathbb{R}^n\}\), \(t\in [0, \infty )\), be the corresponding family of Lagrangian graphs in \(\mathbb{C}^n\). Then by (16), \(L_t\) will correspond to a family of Lagrangian graphs \(\{(x, Du(x, t)) \,|\, x\in \mathbb{R}^n\}\), \(t\in [0, \infty )\), such that \(u(x, t)\) is a longtime solution to (1) satisfying (i) in Theorem 1.2. Now note that as \(v(r, t)\) satisfies (ii) and (iii) in Theorem 1.1, it also satisfies (ii) and (iii) in Theorem 1.2. It follows that \(u(x, t)\) must then also satisfy (ii) and (iii) in Theorem 1.2. The uniqueness of \(u(x,t)\) follows from the uniqueness result in [2]. \(\square \)
Proof of Theorem 1.3
Let \(u_0\) be a locally \(C^{1,1}\) function satisfying (3). Then \(u_0\) is automatically convex, and by Theorem 1.2 there exists a longtime convex solution \(u(x, t)\) to (1) with initial condition \(u_0\). In particular, note that \(u(x,t)\) satisfies (ii) and (iii) in Theorem 1.2. It will be convenient here to define the operator
\[\Theta (A)=\sum _{i=1}^{n} \arctan \lambda _i\]
on symmetric real \(n\times n\) matrices \(A\), where the \(\lambda _i\)'s are the eigenvalues of \(A\). A direct computation shows that as \(u(x, t)\) solves (1), \(\Theta (D^2 u(x, t))\) evolves according to
\[\partial _t\, \Theta (D^2 u)=g^{ij}\partial _i\partial _j\, \Theta (D^2 u),\]
where \(g^{ij}\) denotes the inverse of the induced metric \(g_{ij}=\delta _{ij}+u_{ik}u_{kj}\).
We would like to use (32) and the maximum principle (Theorem 9, p. 43, [6]) to conclude that (3) is thus preserved for all \(t>0\). One difficulty here is that \(\Theta (D^2 u(x, t))\) is not necessarily continuous at \(t=0\). Another difficulty is that \(D^2 u(x, t)\) is not necessarily bounded above, and thus the symbol \(g^{ij}\) is not necessarily bounded below (by a positive constant) on \(\mathbb{R}^n\) for \(t>0\). To overcome these difficulties we will transform and approximate our solution \(u(x,t)\) through the following sequence of steps.
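The degeneration behind the second difficulty can be illustrated numerically. The sketch below is a hypothetical aside; it assumes the standard facts for Lagrangian graphs that \(\Theta (A)=\sum _i \arctan \lambda _i\) and that the symbol is \(g^{ij}=\big ((I+(D^2u)^2)^{-1}\big )_{ij}\):

```python
import numpy as np

def theta(A):
    # Lagrangian phase: sum of the arctangents of the eigenvalues of A
    return float(np.sum(np.arctan(np.linalg.eigvalsh(A))))

n = 3
# Theta(I_n) = n * arctan(1) = n * pi / 4
assert np.isclose(theta(np.eye(n)), n * np.pi / 4)

# Symbol g^{ij} = ((I + (D^2u)^2)^{-1})_{ij}: as an eigenvalue of D^2u
# grows, the smallest eigenvalue of the symbol tends to 0, so a uniform
# positive lower bound fails without an upper bound on the Hessian.
for lam in [1.0, 1e3, 1e6]:
    D2u = np.diag([lam, 0.0, 0.0])
    g_inv = np.linalg.inv(np.eye(n) + D2u @ D2u)
    assert np.isclose(np.linalg.eigvalsh(g_inv).min(), 1.0 / (1.0 + lam ** 2))
```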
Step 1 (small rotation) We begin by using (15), with \(\sigma =\sigma _0\in (0, \pi /2)\) to be chosen in a moment, to change coordinates on \(\mathbb{C }^n\) and represent the Lagrangian graphs \(L_t=\{(x, Du(x,t)) | x\in \mathbb{R }^n\}\) in the coordinates \(z^j\) as \(L_t=\{(r, Dv(r,t)) | r\in \mathbb{R }^n\}\) in the coordinates \(w^j\) for some family \(v(r,t)\) with \((r,t)\in \mathbb{R }^n\times [0, \infty )\). By (16) we have
and by (16) and the convexity of \(u(x,t)\) we have
\[-K(\sigma _0)\, I_n\le D^2 v(r, t)\le \frac{1}{K(\sigma _0)}\, I_n\]
for all \(r,\,t\), where \(K(\sigma _0)\rightarrow 0\) as \(\sigma _0 \rightarrow 0\).
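The effect of the rotation on Hessian bounds can be checked numerically. The sketch below assumes the standard fact (used throughout this section, but treated as an assumption here) that the rotation (15) acts on Hessian eigenvalues by subtracting \(\sigma \) from their arctangents, \(\lambda \mapsto \tan (\arctan \lambda -\sigma )\), so that for convex slopes \(\lambda \ge 0\) the rotated eigenvalues land in \([-\tan \sigma _0,\, \cot \sigma _0)\), with \(K(\sigma _0)=\tan \sigma _0\) playing the role above:

```python
import numpy as np

def rotate_slope(lam, sigma):
    # Under the rotation, eigenvalue angles subtract sigma (assumed fact):
    # lam -> tan(arctan(lam) - sigma)
    return np.tan(np.arctan(lam) - sigma)

sigma0 = 0.1
K = np.tan(sigma0)  # plays the role of K(sigma_0)

# Eigenvalues of a convex Hessian (lam >= 0), including a huge one
lams = np.array([0.0, 0.5, 3.0, 1e6])
rot = rotate_slope(lams, sigma0)

# Rotated eigenvalues land in [-K, 1/K), and K -> 0 as sigma_0 -> 0
assert np.all(rot >= -K - 1e-12) and np.all(rot < 1 / K)

# At sigma = pi/4 convex slopes land in [-1, 1), as used for Theorem 1.2
rot45 = rotate_slope(lams, np.pi / 4)
assert np.all(rot45 >= -1.0) and np.all(rot45 < 1.0)
```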
Step 2 (approximation) Let \(v^k_0\) be the sequence of approximations of \(v_0=v(r,0)\) constructed in Lemma 2.1. Then for each \(k\) we have \(-K(\sigma _0)\, I_n \le D^2 v^k_0 \le \frac{1}{K(\sigma _0)}\, I_n\) by (34). Moreover, \(\sup _{r\in \mathbb{R}^n}|D^l v^k_0(r)| <\infty \) for all \(l \ge 3\). Now we show that
\[\Theta (D^2 v^k_0(r))\ge (n-1)\frac{\pi }{2}-n\sigma _0 \quad \mathrm{on}\quad \mathbb{R}^n\]
is satisfied for all \(k\).
Fix \(r \in \mathbb{R }^n\) and \(k\). By (12) we have
Approximating by Riemann sums, we can find a double sequence \(\{p_{ij}\} \subset \mathbb{R}^n\) and a sequence \(\{j_i\} \subset \mathbb{Z}^+\) for which
On the other hand,
and we may then further assume
as \(i\rightarrow \infty \). By (36) we then have
where \(A_{ij}= K\left(r, p_{ij}, \frac{1}{k}\right) / (i^n B_i)\) and in particular \(\sum _{j=1}^{j_i} A_{ij} =1\) while \(A_{ij} \ge 0\) for all \(i,j\). Now since \((n-1)\frac{\pi }{2}-n\sigma _0 > (n-2)\frac{\pi }{2}\) by our choice of \(\sigma _0\), the results in [16] assert that the set of symmetric \(n\times n\) matrices \(A\) for which \(\Theta \ge (n-1)\frac{\pi }{2}-n\sigma _0\) is a convex set \(S\) in the space of real \(n\times n\) symmetric matrices. This, (37) and the fact that \(D^2 v_0(p_{ij}) \in S\) for all \(i, j\) imply \(D^2 v^k_0 (r) \in S\). Thus (35) holds for each \(k\).
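The convexity of the superlevel set \(S\) used above can be spot-checked numerically. The sketch below is a hypothetical illustration in dimension \(n=2\), with a threshold \(c\) slightly above \((n-2)\frac{\pi }{2}=0\); it samples random members of \(S=\{\Theta \ge c\}\) and verifies that their convex combinations remain in \(S\), consistent with the convexity result cited as [16]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
# Hypothetical supercritical threshold, slightly above (n-2)*pi/2 = 0
c = (n - 2) * np.pi / 2 + 0.3

def theta(A):
    # Lagrangian phase: sum of the arctangents of the eigenvalues of A
    return float(np.sum(np.arctan(np.linalg.eigvalsh(A))))

def sample_in_S():
    # Rejection-sample a random symmetric matrix with Theta >= c
    while True:
        B = 2.0 * rng.standard_normal((n, n))
        B = (B + B.T) / 2
        if theta(B) >= c:
            return B

# Spot-check convexity of S = {Theta >= c}: convex combinations of
# members of S should remain in S
for _ in range(200):
    A1, A2 = sample_in_S(), sample_in_S()
    t = rng.random()
    assert theta(t * A1 + (1 - t) * A2) >= c - 1e-9
```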
Step 3 (\(\pi /4\) rotation) Now we use (15) as in Step 1, but with \(\sigma =\pi /4\), to obtain from \(v(r, t)\) and the \(v^k_0(r)\)'s a corresponding family \(w(p,t)\) and sequence \(w^k_0(p)\). In particular, \(w(p,t)\) is a longtime solution to (1), and the \(w^k_0\)'s will satisfy (2) in Theorem 1.1 provided \(\sigma _0>0\) is chosen sufficiently small; we will assume such a choice of \(\sigma _0\) has been made. They will also satisfy
by (16). Thus for each \(k\), Theorem 1.1 gives a longtime solution \(w^k(p,t)\) to (1) with initial condition \(w^k_0\) satisfying \(\sup _{p\in \mathbb{R}^n}|D^l w^k(p,t)| <\infty \) for all \(l\ge 2\) and \(t\ge 0\). It follows from (38), (32), (16) and the weak maximum principle (Theorem 9, p. 43, [6]) that
for all \((p,t)\). Now using Theorem 1.1 and arguing as in the beginning of the proof of Theorem 1.1, we see that some subsequence of the \(w^k(p,t)\)'s converges smoothly and uniformly on compact subsets of \(\mathbb{R}^n\times (0, \infty )\) to a smooth limit solution to (1) on \(\mathbb{R}^n\times (0, \infty )\). By the uniqueness result in [2] and the definition of \(w^k_0\), we see this limit solution is in fact the solution \(w(p,t)\). In particular, \(w(p,t)\) must satisfy (39) for all \((p,t)\).
Rotating back to the original coordinates, we conclude from the last statement above that \(u(x,t)\) must satisfy (3) for all \(t\ge 0\). Thus either (3) holds with strict inequality for all \(t>0\), or there exists some \((x^{\prime },t^{\prime })\in \mathbb{R}^n\times (0, \infty )\) at which equality holds in (3), in which case (32) and the strong maximum principle (Theorem 1, p. 34, [6]) give
\[\Theta (D^2 u(x, t))= (n-1)\frac{\pi }{2}\]
in \(\mathbb{R}^n\times (0, t^{\prime }]\). In this case, integrating (1) in \(t\) and noting the continuity of \(u(x,t)\) in \(t\) (for all \(t\ge 0\)), we obtain
\[u(x, t)=u_0(x)+(n-1)\frac{\pi }{2}\, t\]
for all \(t\in [0, t^{\prime }]\), and thus for all \(t\in [0, \infty )\) by the uniqueness result in [2]. In particular, \(D^2 u(x,t)\) satisfies (40) for all \(t\ge 0\). On the other hand, \(u(x,t^{\prime })\) is smooth in \(x\) and it follows that \(u_0(x)=u(x,0)\) is a smooth convex solution to the special Lagrangian equation \(\Theta (D^2 u_0(x))= (n-1)\frac{\pi }{2}\) on \(\mathbb{R }^n\) and is thus quadratic by the Bernstein theorem in [15]. This concludes the proof of Theorem 1.3. \(\square \)
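As a concrete instance of the Bernstein-type conclusion, the quadratic \(u_0(x)=x_1^2+x_2^2+\frac{2}{3}x_3^2\) (a hypothetical example, not taken from the paper) solves the special Lagrangian equation \(\Theta (D^2 u_0)=(n-1)\frac{\pi }{2}\) in dimension \(n=3\), since \(2\arctan 2+\arctan \frac{4}{3}=\pi \) exactly. A one-line numerical check:

```python
import numpy as np

# D^2 u0 = diag(2, 2, 4/3) for the quadratic u0(x) = x1^2 + x2^2 + (2/3) x3^2
lams = np.array([2.0, 2.0, 4.0 / 3.0])
n = 3

# Theta(D^2 u0) = 2*arctan(2) + arctan(4/3) = pi = (n-1)*pi/2
assert np.isclose(np.sum(np.arctan(lams)), (n - 1) * np.pi / 2)
```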
References
Chau, A., Chen, J., He, W.: Lagrangian mean curvature flow for entire Lipschitz graphs. Calc. Var. Partial Differ. Eq. 44(1–2), 199–220 (2012)
Chen, J., Pang, C.: Uniqueness of viscosity solutions of a geometric fully nonlinear parabolic equation. C. R. Math. Acad. Sci. Paris 347(17–18), 1031–1034 (2009)
Ecker, K., Huisken, G.: Mean curvature evolution of entire graphs. Ann. Math. (2) 130(3), 453–471 (1989)
Ecker, K., Huisken, G.: Interior estimates for hypersurfaces moving by mean curvature. Invent. Math. 105(3), 547–569 (1991)
Evans, L.C., Spruck, J.: Motion of level sets by mean curvature. III. J. Geom. Anal. 2, 121–150 (1992)
Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice-Hall, Inc., Englewood Cliffs (1964)
Koch, H., Lamm, T.: Geometric flows with rough initial data. Asian J. Math. 16(2), 209–235 (2012)
Krylov, N.V.: Nonlinear Elliptic and Parabolic Equations of the Second Order. Translated from the Russian by Buzytsky P.L. [P. L. Buzytskiĭ]. Mathematics and its Applications (Soviet Series), 7. D. Reidel Publishing Co., Dordrecht (1987)
Nguyen, T.A., Yuan, Y.: A priori estimates for Lagrangian mean curvature flows. Int. Math. Res. Not. (2010). Art. ID rnq242. doi:10.1093/imrn/rnq242
Simon, L.: Asymptotics for a class of non-linear evolution equations, with applications to geometric problems. Ann. Math. 118(3), 525–571 (1983)
Smoczyk, K.: Longtime existence of the Lagrangian mean curvature flow. Calc. Var. Partial Differ. Equ. 20, 25–46 (2004)
Smoczyk, K., Wang, M.-T.: Mean curvature flows of Lagrangian submanifolds with convex potentials. J. Differ. Geom. 62, 243–257 (2002)
Sylvester, J.: On the differentiability of O(n) invariant functions of symmetric matrices. Duke Math. J. 52(2), 475–483 (1985)
Warren, M., Yuan, Y.: Hessian estimates for the sigma-2 equation in dimension three. Comm. Pure Appl. Math. 62, 305–321 (2009)
Yuan, Y.: A Bernstein problem for special Lagrangian equations. Invent. Math. 150, 117–125 (2002)
Yuan, Y.: Global solutions to special Lagrangian equations. Proc. Am. Math. Soc. 134, 1355–1358 (2006)
The first two authors are partially supported by NSERC grants, and the third author is partially supported by an NSF grant.
Chau, A., Chen, J. & Yuan, Y. Lagrangian mean curvature flow for entire Lipschitz graphs II. Math. Ann. 357, 165–183 (2013). https://doi.org/10.1007/s00208-013-0897-2