1 Introduction

Let \((M,{\mathrm {g}}_M)\) and \((N,{\mathrm {g}}_N)\) be complete Riemannian manifolds, and consider a smooth map \(f:M\rightarrow N\). Then f is called strictly length-decreasing if there is \(\delta \in (0,1]\) such that \(\Vert {\text {d}}{f(v)}\Vert _{{\mathrm {g}}_N}\le (1-\delta )\Vert v\Vert _{{\mathrm {g}}_M}\) for all \(v\in \varGamma (TM)\). The map f is called strictly area-decreasing if there is \(\delta \in (0,1]\) such that

$$\begin{aligned} \bigl \Vert {\text {d}}{f(v)} \wedge {\text {d}}{f(w)} \bigr \Vert _{{\mathrm {g}}_N} \le (1-\delta ) \Vert v \wedge w \Vert _{{\mathrm {g}}_M} \end{aligned}$$

for all \(v,w\in \varGamma (TM)\). In this paper, we deform the map f by deforming its corresponding graph

$$\begin{aligned} \varGamma (f) :=\big \{ (x,f(x)) \in M\times N : x\in M \big \} \end{aligned}$$

via the mean curvature flow in the product space \(M\times N\). That is, we consider the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t F_t(x) = {\mathrm {H}}(x,t), \\ F_0(x) = \bigl (x,f(x)\bigr ), \end{array}\right. } \end{aligned}$$

where \({\mathrm {H}}(x,t)\) denotes the mean curvature vector of the submanifold \(F_t(M)\) in \(M\times N\) at \(F_t(x)\). A smooth solution to the mean curvature flow for which \(F_t(M)\) is a graph for \(t\in [0,T_g)\) can be described completely in terms of a smooth family of maps \(f_t:M\rightarrow N\) with \(f_0=f\), where \(0<T_g\le \infty \) is the maximal time for which the graphical solution exists. In the case of long-time existence of the graphical solution (i.e., \(T_g=\infty \)) and convergence, we obtain a smooth homotopy from f to a minimal map \(f_{\infty }:M\rightarrow N\). Recall that a map between M and N is called minimal if its graph is a minimal submanifold of the product space \(M\times N\) [15].

For a compact domain and arbitrary dimensions, several results for length- and area-decreasing maps are known (see, e.g., [9, 12–14, 16, 20–22] and references therein). For example, if \(f:M\rightarrow N\) is strictly area-decreasing, M and N are space forms with \(\dim M\ge 2\), and their sectional curvatures satisfy

$$\begin{aligned} \sec _M \ge |\sec _N|, \quad \sec _M + \sec _N > 0, \end{aligned}$$

Wang and Tsui proved long-time existence of the graphical mean curvature flow and convergence of \(f_t\) to a constant map [19]. Subsequently, the curvature assumptions on the manifolds were relaxed by Lee and Lee [9] and recently by Savas-Halilaj and Smoczyk [13].

In the non-compact setting, Ecker and Huisken considered the flow of entire graphs, that is, graphs generated by functions \(f:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\). The key quantity is essentially the Jacobian of the projection map from the graph \(\varGamma (f)\) to \({\mathbb {R}}^n\), which satisfies a favorable evolution equation. They provided conditions under which the mean curvature flow of the graph exists for all time and asymptotically approaches self-expanding solutions [6, 7]. Unfortunately, their methods cannot easily be adapted to the general higher-codimensional setting, since the analysis becomes considerably more involved due to the complexity of the normal bundle of the graph.

Nevertheless, several results in the higher-codimensional case were obtained by considering the Gauß map of the immersion (see e.g., [24, 25]). In the case of two-dimensional graphs, Chen, Li, and Tian established long-time existence and convergence results by evaluating certain angle functions on the tangent bundle [5]. Another possibility is to impose suitable smallness conditions on the differential of the defining map. In these cases, one can show long-time existence and convergence of the mean curvature flow [2, 3, 14].

Considering maps between Euclidean spaces of the same dimension, Chau, Chen, and He obtained results for strictly length-decreasing Lipschitz continuous maps \(f:{\mathbb {R}}^m\rightarrow {\mathbb {R}}^m\) with graphs \(\varGamma (f)\) being Lagrangian submanifolds of \({\mathbb {R}}^m\times {\mathbb {R}}^m\). In particular, they showed short-time existence of solutions with bounded geometry, as well as decay estimates for the mean curvature vector and all higher-order derivatives of the defining map, which in turn imply the long-time existence of the solution [2]. This result was generalized in [3] by relaxing the length-decreasing condition and recently by the author to strictly length-decreasing maps between Euclidean spaces of arbitrary dimension [10].

In the article at hand, we consider smooth maps \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) with bounded geometry, that is, maps satisfying

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^k f(x) \Vert < \infty \quad \text {for all} \quad k\ge 1. \end{aligned}$$

In this case, we are able to relax the length-decreasing condition. Namely, we show the following result.
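The area-decreasing condition constrains only the product of the singular values of \({\text {d}} f\). As a minimal numerical sketch (the Jacobian below is a made-up example, not taken from the text), one can check the condition at a single point:

```python
import numpy as np

# Hypothetical Jacobian Df of a map f: R^2 -> R^2 at one point (illustration only).
Df = np.array([[0.5, 0.2],
               [0.1, 0.9]])

# Singular values lambda_1 <= lambda_2 of df.
lam = np.sort(np.linalg.svd(Df, compute_uv=False))

# For a 2x2 Jacobian, df(v) ^ df(w) = det(Df) * (v ^ w), so the area
# distortion factor is |det Df| = lambda_1 * lambda_2.
area_factor = lam[0] * lam[1]
assert np.isclose(area_factor, abs(np.linalg.det(Df)))

# Strictly area-decreasing in the sense of Sect. 4.1: lambda_1^2 lambda_2^2 <= 1 - delta.
delta = 1 - area_factor**2
print(delta > 0)  # True: this particular map is strictly area-decreasing
```

Note that the individual singular values may exceed 1; only their product enters the condition.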

Theorem A

Suppose \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) is a smooth strictly area-decreasing map with bounded geometry, with area-decreasing constant \(\delta \in (0,1]\). Then the mean curvature flow with initial condition \(F(x):=\bigl (x,f(x)\bigr )\) has a smooth solution for all \(t>0\) such that the following statements hold.

  (i)

    Along the flow, the evolving surface stays the graph of a strictly area-decreasing map \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) for all \(t>0\).

  (ii)

    The mean curvature vector of the graph satisfies the estimate

    $$\begin{aligned} t \Vert {\mathrm {H}}\Vert ^2 \le C \end{aligned}$$

    for some constant \(C\ge 0\).

  (iii)

    All spatial derivatives of \(f_t\) of order \(k\ge 2\) satisfy the estimate

    $$\begin{aligned} t^{k-1} \sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^k f_t(x)\Vert ^2 \le C_{k,\delta } \quad \text {for all} \quad k\ge 2 \end{aligned}$$

    and for some constants \(C_{k,\delta }\ge 0\) depending only on k and \(\delta \). Moreover,

    $$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert f_t(x)\Vert ^2 \le \sup _{x\in {\mathbb {R}}^2} \Vert f(x) \Vert ^2 \end{aligned}$$

    for all \(t>0\).

If in addition f satisfies \(\Vert f(x)\Vert \rightarrow 0\) as \(\Vert x\Vert \rightarrow \infty \), then \(\Vert f_t(x)\Vert \rightarrow 0\) smoothly on compact subsets of \({\mathbb {R}}^2\) as \(t\rightarrow \infty \).

Remark 1.1

In terms of the second fundamental form of the graph, Theorem A implies the decay estimate

$$\begin{aligned} t \Vert {\mathrm {A}}\Vert ^2 \le C \end{aligned}$$

for some constant \(C\ge 0\) depending only on \(\delta \).

Remark 1.2

  (i)

    Note that any strictly length-decreasing map is also strictly area-decreasing. Accordingly, for smooth maps with bounded geometry between two-dimensional Euclidean spaces, the statement of [10, Theorem A] follows from Theorem A.

  (ii)

    In the recent paper [11], the case of area-decreasing maps between complete Riemann surfaces M and N with bounded geometry is treated, where M is compact and the sectional curvatures satisfy \(\min _{x\in M} \sec _M(x) \ge \sup _{x\in N} \sec _N(x)\).

Remark 1.3

If one considers graphs generated by functions \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}\) with bounded geometry, the same strategy as in the proof of Theorem A can be applied. In particular, the area-decreasing property does not impose an additional condition on the map. The bounded geometry condition implies that the function has at most linear growth, so that it belongs to the class of functions studied in [6].

The outline of the paper is as follows. In Sect. 2, we introduce the main quantities in the graphical case, which then will be deformed by the mean curvature flow described in Sect. 3. To obtain the statements of the following sections, we would like to apply a maximum principle. For this, we follow an idea from [2] to adapt the usual scalar maximum principle to the non-compact case. Then, in Sect. 4.1 we establish the preservation of the area-decreasing condition. In Sect. 4.2 we obtain estimates on the mean curvature vector and on all derivatives of the map defining the graph by considering functions constructed similarly to those in [18]. The main theorem is proven in Sect. 5 and some applications to self-similar solutions of the mean curvature flow are given in Sect. 6.

2 Maps Between Two-Dimensional Euclidean Spaces

2.1 Geometry of Graphs

We recall the geometric quantities in a graphical setting adapted to two-dimensional Euclidean spaces. For the setup in generic Euclidean spaces, see, e.g., [10, Sect. 2] and for the general setup, see, e.g., [14, Sect. 2].

Let \(({\mathbb {R}}^2,{\mathrm {g}}_{{\mathbb {R}}^2})\) be the two-dimensional Euclidean space equipped with its usual flat metric. On the product manifold \(({\mathbb {R}}^2\times {\mathbb {R}}^2,\langle \cdot ,\cdot \rangle :={\mathrm {g}}_{{\mathbb {R}}^2}\times {\mathrm {g}}_{{\mathbb {R}}^2})\), the projections onto the first and second factor

$$\begin{aligned} \pi _1, \pi _2: {\mathbb {R}}^2\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\end{aligned}$$

are submersions, that is, they are smooth and have maximal rank. A smooth map \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) defines an embedding \(F:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) via

$$\begin{aligned} F(x) :=\bigl (x,f(x)\bigr ),\quad x\in {\mathbb {R}}^2. \end{aligned}$$

The graph of \(f\) is defined to be the submanifold

$$\begin{aligned} \varGamma (f) :=F({\mathbb {R}}^2) = \left\{ \bigl (x,f(x)\bigr ) : x\in {\mathbb {R}}^2\right\} \subset {\mathbb {R}}^2\times {\mathbb {R}}^2. \end{aligned}$$

Since F is an embedding, it induces another Riemannian metric on \({\mathbb {R}}^2\), given by

$$\begin{aligned} {\mathrm {g}}:=F^{*}\langle \cdot ,\cdot \rangle . \end{aligned}$$

The metrics \({\mathrm {g}}_{{\mathbb {R}}^2},\langle \cdot ,\cdot \rangle \) and \({\mathrm {g}}\) are related by

$$\begin{aligned} \langle \cdot ,\cdot \rangle&= \pi _1^{*} {\mathrm {g}}_{{\mathbb {R}}^2}+ \pi _2^{*}{\mathrm {g}}_{{\mathbb {R}}^2}, \\ {\mathrm {g}}&= F^{*}\langle \cdot ,\cdot \rangle = {\mathrm {g}}_{{\mathbb {R}}^2}+ f^{*}{\mathrm {g}}_{{\mathbb {R}}^2}. \end{aligned}$$

Let us also define the symmetric 2-tensors introduced by Tsui and Wang [19],

$$\begin{aligned} {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}&:=\pi _1^{*}{\mathrm {g}}_{{\mathbb {R}}^2}- \pi _2^{*}{\mathrm {g}}_{{\mathbb {R}}^2}\,, \\ {\mathrm {s}}&:=F^{*}{\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}= {\mathrm {g}}_{{\mathbb {R}}^2}- f^{*}{\mathrm {g}}_{{\mathbb {R}}^2}. \end{aligned}$$

We remark that \({\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\) is a semi-Riemannian metric of signature (2, 2) on \({\mathbb {R}}^2\times {\mathbb {R}}^2\).

The Levi-Civita connection on \({\mathbb {R}}^2\) with respect to the induced metric \({\mathrm {g}}\) is denoted by \(\nabla \) and the corresponding curvature tensor by \({\mathrm {R}}\).

2.2 Second Fundamental Form

The second fundamental tensor of the graph \(\varGamma (f)\) is the section \({\mathrm {A}}\in \varGamma \bigl (T^{\perp }{\mathbb {R}}^2\otimes {\mathrm {Sym}}(T^*{\mathbb {R}}^2\otimes T^*{\mathbb {R}}^2)\bigr )\) defined as

$$\begin{aligned} {\mathrm {A}}(v,w) :=\bigl ( \nabla {\text {d}}{F}\bigr )(v,w) :={\mathrm {D}}_{{\text {d}}{F(v)}} {\text {d}} {F(w)} - {\text {d}} F(\nabla _v w), \end{aligned}$$

where \(v,w\in \varGamma (T{\mathbb {R}}^2)\) and where we denote the connection on \(F^{*}T({\mathbb {R}}^2\times {\mathbb {R}}^2)\otimes T^{*}{\mathbb {R}}^2\) induced by the Levi-Civita connection also by \(\nabla \). The trace of \({\mathrm {A}}\) with respect to the metric \({\mathrm {g}}\) is called the mean curvature vector field of \(\varGamma (f)\) and it will be denoted by

$$\begin{aligned} {\mathrm {H}} :={{\mathrm{{tr}}}}_{{\mathrm {g}}}({\mathrm {A}}). \end{aligned}$$

Let us denote the evaluation of the second fundamental form in the direction of a vector \(\xi \in \varGamma \bigl (F^*T({\mathbb {R}}^2\times {\mathbb {R}}^2)\bigr )\) by

$$\begin{aligned} {\mathrm {A}}_{\xi }(v,w) :=\bigl \langle {\mathrm {A}}(v,w),\xi \bigr \rangle . \end{aligned}$$

Note that \({\mathrm {H}}\) is a section in the normal bundle of the graph. If \({\mathrm {H}}\) vanishes identically, the graph is said to be minimal. A smooth map \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) is called minimal if its graph \(\varGamma (f)\) is a minimal submanifold of the product space \(({\mathbb {R}}^2\times {\mathbb {R}}^2,\langle \cdot ,\cdot \rangle )\).

On the submanifold, the Gauß equation

$$\begin{aligned} {\mathrm {R}}(u_1,v_1,u_2,v_2) = \bigl \langle {\mathrm {A}}(u_1,u_2), {\mathrm {A}}(v_1,v_2) \bigr \rangle - \bigl \langle {\mathrm {A}}(u_1,v_2), {\mathrm {A}}(v_1,u_2) \bigr \rangle \end{aligned}$$
(2.1)

and the Codazzi equation

$$\begin{aligned} (\nabla _u {\mathrm {A}})(v,w) - (\nabla _v {\mathrm {A}})(u,w) = - {\text {d}}{F}\bigl ({\mathrm {R}}(u,v)w\bigr ) \end{aligned}$$

hold, where the induced connection on the bundle \(F^*T({\mathbb {R}}^2\times {\mathbb {R}}^2)\otimes T^*{\mathbb {R}}^2\otimes T^*{\mathbb {R}}^2\) is defined as

$$\begin{aligned} (\nabla _u {\mathrm {A}})(v,w) :={\mathrm {D}}_{{\text {d}}{F(u)}}({\mathrm {A}}(v,w)) - {\mathrm {A}}(\nabla _uv,w) - {\mathrm {A}}(v,\nabla _uw). \end{aligned}$$

2.3 Singular Value Decomposition

We recall the singular value decomposition theorem for the two-dimensional case (see, e.g., [22, p. 530] for the general setup).

Fix a point \(x\in {\mathbb {R}}^2\), and let

$$\begin{aligned} \lambda _1^2(x) \le \lambda _2^2(x) \end{aligned}$$

be the eigenvalues of \(f^{*}{\mathrm {g}}_{{\mathbb {R}}^2}\) with respect to \({\mathrm {g}}_{{\mathbb {R}}^2}\). The values \(0\le \lambda _1(x)\le \lambda _2(x)\) are called the singular values of the differential \({\text {d}}{f}\) of f and give rise to continuous functions on \({\mathbb {R}}^2\). At the point x consider an orthonormal basis \(\{\alpha _{1},\alpha _{2}\}\) with respect to \({\mathrm {g}}_{{\mathbb {R}}^2}\) which diagonalizes \(f^{*}{\mathrm {g}}_{{\mathbb {R}}^2}\). Moreover, at f(x) consider a basis \(\{\beta _{1},\beta _{2}\}\) that is orthonormal with respect to \({\mathrm {g}}_{{\mathbb {R}}^2}\), such that

$$\begin{aligned} {\text {d}}{f}(\alpha _{1}) = \lambda _1(x) \beta _{1}, \quad {\text {d}} f(\alpha _{2}) = \lambda _2(x) \beta _{2}. \end{aligned}$$

This procedure is called the singular value decomposition of the differential \({\text {d}} f\).
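Numerically, the bases \(\{\alpha _i\}\), \(\{\beta _i\}\) and the singular values can be read off from a standard matrix SVD. The following sketch (with an arbitrary sample Jacobian, for illustration only) verifies the defining relations \({\text {d}} f(\alpha _i) = \lambda _i \beta _i\):

```python
import numpy as np

Df = np.array([[1.0, 0.3],   # sample Jacobian of f at a point x (illustration only)
               [0.2, 0.7]])

# Df = U diag(sigma) V^T, with sigma sorted in decreasing order by numpy.
U, sigma, Vt = np.linalg.svd(Df)

# Reorder so that lambda_1 <= lambda_2, matching the convention in the text.
order = np.argsort(sigma)
lam = sigma[order]
alpha = Vt[order].T   # columns alpha_1, alpha_2: orthonormal basis diagonalizing f*g
beta = U[:, order]    # columns beta_1, beta_2: orthonormal basis at f(x)

# Defining relations of the singular value decomposition: df(alpha_i) = lambda_i beta_i.
for i in range(2):
    assert np.allclose(Df @ alpha[:, i], lam[i] * beta[:, i])

# The eigenvalues of f*g_{R^2} with respect to g_{R^2} are the squared singular values.
assert np.allclose(np.linalg.eigvalsh(Df.T @ Df), lam**2)
```

Here \(f^{*}{\mathrm {g}}_{{\mathbb {R}}^2}\) is represented by the matrix \({\mathrm {D}}f^{T}{\mathrm {D}}f\), so its eigenvalues are exactly \(\lambda _1^2 \le \lambda _2^2\).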

Now let us construct a special basis for the tangent and the normal space of the graph in terms of the singular values. The vectors

$$\begin{aligned} \widetilde{e}_{1} :=\frac{1}{\sqrt{1+\lambda _{1}^{2}(x)}}\bigl ( \alpha _{1} \oplus \lambda _{1}(x)\beta _{1} \bigr ) \quad \text {and} \quad \widetilde{e}_{2} :=\frac{1}{\sqrt{1+\lambda _{2}^{2}(x)}}\bigl ( \alpha _{2} \oplus \lambda _{2}(x)\beta _{2} \bigr ) \end{aligned}$$

form an orthonormal basis with respect to the metric \(\langle \cdot ,\cdot \rangle \) of the tangent space \({\text {d}} F(T_{x}{\mathbb {R}}^2)\) of the graph \(\varGamma (f)\) at x. It follows that with respect to the induced metric \({\mathrm {g}}\), the vectors

$$\begin{aligned} e_1 :=\frac{1}{\sqrt{1+\lambda _1^2(x)}} \alpha _1 \quad \text {and} \quad e_2 :=\frac{1}{\sqrt{1+\lambda _2^2(x)}} \alpha _2 \end{aligned}$$

form an orthonormal basis of \(T_x{\mathbb {R}}^2\). Moreover, the vectors

$$\begin{aligned} \xi _{1} :=\frac{1}{\sqrt{1+\lambda _{1}^{2}(x)}}\bigl (-\lambda _{1}(x)\alpha _{1} \oplus \beta _{1}\bigr ) \quad \text {and} \quad \xi _{2} :=\frac{1}{\sqrt{1+\lambda _{2}^{2}(x)}}\bigl (-\lambda _{2}(x)\alpha _{2} \oplus \beta _{2}\bigr ) \end{aligned}$$

form an orthonormal basis with respect to \(\langle \cdot ,\cdot \rangle \) of the normal space \(T^{\perp }_{x}{\mathbb {R}}^2\) of the graph \(\varGamma (f)\) at the point x. From the formulae above, we deduce that

$$\begin{aligned} {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\bigl (\widetilde{e}_{i},\widetilde{e}_{j}\bigr ) = {\mathrm {s}}(e_i,e_j) = \frac{1-\lambda _{i}^{2}(x)}{1+\lambda _{i}^{2}(x)}\delta _{ij}, \quad 1\le i,j\le 2. \end{aligned}$$

Therefore, the eigenvalues of the 2-tensor \({\mathrm {s}}\) with respect to \({\mathrm {g}}\) are given by

$$\begin{aligned} \frac{1-\lambda _{1}^{2}(x)}{1+\lambda _{1}^{2}(x)} \ge \frac{1-\lambda _{2}^{2}(x)}{1+\lambda _{2}^{2}(x)}. \end{aligned}$$
(2.2)

Moreover,

$$\begin{aligned} {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}(\xi _{i},\xi _{j}) = - \frac{1-\lambda _{i}^{2}(x)}{1+\lambda _{i}^{2}(x)}\delta _{ij},\quad 1\le i,j\le 2\,, \end{aligned}$$
(2.3)

and

$$\begin{aligned} {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}(\widetilde{e}_{i},\xi _{j}) = - \frac{2\lambda _{i}(x)}{1+\lambda _{i}^{2}(x)} \delta _{ij},\quad 1\le i,j\le 2. \end{aligned}$$

Further, we will use the notation

$$\begin{aligned} {\mathrm {A}}_{ij}^{\alpha } :=\langle {\mathrm {A}}(e_i,e_j), \xi _{\alpha } \rangle \end{aligned}$$

to denote the components of the second fundamental form with respect to this basis.
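The orthonormality of the adapted frames and the stated values of \({\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\) on them can be checked numerically. In the sketch below (an illustration, not part of any proof), the singular bases are taken to be the standard bases and the singular values are chosen arbitrarily:

```python
import numpy as np

lam = np.array([0.4, 0.8])   # sample singular values (illustration only)
alpha = np.eye(2)            # take alpha_i and beta_i to be the standard bases
beta = np.eye(2)

# Bilinear forms on R^2 x R^2: the product metric <.,.> and the tensor s.
G = np.diag([1.0, 1.0, 1.0, 1.0])
S = np.diag([1.0, 1.0, -1.0, -1.0])

def direct_sum(v, w):
    return np.concatenate([v, w])

e_tilde = [direct_sum(alpha[:, i], lam[i] * beta[:, i]) / np.sqrt(1 + lam[i]**2)
           for i in range(2)]
xi = [direct_sum(-lam[i] * alpha[:, i], beta[:, i]) / np.sqrt(1 + lam[i]**2)
      for i in range(2)]

for i in range(2):
    for j in range(2):
        d = float(i == j)
        # {e_tilde_1, e_tilde_2, xi_1, xi_2} is orthonormal for the product metric.
        assert np.isclose(e_tilde[i] @ G @ e_tilde[j], d)
        assert np.isclose(xi[i] @ G @ xi[j], d)
        assert np.isclose(e_tilde[i] @ G @ xi[j], 0.0)
        # Values of s on the frame, as stated in the text.
        assert np.isclose(e_tilde[i] @ S @ e_tilde[j],
                          (1 - lam[i]**2) / (1 + lam[i]**2) * d)
        assert np.isclose(xi[i] @ S @ xi[j],
                          -(1 - lam[i]**2) / (1 + lam[i]**2) * d)
        assert np.isclose(e_tilde[i] @ S @ xi[j],
                          -2 * lam[i] / (1 + lam[i]**2) * d)
```

In particular, the \(\xi _{i}\) are indeed orthogonal to the tangent vectors \(\widetilde{e}_{j}\), as claimed.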

3 Mean Curvature Flow in Euclidean Space

Let \(f_0:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) be a smooth map and \(T>0\). We say that a family of maps \(F:{\mathbb {R}}^2\times [0,T) \rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) evolves under the mean curvature flow, if for all \(x\in {\mathbb {R}}^2\)

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t F(x,t) = {\mathrm {H}}(x,t), \\ F(x,0) = \bigl (x,f_0(x)\bigr ). \end{array}\right. } \end{aligned}$$
(3.1)

This system can also be described as follows. As in [2, Sect. 5], let us consider the non-parametric mean curvature flow equation for \(f:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}^2\), given by the quasilinear system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t f(x,t) = \sum _{i,j=1}^2 \widetilde{{\mathrm {g}}}^{ij} \partial ^2_{ij} f(x,t), \\ f(x,0) = f_0(x), \end{array}\right. } \end{aligned}$$
(3.2)

where \(\widetilde{{\mathrm {g}}}^{ij}\) are the components of the inverse of \(\widetilde{{\mathrm {g}}}:={\mathrm {g}}_{{\mathbb {R}}^2}+ f_t^*{\mathrm {g}}_{{\mathbb {R}}^2}\) and we have set \(f_t(x):=f(x,t)\). If (3.2) has a smooth solution \(f:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}^2\), then the mean curvature flow (3.1) has a smooth solution \(F:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) given by the family of graphs

$$\begin{aligned} \varGamma \bigl (f(\cdot ,t)\bigr ) = \bigl \{ \bigl (x,f(x,t)\bigr ) : x\in {\mathbb {R}}^2\bigr \}, \end{aligned}$$

up to tangential diffeomorphisms (see, e.g., [1, Chapter 3.1]).
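The quasilinear system (3.2) lends itself to a straightforward finite-difference discretization. The following toy sketch (not part of the paper; grid, spacing, and test map are made up) computes the right-hand side \(\sum _{i,j} \widetilde{{\mathrm {g}}}^{ij} \partial ^2_{ij} f\) on a grid and checks that affine maps, whose graphs are planes, are stationary:

```python
import numpy as np

def mcf_rhs(f, h):
    """Right-hand side of (3.2), sum_ij g^{ij} d^2_ij f, at interior grid points.

    f has shape (n, n, 2): values of f: R^2 -> R^2 on a grid with spacing h.
    """
    # First derivatives (central differences) at interior points.
    fx = (f[2:, 1:-1] - f[:-2, 1:-1]) / (2 * h)
    fy = (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * h)
    # Induced metric g~ = id + Df^T Df, stored entrywise.
    g11 = 1 + np.sum(fx * fx, axis=-1)
    g12 = np.sum(fx * fy, axis=-1)
    g22 = 1 + np.sum(fy * fy, axis=-1)
    det = g11 * g22 - g12**2
    # Inverse metric components g~^{ij}.
    i11, i12, i22 = g22 / det, -g12 / det, g11 / det
    # Second derivatives at interior points.
    fxx = (f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1]) / h**2
    fyy = (f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2]) / h**2
    fxy = (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2]) / (4 * h**2)
    return i11[..., None] * fxx + 2 * i12[..., None] * fxy + i22[..., None] * fyy

# Affine maps have vanishing second derivatives, so their graphs do not move.
h = 0.1
x = np.arange(8) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f_affine = np.stack([0.3 * X + 0.1 * Y, -0.2 * X + 0.5 * Y], axis=-1)
assert np.allclose(mcf_rhs(f_affine, h), 0.0)
```

An explicit Euler step would then update the interior values by `f += dt * mcf_rhs(f, h)`; this is only a sketch and ignores boundary conditions and stability constraints on the time step.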

In the sequel, if there is no confusion, we will also use the notation \(F_t(x):=F(x,t)\) as well as \(f_t(x):=f(x,t)\).

For (3.2), we have the following short-time existence result.

Theorem 3.1

([2, Proposition 5.1 for m = n = 2]) Suppose \(f_0:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) is a smooth function with bounded geometry. Then (3.2) has a short-time smooth solution f on \({\mathbb {R}}^2\times [0,T)\) for some \(T>0\) with initial condition \(f_0\), such that \(\sup _{x\in {\mathbb {R}}^2}\Vert {\mathrm {D}}^lf_t(x)\Vert < \infty \) for every \(l\ge 1\) and \(t\in [0,T)\).

This motivates the consideration of the following type of solutions to (3.1).

Definition 3.2

Let \(F_t(x)\) be a smooth solution to the system (3.1) on \({\mathbb {R}}^2\times [0,T)\) for some \(0<T\le \infty \), such that for each \(t\in [0,T)\), the submanifold \(F_t({\mathbb {R}}^2)\subset {\mathbb {R}}^2\times {\mathbb {R}}^2\) satisfies

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert \nabla ^k {\mathrm {A}}(x,t) \Vert&< \infty \quad \text {for all} \quad k\ge 0\,, \end{aligned}$$
(3.3)
$$\begin{aligned} C_1(t)\, {\mathrm {g}}_{{\mathbb {R}}^2}&\le {\mathrm {g}}\le C_2(t)\, {\mathrm {g}}_{{\mathbb {R}}^2}, \end{aligned}$$
(3.4)

where \(C_1(t)\) and \(C_2(t)\) for each \(t\in [0,T)\) are finite, positive constants depending only on t. Then we will say that the family of embeddings \(\{F_t\}_{t\in [0,T)}\) has bounded geometry.

Definition 3.3

Let \(f_t(x)\) be a smooth solution to the system (3.2) on \({\mathbb {R}}^2\times [0,T)\) for some \(0<T\le \infty \), such that each \(f_t\) for \(t\in [0,T)\) satisfies the estimate

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^k f_t(x) \Vert < \infty \quad \text {for all} \quad k\ge 1. \end{aligned}$$

Then we will say that \(f_t(x)\) has bounded geometry for every \(t\in [0,T)\).

3.1 Graphs

We recall some important notions in the graphic case, where we follow the presentation in [13, Sect. 3.1].

Let \(f_0:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) denote a smooth map with bounded geometry. Then Theorem 3.1 ensures that the system (3.2) has a short-time solution with initial data \(f_0(x)\) on a time interval [0, T) for some positive maximal time \(T>0\). Further, there is a diffeomorphism \(\phi _t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\), such that

$$\begin{aligned} F_t\circ \phi _t(x)=(x,f_t(x)), \end{aligned}$$
(3.5)

where \(F_t(x)\) is a solution of (3.1).

To obtain the converse of this statement, let \(\varOmega _{{\mathbb {R}}^2}\) be the volume form on \({\mathbb {R}}^2\) and extend it to a parallel 2-form on \({\mathbb {R}}^2\times {\mathbb {R}}^2\) by pulling it back via the natural projection onto the first factor \(\pi _1:{\mathbb {R}}^2\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\), that is, consider the 2-form \(\pi _1^*\varOmega _{{\mathbb {R}}^2}\). Define the time-dependent smooth function \(u:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}\) by setting

$$\begin{aligned} u :=\star \varOmega _t, \end{aligned}$$

where \(\star \) is the Hodge star operator with respect to the induced metric \({\mathrm {g}}\) and

$$\begin{aligned} \varOmega _t :=F_t^*\bigl ( \pi _1^*\varOmega _{{\mathbb {R}}^2} \bigr ) = (\pi _1\circ F_t)^* \varOmega _{{\mathbb {R}}^2}\,. \end{aligned}$$

The function u is the Jacobian of the projection map from \(F_t({\mathbb {R}}^2)\) to \({\mathbb {R}}^2\). From the implicit function theorem it follows that \(u>0\) if and only if there is a diffeomorphism \(\phi _t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) and a map \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\), such that (3.5) holds, i.e., u is positive precisely if the solution of the mean curvature flow remains a graph. By Theorem 3.1, the solution stays a graph at least on a short time interval [0, T).

3.2 Parabolic Scaling

For any \(\tau >0\) and \((x_0,t_0)\in {\mathbb {R}}^2\times [0,T)\), consider the change of variables

$$\begin{aligned} y :=\tau (x-x_0), \quad r :=\tau ^2(t-t_0), \quad \widetilde{f}_{\tau }(y, r) :=\tau \bigl ( f(x,t) - f(x_0,t_0) \bigr ), \end{aligned}$$

which we call the parabolic scaling by \(\tau \) at \((x_0,t_0)\). Let us denote the derivative with respect to y by \(\widetilde{{\mathrm {D}}}\), and let \(\widetilde{{\mathrm {g}}}_{\tau }\) and \({\mathrm {A}}_{\tau }\) denote the scaled metric and the scaled second fundamental form, respectively. We calculate

$$\begin{aligned} (\widetilde{{\mathrm {D}}}^k \widetilde{f}_{\tau })(y,r) = \tau ^{1-k}({\mathrm {D}}^kf)(x,t) \,, \end{aligned}$$

which implies

$$\begin{aligned} \widetilde{{\mathrm {g}}}_{\tau \,|(y,r)} = \widetilde{{\mathrm {g}}}_{|(x,t)} \quad \text {and} \quad {\mathrm {A}}_{\tau \,|(y,r)} = \frac{1}{\tau } {\mathrm {A}}_{|(x,t)}, \end{aligned}$$

so that \(\widetilde{f}_{\tau }(y, r)\) satisfies Eq. (3.2) in the sense that

$$\begin{aligned} \frac{\partial \widetilde{f}_{\tau }}{\partial r}(y,r) = \sum _{i,j=1}^2 \widetilde{{\mathrm {g}}}_{\tau }^{ij} \frac{\partial ^2 \widetilde{f}_{\tau }}{\partial y^i \partial y^j}(y,r). \end{aligned}$$
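The scaling relation \((\widetilde{{\mathrm {D}}}^k \widetilde{f}_{\tau })(y,r) = \tau ^{1-k}({\mathrm {D}}^kf)(x,t)\) can be verified directly on a test map. The sketch below (a made-up quadratic map with \(x_0=0\) and the time variable frozen, for illustration only) checks the cases \(k=1,2\) by finite differences:

```python
import numpy as np

# Toy map f(x) = (x1^2, x1*x2) and its parabolic scaling at (x0, t0) = (0, 0):
# f_tau(y) = tau * f(y / tau).  We check D~^k f_tau (y) = tau^(1-k) D^k f (x).
def f(x):
    return np.array([x[0]**2, x[0] * x[1]])

def f_scaled(y, tau):
    return tau * f(y / tau)

tau, h = 3.0, 1e-5
x = np.array([0.7, -0.4])
y = tau * x   # scaled coordinate corresponding to x

def jacobian(g, p, *args):
    cols = []
    for i in range(2):
        e = np.zeros(2); e[i] = h
        cols.append((g(p + e, *args) - g(p - e, *args)) / (2 * h))
    return np.stack(cols, axis=-1)

# k = 1: the differential is invariant under the parabolic scaling.
assert np.allclose(jacobian(f_scaled, y, tau), jacobian(f, x), atol=1e-6)

# k = 2: second derivatives scale by tau^(1-2) = 1/tau.
d2f_scaled = (f_scaled(y + np.array([h, 0]), tau) - 2 * f_scaled(y, tau)
              + f_scaled(y - np.array([h, 0]), tau)) / h**2
d2f = (f(x + np.array([h, 0])) - 2 * f(x) + f(x - np.array([h, 0]))) / h**2
assert np.allclose(d2f_scaled, d2f / tau, atol=1e-4)
```

The \(k=1\) case reflects that \(\widetilde{{\mathrm {g}}}_{\tau }\) is scaling invariant, while \(k=2\) mirrors the relation \({\mathrm {A}}_{\tau } = \tau ^{-1}{\mathrm {A}}\).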

3.3 Evolution Equations

Let us recall the evolution equation of the tensor \({\mathrm {s}}\) in the two-dimensional setting (which is basically calculated in [19, Eqs. (3.5) and (3.7)]), as well as the evolution equation for its trace.

Lemma 3.4

Under the mean curvature flow, the evolution of the tensor \({\mathrm {s}}\) for \(t\in [0,T)\) is given by the formula

$$\begin{aligned} \left( \nabla _{\partial _t} {\mathrm {s}}- \Delta {\mathrm {s}}\right) (v,w)&= - {\mathrm {s}}({{\mathrm{Ric}}}v, w ) - {\mathrm {s}}( v, {{\mathrm{Ric}}}w ) \\&\qquad - 2 \sum _{k=1}^2 {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\bigl ( {\mathrm {A}}(e_k,v), {\mathrm {A}}(e_k,w) \bigr ), \end{aligned}$$

where \(\{e_1,e_2\}\) is any orthonormal frame with respect to \({\mathrm {g}}\) and where the Ricci operator is given by

$$\begin{aligned} {{\mathrm{Ric}}}v :=- \sum _{k=1}^2 {\mathrm {R}}(e_k,v) e_k. \end{aligned}$$

Corollary 3.5

Under the mean curvature flow, the evolution equation of the trace of the tensor \({\mathrm {s}}\) is given by

$$\begin{aligned} \left( \partial _t- \Delta \right) {{\mathrm{{tr}}}}({\mathrm {s}}) = - 2 \sum _{k,l=1}^2 \left( {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}- \frac{1-\lambda _k^2}{1+\lambda _k^2} {\mathrm {g}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\right) \bigl ( {\mathrm {A}}(e_k,e_l), {\mathrm {A}}(e_k,e_l) \bigr ), \end{aligned}$$

where \(\{e_1,e_2\}\) denotes the orthonormal frame field with respect to \({\mathrm {g}}\) constructed in Sect. 2.3.

Proof

From the Gauß equation (2.1) we obtain

$$\begin{aligned} {\mathrm {R}}(e_1,e_2,e_1,e_2) = \bigl \langle {\mathrm {A}}(e_1,e_1), {\mathrm {A}}(e_2,e_2) \bigr \rangle - \bigl \Vert {\mathrm {A}}(e_1,e_2) \bigr \Vert ^2, \end{aligned}$$

so that in two dimensions the Ricci operator is given by \({{\mathrm{Ric}}}v = K v\), where K denotes the Gauß curvature of the graph. Further, since the frame constructed in Sect. 2.3 diagonalizes \({\mathrm {s}}\), that is,

$$\begin{aligned} {\mathrm {s}}(e_k,e_l) = \frac{1-\lambda _k^2}{1+\lambda _k^2} \delta _{kl}, \quad 1\le k,l\le 2, \end{aligned}$$

the claim follows from Lemma 3.4 by taking the trace with respect to \({\mathrm {g}}\). \(\square \)

In the two-dimensional setting at hand, we can rewrite the evolution equation for the trace.

Lemma 3.6

Under the mean curvature flow, the trace of the tensor \({\mathrm {s}}\) satisfies

$$\begin{aligned} \left( \partial _t- \Delta \right) {{\mathrm{{tr}}}}({\mathrm {s}})&= 2 \Vert {\mathrm {A}}\Vert ^2 {{\mathrm{{tr}}}}({\mathrm {s}}) - \frac{1}{2} \frac{\Vert \nabla {{\mathrm{{tr}}}}({\mathrm {s}})\Vert ^2}{{{\mathrm{{tr}}}}({\mathrm {s}})} \\&\quad + \frac{2}{{{\mathrm{{tr}}}}({\mathrm {s}})} \sum _{k=1}^2 \left( \frac{2\lambda _2}{1+\lambda _2^2} {\mathrm {A}}_{1k}^1 + \frac{2\lambda _1}{1+\lambda _1^2} {\mathrm {A}}_{2k}^2 \right) ^2. \end{aligned}$$

Proof

This is [18, Eqs. (3.17) and (3.18)]. \(\square \)

4 Evolution of Submanifold Geometry

4.1 Preserved Quantities

Consider a smooth map \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\). The property of f being strictly area-decreasing can be expressed in terms of the singular values \(\lambda _1,\lambda _2\) of the differential \({\text {d}} f\) as

$$\begin{aligned} \lambda _1^2 \lambda _2^2 \le 1-\delta \end{aligned}$$

for some \(\delta \in (0,1]\). For maps with bounded geometry and using Eq. (2.2), this can also be rephrased in terms of the tensor \({\mathrm {s}}\) as follows. If f is strictly area-decreasing with bounded geometry, there is \(\varepsilon >0\), such that the inequality

$$\begin{aligned} {{\mathrm{{tr}}}}({\mathrm {s}}) = \frac{2(1-\lambda _1^2\lambda _2^2)}{(1+\lambda _1^2)(1+\lambda _2^2)} \ge \varepsilon \end{aligned}$$

holds. We will now modify \({{\mathrm{{tr}}}}({\mathrm {s}})-\varepsilon \) using the function

$$\begin{aligned} \phi _R(x) :=1 + \frac{\Vert x\Vert ^2_{{\mathbb {R}}^2}}{R^2}, \end{aligned}$$
(4.1)

where \(\Vert \cdot \Vert _{{\mathbb {R}}^2}\) is the Euclidean norm on \({\mathbb {R}}^2\) and \(R>0\) is a constant which will be chosen later.
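The expression for \({{\mathrm{{tr}}}}({\mathrm {s}})\) in terms of the singular values follows from summing the two eigenvalues in (2.2); this is elementary algebra, which the following sketch double-checks for randomly chosen (hypothetical) singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    lam1sq, lam2sq = rng.uniform(0.0, 4.0, size=2)  # random squared singular values
    # Sum of the eigenvalues of s with respect to g, from (2.2) ...
    trace_s = (1 - lam1sq) / (1 + lam1sq) + (1 - lam2sq) / (1 + lam2sq)
    # ... equals the closed form used in Sect. 4.1.
    closed_form = 2 * (1 - lam1sq * lam2sq) / ((1 + lam1sq) * (1 + lam2sq))
    assert np.isclose(trace_s, closed_form)
```

In particular, \({{\mathrm{{tr}}}}({\mathrm {s}})\) is positive exactly when \(\lambda _1^2\lambda _2^2 < 1\), i.e., when the map is area-decreasing.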

Lemma 4.1

Let \(F(x,t)\) be a smooth solution to (3.1) with bounded geometry and assume there is \(\varepsilon >0\), such that \({{\mathrm{{tr}}}}({\mathrm {s}}) \ge \varepsilon \) for any \(t\in [0,T)\). Fix any \(T'\in [0,T)\) and \((x_0,t_0)\in {\mathbb {R}}^2\times [0,T']\). Then the following estimates hold:

$$\begin{aligned} - c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} {{\mathrm{{tr}}}}({\mathrm {s}})&\le \langle \nabla \phi _R, \nabla {{\mathrm{{tr}}}}({\mathrm {s}}) \rangle \le c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} {{\mathrm{{tr}}}}({\mathrm {s}}), \\ | \Delta \phi _R |&\le c(T') \left( \frac{1}{R^2} + \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} \right) , \end{aligned}$$

where \(c(T')\ge 0\) is a constant depending only on \(T'\).

Proof

Note that

$$\begin{aligned} \nabla _u {{\mathrm{{tr}}}}({\mathrm {s}}) = \sum _{k=1}^2 (\nabla _u{\mathrm {s}})(e_k,e_k) = 2 \sum _{k=1}^2 {\mathrm {s}}_{{\mathbb {R}}^2\times {\mathbb {R}}^2}\bigl ( {\mathrm {A}}(u,e_k), {\text {d}} F(e_k) \bigr ). \end{aligned}$$

The bounded geometry assumptions (3.3) and (3.4) imply that \({\mathrm {s}}\), \(\nabla {\mathrm {s}}\), and therefore \(\nabla {{\mathrm{{tr}}}}({\mathrm {s}})\) are uniformly bounded on \({\mathbb {R}}^2\times [0,T']\) by a constant \(c(T')\) depending only on \(T'\). Thus, also using \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \), at \((x_0,t_0)\) we have

$$\begin{aligned} -c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} {{\mathrm{{tr}}}}({\mathrm {s}}) \le \langle \nabla \phi _R, \nabla {{\mathrm{{tr}}}}({\mathrm {s}}) \rangle \le c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} {{\mathrm{{tr}}}}({\mathrm {s}}). \end{aligned}$$

The statement for \(|\Delta \phi _R|\) is given in [2, Eq. 3.4]. \(\square \)

Let us define

$$\begin{aligned} \uppsi (x,t) :={\mathrm {e}}^{\sigma t} \phi _R(x) {{\mathrm{{tr}}}}({\mathrm {s}})_{|(x,t)} - \varepsilon . \end{aligned}$$

Lemma 4.2

Let \(F(x,t)\) be a smooth solution to (3.1) with bounded geometry. Assume there is \(\varepsilon >0\) with \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) at \(t=0\), and \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \frac{\varepsilon }{2}\) for all \(t\in [0,T)\). Then \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) for all \(t\in [0,T)\).

Proof

The proof closely follows the strategy in [2, 10]. We will show that for any fixed \(T'\in [0,T)\) and \(\sigma >0\), there is \(R_0>0\) depending only on \(\sigma \) and \(T'\), such that \(\uppsi >0\) on \({\mathbb {R}}^2\times [0,T']\) for all \(R\ge R_0\).

Suppose, to the contrary, that \(\uppsi \) is not everywhere positive on \({\mathbb {R}}^2\times [0,T']\) for some \(R\ge R_0\). Since \(\uppsi >0\) on \({\mathbb {R}}^2\times \{0\}\), \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \frac{\varepsilon }{2}\) on \({\mathbb {R}}^2\times [0,T)\), and \(\phi _R(x)\rightarrow \infty \) as \(\Vert x\Vert \rightarrow \infty \), it follows that \(\uppsi >0\) outside some compact set \(K\subset {\mathbb {R}}^2\) for all \(t\in [0,T']\). We conclude that there is \((x_0,t_0)\in K\times [0,T']\) such that \(\uppsi (x_0,t_0)=0\) and that \(t_0\) is the first such time. By the second-derivative criterion, at the point \((x_0,t_0)\) we have

$$\begin{aligned} \partial _t \uppsi \le 0, \quad \nabla \uppsi = 0 \quad \text {and} \quad \Delta \uppsi \ge 0. \end{aligned}$$
(4.2)

On the other hand, using Lemma 3.6, we estimate the terms in the evolution equation for \(\uppsi \), as given by

$$\begin{aligned} \displaystyle (\partial _t- \Delta ) \uppsi&= {\mathrm {e}}^{\sigma t} \phi _R \left\{ 2 \Vert {\mathrm {A}}\Vert ^2 {{\mathrm{{tr}}}}({\mathrm {s}}) - \frac{1}{2} \frac{\Vert \nabla {{\mathrm{{tr}}}}({\mathrm {s}})\Vert ^2}{{{\mathrm{{tr}}}}({\mathrm {s}})}\right. \\&\left. \qquad \qquad + \frac{2}{{{\mathrm{{tr}}}}({\mathrm {s}})} \sum _{k=1}^2 \left( \frac{2\lambda _2}{1+\lambda _2^2} {\mathrm {A}}_{1k}^1 + \frac{2\lambda _1}{1+\lambda _1^2} {\mathrm {A}}_{2k}^2 \right) ^2 \right\} \\&\qquad \qquad - {\mathrm {e}}^{\sigma t} \Bigl \{ (\Delta \phi _R) {{\mathrm{{tr}}}}({\mathrm {s}}) + 2 \langle \nabla \phi _R, \nabla {{\mathrm{{tr}}}}({\mathrm {s}}) \rangle - \sigma \phi _R{{\mathrm{{tr}}}}({\mathrm {s}}) \Bigr \} \\&=:{\mathcal {A}}+ {\mathcal {B}}, \end{aligned}$$

where we collect all terms coming from the evolution equation of \({{\mathrm{{tr}}}}({\mathrm {s}})\) (i.e., the first two lines) in \({\mathcal {A}}\) and the remaining terms (i.e., the third line) in \({\mathcal {B}}\). To estimate the terms in \({\mathcal {A}}\) at \((x_0,t_0)\), note that the vanishing of the first derivative in (4.2) implies the equality

$$\begin{aligned} (\nabla \phi _R){{\mathrm{{tr}}}}({\mathrm {s}}) = - \phi _R \nabla {{\mathrm{{tr}}}}({\mathrm {s}}). \end{aligned}$$

Consequently, since \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \frac{\varepsilon }{2}\) by assumption, at \((x_0,t_0)\) we derive the estimate

$$\begin{aligned} {\mathcal {A}}&={\mathrm {e}}^{\sigma t_0} \phi _R \Biggl \{ \underbrace{2 \Vert {\mathrm {A}}\Vert ^2 {{\mathrm{{tr}}}}({\mathrm {s}})}_{\ge \Vert {\mathrm {A}}\Vert ^2\varepsilon \ge 0} - \frac{1}{2} \frac{\Vert \nabla {{\mathrm{{tr}}}}({\mathrm {s}})\Vert ^2}{{{\mathrm{{tr}}}}({\mathrm {s}})} \\&\qquad \qquad \qquad +\underbrace{\frac{2}{{{\mathrm{{tr}}}}({\mathrm {s}})} \sum _{k=1}^2 \left( \frac{2\lambda _2}{1+\lambda _2^2} {\mathrm {A}}_{1k}^1 + \frac{2\lambda _1}{1+\lambda _1^2} {\mathrm {A}}_{2k}^2 \right) ^2}_{\ge 0} \Biggr \} \\&\ge - \frac{{\mathrm {e}}^{\sigma t_0} \phi _R}{2} \frac{\Vert \nabla {{\mathrm{{tr}}}}({\mathrm {s}})\Vert ^2}{{{\mathrm{{tr}}}}({\mathrm {s}})} = \frac{{\mathrm {e}}^{\sigma t_0}}{2} \langle \nabla \phi _R, \nabla {{\mathrm{{tr}}}}({\mathrm {s}}) \rangle \\ {\mathop {\ge }\limits ^{\text {Lem.}~4.1}}&- \frac{{\mathrm {e}}^{\sigma t_0}}{2} \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} c(T') {{\mathrm{{tr}}}}({\mathrm {s}}). \end{aligned}$$

Lemma 4.1 and further evaluation yield

$$\begin{aligned} \displaystyle {\mathcal {A}}+ {\mathcal {B}}&\ge - \frac{{\mathrm {e}}^{\sigma t_0}}{2} \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} c(T') {{\mathrm{{tr}}}}({\mathrm {s}}) \\&\qquad - {\mathrm {e}}^{\sigma t_0}\left\{ c(T') \left( \frac{1}{R^2} + \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} \right) + 2 c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} \right. \\&\left. \qquad \qquad \qquad - \sigma - \sigma \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} \right\} {{\mathrm{{tr}}}}({\mathrm {s}}) \\&= {\mathrm {e}}^{\sigma t_0}\left\{ \sigma + \sigma \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} - \frac{7}{2} c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} - \frac{c(T')}{R^2} \right\} {{\mathrm{{tr}}}}({\mathrm {s}})\,. \end{aligned}$$

Note that by choosing \(R_0>0\) (depending on \(\sigma \) and \(T'\)) large enough, the term

$$\begin{aligned} \frac{\sigma }{2} + \sigma \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} - \frac{7}{2} c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} - \frac{c(T')}{R^2} \end{aligned}$$

is strictly positive for any \(R\ge R_0\) and any \(\Vert x_0\Vert _{{\mathbb {R}}^2}\). Continuing with the above calculation, we obtain

$$\begin{aligned} (\partial _t- \Delta ) \uppsi \ge \frac{\sigma }{2} {\mathrm {e}}^{\sigma t_0} {{\mathrm{{tr}}}}({\mathrm {s}}) > 0. \end{aligned}$$

But this is a contradiction to (4.2), which shows the claim.

The statement of the Lemma follows by first letting \(R\rightarrow \infty \), then \(\sigma \rightarrow 0\) and finally \(T'\rightarrow T\). \(\square \)
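The choice of \(R_0\) can be sanity-checked numerically. The sketch below (Python; the values of \(\sigma \) and \(c(T')\) are illustrative and not taken from the text) treats the bracketed expression as a quadratic in \(s=\Vert x_0\Vert _{{\mathbb {R}}^2}\), computes a sufficient \(R_0\) from its minimizer, and verifies positivity on a grid:

```python
import math

def term(s, R, sigma, c):
    """The bracketed expression above, as a function of s = ||x_0||."""
    return sigma / 2 + sigma * s**2 / R**2 - 3.5 * c * s / R**2 - c / R**2

def R0(sigma, c):
    """A sufficient choice of R_0: the quadratic in s is minimal at
    s* = 7c/(4*sigma), so positivity for every s >= 0 reduces to
    sigma/2 > c/R^2 + 49 c^2 / (16 sigma R^2)."""
    return 1.01 * math.sqrt((2 / sigma) * (c + 49 * c**2 / (16 * sigma)))

sigma, c = 0.1, 5.0          # illustrative values, not from the text
R = R0(sigma, c)
grid = [0.5 * i for i in range(2000)]
assert all(term(s, R, sigma, c) > 0 for s in grid)
assert all(term(s, 2 * R, sigma, c) > 0 for s in grid)  # larger R only helps
```

Since every negative term carries a factor \(1/R^2\) while \(\sigma /2\) does not, enlarging \(R\) beyond \(R_0\) preserves positivity, which is what the proof uses.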

Lemma 4.3

Let \(F(x,t)\) be a smooth solution to (3.1) for \(t\in [0,T)\) with bounded geometry. If there is \(\varepsilon >0\) with \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) at \(t=0\), then \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) for all \(t\in [0,T)\).

Proof

By Lemma 4.2, we only need to remove the assumption \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \frac{\varepsilon }{2}\) in [0, T). By the bounded geometry assumption on \(F(x,t)\), the right-hand side of the evolution equation of \({{\mathrm{{tr}}}}({\mathrm {s}})\) is bounded, so that

$$\begin{aligned} \Vert \partial _t {{\mathrm{{tr}}}}({\mathrm {s}})\Vert \le C(t), \end{aligned}$$

where \(C(t)\) depends only on t. Since \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) at \(t=0\), it follows that there is a maximal time \(T_0>0\), such that \({{\mathrm{{tr}}}}({\mathrm {s}})>\frac{\varepsilon }{2}\) holds in \([0,T_0)\). From Lemma 4.2 we know that \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) on \({\mathbb {R}}^2\times [0,T_0)\). If \(T_0\ne T\), by continuity, we also know that \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) on \({\mathbb {R}}^2\times [0,T_0]\). By the same argument used to find \(T_0\), there is some positive \(T_0'\), such that \({{\mathrm{{tr}}}}({\mathrm {s}})>\frac{\varepsilon }{2}\) in \({\mathbb {R}}^2\times [T_0,T_0+T_0')\), where \([T_0,T_0+T_0')\subset [T_0,T)\). But this contradicts the maximality of \(T_0\), so that \(T_0=T\). \(\square \)

Lemma 4.4

Let \(F_t\) be the mean curvature flow of a smooth strictly area-decreasing map \(f:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\) with bounded geometry. Then each \(F_t({\mathbb {R}}^2)\) is the graph of a strictly area-decreasing map for \(t\in [0,T)\).

Proof

The proof is the same as [13, Proof of Proposition 3.3]. \(\square \)

4.2 A Priori Estimates

To obtain estimates for the mean curvature vector, let us define the function

Lemma 4.5

The evolution equation for \(\upchi \) under the mean curvature flow is given by

Proof

We calculate

Now, recall (see, e.g., [17, Corollary 3.8]) that the square norm of the mean curvature vector evolves by

which together with Lemma 3.6 implies the claim. \(\square \)

Lemma 4.6

Let \(F(x,t)\) be a smooth, graphic solution to (3.1) with bounded geometry and suppose \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon _1\) on [0, T) for some \(\varepsilon _1>0\). Then there is a constant \(C\ge 0\) depending only on \(\varepsilon _1\), such that

on \({\mathbb {R}}^2\times [0,T)\).

Proof

Fix \(0<\varepsilon _2 < \varepsilon _1\), so that \(\upchi \) is positive on \({\mathbb {R}}^2\times \{0\}\). Further, fix any \(T'\in [0,T)\). We will first show that we can choose \(R_0>0\), such that \(\upchi \ge 0\) on \({\mathbb {R}}^2\times [0,T']\) for all \(R\ge R_0\).

Suppose \(\upchi \) is not positive on \({\mathbb {R}}^2\times [0,T']\) for some \(R\ge R_0\). Then, as \(\upchi >0\) on \({\mathbb {R}}^2\times \{0\}\), \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon _1\) on [0, T), \(\phi _R(x)\rightarrow \infty \) as \(\Vert x\Vert \rightarrow \infty \) and by the bounded geometry condition (3.3), it follows that \(\upchi >0\) outside some compact set \(K\subset {\mathbb {R}}^2\) for all \(t\in [0,T']\). We conclude that there is \((x_0,t_0)\in K\times [0,T']\), such that \(\upchi (x_0,t_0)=0\) and that \(t_0\) is the first such time. By the second derivative criterion, at \((x_0,t_0)\) we have

$$\begin{aligned} \upchi = 0, \quad \nabla \upchi = 0, \quad \partial _t \upchi \le 0 \quad \text {and} \quad \Delta \upchi \ge 0. \end{aligned}$$
(4.3)

On the other hand, we estimate the terms in the evolution equation for \(\upchi \) from Lemma 4.5 at \((x_0,t_0)\). Using

and \(\phi _R\ge 1\) yields the estimate

where

Since , we derive

$$\begin{aligned} {\mathcal {A}}\ge 0. \end{aligned}$$
(4.4)

To estimate the terms in \(\mathcal {G}\), we want to exploit \(\nabla \upchi =0\) at \((x_0,t_0)\). This yields

and consequently

From \(\upchi (x_0,t_0)=0\) we get , so that

Noting \(\phi _R\ge 1\) and sorting the expression, we obtain

Thus, the gradient terms satisfy

(4.5)

Collecting the previous calculations and using \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon _1>0\) as well as \(\upchi (x_0,t_0)=0\), we estimate the evolution equation of \(\upchi \) at \((x_0,t_0)\) by

$$\begin{aligned} (\partial _t- \Delta ) \upchi&{\mathop {\ge }\limits ^{\text {Eqs.}\ 4.4, 4.5}} {\mathrm {e}}^{\sigma t_0} \Bigl \{ \sigma \phi _R {{\mathrm{{tr}}}}({\mathrm {s}}) - (\Delta \phi _R){{\mathrm{{tr}}}}({\mathrm {s}}) - \langle \nabla \phi _R, \nabla {{\mathrm{{tr}}}}({\mathrm {s}})\rangle \Bigr \} \\&\quad {\mathop {\ge }\limits ^{\text {Lem.}\ 4.1}}{\mathrm {e}}^{\sigma t_0} \left\{ \sigma \left( 1 + \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} \right) - c(T') \left( \frac{1}{R^2} + \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} \right) \right. \\&\left. \qquad \qquad \qquad \qquad - c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} \right\} {{\mathrm{{tr}}}}({\mathrm {s}}) \\&\qquad = {\mathrm {e}}^{\sigma t_0} \left\{ \sigma + \sigma \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} - 2 c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} - \frac{c(T')}{R^2} \right\} {{\mathrm{{tr}}}}({\mathrm {s}})\,. \end{aligned}$$

Now we choose \(R_0>0\) (depending on \(\sigma \) and \(T'\)) large enough, so that the term

$$\begin{aligned} \frac{\sigma }{2} + \sigma \frac{\Vert x_0\Vert ^2_{{\mathbb {R}}^2}}{R^2} - 2 c(T') \frac{\Vert x_0\Vert _{{\mathbb {R}}^2}}{R^2} - \frac{c(T')}{R^2} \end{aligned}$$

is strictly positive for any \(R\ge R_0\) and any \(\Vert x_0\Vert _{{\mathbb {R}}^2}\). Continuing with the above calculation, we obtain

$$\begin{aligned} (\partial _t- \Delta ) \upchi \ge \frac{\sigma }{2} {\mathrm {e}}^{\sigma t_0} {{\mathrm{{tr}}}}({\mathrm {s}}) > 0. \end{aligned}$$

But this is a contradiction to (4.3), which shows the claim.

By first letting \(R\rightarrow \infty \), then \(\sigma \rightarrow 0\) and finally \(T'\rightarrow T\), we have shown that

holds for all \(t\in [0,T)\). The statement of the Lemma follows by noting \({{\mathrm{{tr}}}}({\mathrm {s}})\le 2\), setting \(C:=\frac{2}{\varepsilon _2} - 1\) and recalling that \(\varepsilon _2\) only depends on \(\varepsilon _1\). \(\square \)

As in [2, 10], we continue by analyzing the non-parametric version of the mean curvature flow to obtain estimates on all higher derivatives of the map defining the graph. Most proofs are very similar to the ones in the cited articles, but need to be modified slightly to account for the weaker assumptions in the two-dimensional case.

Lemma 4.7

Let \(F:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) be a smooth, graphic solution to (3.1) with bounded geometry. Suppose the corresponding maps \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) satisfy \(\Vert {\mathrm {D}}f_t\Vert \le C_1\) and \(\Vert {\mathrm {D}}^2 f_t\Vert \le C_2\) on \({\mathbb {R}}^2\times [0,T)\) for some constants \(C_1,C_2\ge 0\). Then for every \(l\ge 3\), there is a constant \(C_l\), such that

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^l f_t(x)\Vert ^2 \le C_l \end{aligned}$$

for all \(t\in [0,T)\).

Proof

The proof is essentially the same as [2, Proof of Lemma 4.2] (see also [10, Lemma 5.4] with \(m=n=2\)). \(\square \)

Lemma 4.8

Let \(F:{\mathbb {R}}^2\times [0,T)\rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) be a smooth, graphic solution to (3.1) with bounded geometry and denote by \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) the corresponding maps. Assume the condition \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) holds for a fixed \(\varepsilon >0\) at time \(t=0\). Further assume that on \({\mathbb {R}}^2\times [0,T)\) for some constant \(C\ge 0\). Then for every \(l\ge 1\), there is a constant \(C_l\ge 0\), such that

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^l f_t(x)\Vert ^2 \le C_l \end{aligned}$$

for all \(t\in [0,T)\).

Proof

By Lemma 4.3, the area-decreasing condition is preserved in [0, T), so that the relation \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) holds in [0, T). Since \(\varepsilon \) is strictly positive, from

$$\begin{aligned} {{\mathrm{{tr}}}}({\mathrm {s}}) = \frac{2(1-\lambda _1^2\lambda _2^2)}{(1+\lambda _1^2)(1+\lambda _2^2)} \ge \varepsilon > 0 \end{aligned}$$

we infer

$$\begin{aligned} \varepsilon (1 + \lambda _i^2) \le \frac{2(1-\lambda _1^2\lambda _2^2)}{1+\lambda _j^2} \le 2, \quad (i,j) \in \bigl \{ (1,2), (2,1) \bigr \}, \end{aligned}$$
(4.6)

so that the singular values \(\lambda _1,\lambda _2\) of \({\mathrm {D}}f_t\) are bounded. This also means that \({\mathrm {D}}f_t\) itself is bounded, thus showing the claim for \(l=1\).
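The passage from \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \) to the bound (4.6) can be checked numerically. The sketch below (Python; the value of \(\varepsilon \) is illustrative) samples pairs of singular values and verifies that the trace condition forces \(\varepsilon (1+\lambda _i^2)\le 2\):

```python
import random

def tr_s(l1, l2):
    """tr(s) expressed through the singular values of Df (formula above)."""
    return 2 * (1 - l1**2 * l2**2) / ((1 + l1**2) * (1 + l2**2))

random.seed(0)
eps = 0.3                     # illustrative area-decreasing constant
for _ in range(10000):
    l1, l2 = random.uniform(0, 3), random.uniform(0, 3)
    if tr_s(l1, l2) >= eps:
        # (4.6): tr(s) >= eps forces eps * (1 + lambda_i^2) <= 2
        assert eps * (1 + l1**2) <= 2 + 1e-12
        assert eps * (1 + l2**2) <= 2 + 1e-12
```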

By Lemma 4.7, we now only need to prove the case \(l=2\). Suppose the claim were false for \(l=2\). Let

$$\begin{aligned} \eta (t) :=\sup _{\begin{array}{c} x\in {\mathbb {R}}^2\\ t'\le t \end{array}} \Vert {\mathrm {D}}^2 f(x,t')\Vert . \end{aligned}$$

Then there is a sequence \((x_k,t_k)\) along which we have \(\Vert {\mathrm {D}}^2 f(x_k,t_k)\Vert \ge \eta (t_k)/2\) while \(\eta (t_k)\rightarrow \infty \) as \(t_k\rightarrow T\). Let \(\tau _k:=\eta (t_k)\). For each k, let \(\bigl (y,\widetilde{f}_{\tau _k}(y, r)\bigr )\) be the parabolic scaling of the graph \(\bigl (x,f(x,t)\bigr )\) by \(\tau _k\) at \((x_k,t_k)\). Then \(\widetilde{f}_{\tau _k}(y, r)\) is a smooth solution to (3.2) on \({\mathbb {R}}^2\times [-\tau _k^2 t_k,0]\). Note that by the definition \(\tau _k=\eta (t_k)\), we have

$$\begin{aligned} \Vert \widetilde{{\mathrm {D}}}\widetilde{f}_{\tau _k}\Vert&= \Vert {\mathrm {D}}f\Vert \le C_1, \\ \Vert \widetilde{{\mathrm {D}}}^2 \widetilde{f}_{\tau _k}\Vert&= \tau _k^{-1} \Vert {\mathrm {D}}^2 f\Vert \le 1 \end{aligned}$$

on \({\mathbb {R}}^2\times [-\tau _k^2 t_k,0]\). Moreover, by the definition of the sequence \((x_k,t_k)\), the estimate

$$\begin{aligned} \Vert \widetilde{{\mathrm {D}}}^2 \widetilde{f}_{\tau _k}(0,0)\Vert = \frac{\Vert {\mathrm {D}}^2 f(x_k,t_k)\Vert }{\tau _k} = \frac{ \Vert {\mathrm {D}}^2 f(x_k,t_k)\Vert }{\eta (t_k)} \ge \frac{1}{2} \end{aligned}$$
(4.7)

holds. By Lemma 4.7, we conclude that all the higher derivatives of \(\widetilde{f}_{\tau _k}\) are uniformly bounded on \({\mathbb {R}}^2\times [-\tau _k^2 t_k,0]\). Thus, the theorem of Arzelà–Ascoli implies the existence of a subsequence of \(\widetilde{f}_{\tau _k}\) converging smoothly and uniformly on compact subsets of \({\mathbb {R}}^2\times (-\infty ,0]\) to a smooth solution \(\widetilde{f}_{\infty }\) to (3.2). Since for the graphs \(\bigl (x,f(x,t)\bigr )\) by assumption, after rescaling we have

for the graphs \(\bigl ( y, \widetilde{f}_{\tau _k}(y, r) \bigr )\). It follows that for each r the limiting graph \(\bigl ( y, \widetilde{f}_{\infty }(y, r) \bigr )\) must have everywhere, as well as \({{\mathrm{{tr}}}}({\mathrm {s}})\ge \varepsilon \). Note that by Eq. (4.6), this implies bounds for the singular values \(\widetilde{\lambda }_1,\widetilde{\lambda }_2\) of the limiting graph,

$$\begin{aligned} 1+\widetilde{\lambda }_k^2 \le \frac{2}{\varepsilon }, \quad k=1,2. \end{aligned}$$

It follows that we can estimate the Jacobian of the projection \(\pi _1\) from the graph \(\bigl ( y, \widetilde{f}_{\infty }(y, r) \bigr )\) to \({\mathbb {R}}^2\),

$$\begin{aligned} 0 < \frac{\varepsilon }{2} \le \star \varOmega _{\infty } = \frac{1}{\sqrt{(1+\widetilde{\lambda }_1^2)(1+\widetilde{\lambda }_2^2)}} \le 1. \end{aligned}$$

Thus, we can apply a Bernstein-type theorem of Wang [23, Theorem 1.1] to conclude that the graph \(\bigl (y,\widetilde{f}_{\infty }(y, r)\bigr )\) is an affine subspace of \({\mathbb {R}}^2\times {\mathbb {R}}^2\). Therefore, \(\widetilde{f}_{\infty }(y, r)\) has to be a linear map, but this contradicts (4.7), which (taking the limit \(k\rightarrow \infty \)) implies the estimate \(\Vert \widetilde{{\mathrm {D}}}^2 \widetilde{f}_{\infty }(0,0)\Vert \ge 1/2\). \(\square \)
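The lower bound on the projection Jacobian used above follows from (4.6) alone, and can be verified numerically. In the sketch below (Python; \(\varepsilon \) illustrative), singular values obeying \(1+\widetilde{\lambda }_k^2\le 2/\varepsilon \) always yield a Jacobian between \(\varepsilon /2\) and 1:

```python
import math, random

def star_omega(l1, l2):
    """Jacobian of the projection pi_1 in terms of the singular values."""
    return 1.0 / math.sqrt((1 + l1**2) * (1 + l2**2))

random.seed(1)
eps = 0.3                               # illustrative constant
lmax = math.sqrt(2 / eps - 1)           # from 1 + lambda^2 <= 2/eps
for _ in range(10000):
    l1, l2 = random.uniform(0, lmax), random.uniform(0, lmax)
    assert eps / 2 - 1e-12 <= star_omega(l1, l2) <= 1.0
```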

Lemma 4.9

Suppose \(f(x,t)\) is a smooth solution to (3.2) on [0, T) that satisfies the bounded geometry condition. Then

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \Vert f(x,t)\Vert ^2 \le \sup _{x\in {\mathbb {R}}^2} \Vert f(x,0)\Vert ^2 \end{aligned}$$

holds for all \(t\in [0,T']\), where \(T'\in [0,T)\) is arbitrary.

Proof

This is [10, Lemma 5.6] with \(m=n=2\). \(\square \)

5 Proof of Theorem A

Using Lemma 4.3 and the estimates from the Lemmas 4.6, 4.7 and 4.8, the proof of the long-time existence of the solution is the same as in [2, Lemma 5.2]. By Lemma 4.4, the evolving surface stays a graph of an area-decreasing map \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\). The decay estimate for is given in Lemma 4.6.

Employing Lemma 4.3, the bounds on the singular values from Eq. (4.6), and the decay estimates from Lemmas 4.6, 4.7, and 4.8, the proof of the decay estimates for the higher-order derivatives of \(f_t\) follows in the same way as in [10, Lemma 6.3]. The height estimate is provided by Lemma 4.9.

If we assume \(\Vert f_0\Vert \rightarrow 0\) as \(\Vert x\Vert \rightarrow \infty \), we know by Lemma 4.9 that \(\sup _{x\in {\mathbb {R}}^2}\Vert f(x,t)\Vert \) stays bounded. As the singular values \(\lambda _1,\lambda _2\ge 0\) are uniformly bounded, so is \(\widetilde{{\mathrm {g}}}\), which means the equation

$$\begin{aligned} \frac{\partial f}{\partial t}(x,t) = \sum _{i,j=1}^2 \widetilde{{\mathrm {g}}}^{ij} \frac{\partial ^2 f}{\partial x^i \partial x^j}(x,t) \end{aligned}$$

is uniformly parabolic. Then, by the theorem in [8], \(f(x,t)\rightarrow 0\) as \(t\rightarrow \infty \), uniformly with respect to x. This shows the convergence part of Theorem A and concludes the proof. \(\square \)
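Uniform parabolicity can be made concrete. Assuming the standard graph parametrization, \(\widetilde{{\mathrm {g}}}_{ij} = \delta _{ij} + \sum _k \partial _i f^k \partial _j f^k\) has eigenvalues \(1+\lambda _i^2\), so the bound \(1+\lambda _i^2\le 2/\varepsilon \) from (4.6) pins the eigenvalues of \(\widetilde{{\mathrm {g}}}^{ij}\) to \([\varepsilon /2,\,1]\). A numerical sketch (Python; \(\varepsilon \) illustrative):

```python
import math, random

def inverse_metric_eigs(l1, l2):
    """Eigenvalues of the inverse metric: g_ij = delta_ij + (Df)^T Df has
    eigenvalues 1 + lambda_i^2, so its inverse has 1/(1 + lambda_i^2)."""
    return 1 / (1 + l1**2), 1 / (1 + l2**2)

random.seed(2)
eps = 0.3                               # illustrative constant
lmax = math.sqrt(2 / eps - 1)           # singular-value bound from (4.6)
for _ in range(10000):
    l1, l2 = random.uniform(0, lmax), random.uniform(0, lmax)
    for mu in inverse_metric_eigs(l1, l2):
        # ellipticity constants depend only on eps, uniformly in x and t
        assert eps / 2 - 1e-12 <= mu <= 1.0
```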

6 Applications

We demonstrate how to apply Theorem A to the examples considered in [10, Sect. 9]. Note that both proofs are formally the same as [10, Proofs of Examples 9.3 and 9.4], and we state them here for completeness.

Let \(F_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\times {\mathbb {R}}^2\) be a graphical self-shrinking solution to the mean curvature flow, and denote by \(f_t:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) the corresponding map. Then \(f_{-1}\) satisfies the equation

$$\begin{aligned} \sum _{i,j=1}^2 \widetilde{{\mathrm {g}}}^{ij} \frac{\partial ^2 f_{-1}^k(x)}{\partial x^i \partial x^j} = - \frac{1}{2} f^k_{-1}(x) + \frac{1}{2} \langle {\mathrm {D}}f_{-1}^k(x), x \rangle , \quad k=1,2. \end{aligned}$$
(6.1)

If \(F_t\) is a translating solution to the mean curvature flow, then there is \(\xi \in {\mathbb {R}}^2\times {\mathbb {R}}^2\), such that . If the initial data are given by \(F_0(x) = (x,f(x))\), then for \(F_t\) to be a translating solution the function f has to satisfy

$$\begin{aligned} \sum _{i,j=1}^2 \widetilde{{\mathrm {g}}}^{ij} \frac{\partial ^2 f(x)}{\partial x^i \partial x^j} = {\mathrm {d}}\pi _2(\xi ) - \langle {\mathrm {D}}f(x), {\mathrm {d}}\pi _1(\xi ) \rangle . \end{aligned}$$
(6.2)

Example 6.1

(A Bernstein Theorem for Self-Shrinking Solutions) Let \(v:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) be a strictly area-decreasing map with bounded geometry and satisfying (6.1). Then v is a linear function.

Proof

Since v is a smooth solution to (6.1), the function

$$\begin{aligned} f_t(x) :=\sqrt{-t} v\left( \frac{x}{\sqrt{-t}} \right) \end{aligned}$$

is a solution to (3.2) for \(t\in (-\infty ,0]\) and \(f_{-1}(x) = v(x)\). Since this solution is unique by [4, Theorem 1.1], we can apply Theorem A. In particular, \(\Vert {\mathrm {D}}^2 f_t(x)\Vert \le C\) holds for some constant C, for all \(t\ge -1\) and all x. Since also

$$\begin{aligned} {\mathrm {D}}^2 f_t(x) = \frac{1}{\sqrt{-t}} {\mathrm {D}}^2 v\left( \frac{x}{\sqrt{-t}} \right) , \end{aligned}$$

we obtain the estimate

$$\begin{aligned} \Vert {\mathrm {D}}^2v(x)\Vert \le C\sqrt{-t} \end{aligned}$$

for any x. Letting \(t\rightarrow 0\), this implies \({\mathrm {D}}^2v(x) = 0\), so that v is a linear function. \(\square \)
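The scaling identity \({\mathrm {D}}^2 f_t(x) = \frac{1}{\sqrt{-t}}{\mathrm {D}}^2 v\bigl (x/\sqrt{-t}\bigr )\) is pure calculus and can be checked by finite differences. The sketch below (Python) uses an arbitrary smooth sample function in place of \(v\); it is not a solution of (6.1), which is irrelevant for the chain-rule identity:

```python
import math

def v(x):
    # arbitrary smooth sample function standing in for one component of v
    return math.sin(x) + 0.5 * x**2

def d2(g, x, h=1e-4):
    """Central second difference approximating g''(x)."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

t = -0.25
s = math.sqrt(-t)
f_t = lambda x: s * v(x / s)            # the self-shrinker rescaling of v

x = 0.7
lhs = d2(f_t, x)                        # D^2 f_t (x)
rhs = (1 / s) * d2(v, x / s)            # (1/sqrt(-t)) D^2 v (x/sqrt(-t))
assert abs(lhs - rhs) < 1e-4
```

As \(t\rightarrow 0^-\) the factor \(\sqrt{-t}\) in the estimate \(\Vert {\mathrm {D}}^2 v\Vert \le C\sqrt{-t}\) collapses, which is exactly how the proof forces \({\mathrm {D}}^2 v = 0\).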

Example 6.2

(A Bernstein Theorem for Translating Solutions) Let \(v:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) be a strictly area-decreasing map with bounded geometry and satisfying (6.2). Then v is a linear function.

Proof

If v solves (6.2), then there is a constant vector \(\xi \in {\mathbb {R}}^2\times {\mathbb {R}}^2\), such that

$$\begin{aligned} f_t(x) :=v\bigl ( x - {\mathrm {d}}\pi _1(\xi )t \bigr ) + {\mathrm {d}}\pi _2(\xi )t \end{aligned}$$

solves (3.2) with initial condition \(f_0(x) = v(x)\).

On the other hand, by Theorem A there is a long-time solution \(f_t(x)\) to (3.2) with initial condition \(f_0\) which satisfies \(\sup _{x\in {\mathbb {R}}^2}\Vert {\mathrm {D}}^2f_t(x) \Vert \rightarrow 0\) as \(t\rightarrow \infty \). By the uniqueness result [4, Theorem 1.1],

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^2} \bigl \Vert {\mathrm {D}}^2 v\bigl ( x - {\mathrm {d}}\pi _1(\xi )t \bigr ) \bigr \Vert = \sup _{x\in {\mathbb {R}}^2} \bigl \Vert {\mathrm {D}}^2 f_t(x) \bigr \Vert \rightarrow 0 \end{aligned}$$

as \(t\rightarrow \infty \). We conclude that \(\sup _{x\in {\mathbb {R}}^2} \Vert {\mathrm {D}}^2v(x)\Vert = 0\), so v must be linear. \(\square \)