1 Introduction

In this paper, we are concerned with the asymptotic Plateau type problem in hyperbolic space \({\mathbb {H}}^{n+1}\): to find a complete strictly locally convex hypersurface \(\Sigma \) with prescribed curvature and asymptotic boundary at infinity. For hyperbolic space, we will use the half-space model

$$\begin{aligned}{\mathbb {H}}^{n+1} = \,\{ (x, x_{n+1}) \in {\mathbb {R}}^{n+1} \,\big \vert \,x = (x_1, \ldots , x_n) \in {\mathbb {R}}^n, \, x_{n+1} > 0 \} \end{aligned}$$

equipped with the hyperbolic metric

$$\begin{aligned} d s^2 = \frac{1}{x_{n+1}^2} \,\sum _{i = 1}^{n+1} \, d x_i^2. \end{aligned}$$

The ideal boundary at infinity of \({\mathbb {H}}^{n+1}\) can be identified with

$$\begin{aligned}\partial _{\infty } {\mathbb {H}}^{n+1} = {\mathbb {R}}^n = {\mathbb {R}}^n \times \{0\} \,\subset {\mathbb {R}}^{n+1}\end{aligned}$$

and the asymptotic boundary \(\Gamma \) of \(\Sigma \) lies on \(\partial _{\infty } {\mathbb {H}}^{n+1}\); it consists of a disjoint collection of smooth, closed, embedded \((n - 1)\)-dimensional submanifolds \(\{ \Gamma _1, \ldots , \Gamma _m \}\). Given a positive function \(\psi \in C^{\infty } ({\mathbb {H}}^{n+1})\), we are interested in finding a complete strictly locally convex hypersurface \(\Sigma \) in \({\mathbb {H}}^{n+1}\) satisfying the curvature equation

$$\begin{aligned} f (\kappa ) = \,\sigma _k^{1/k} ( \kappa ) = \psi ^{1/k}( \mathbf{x} ) \end{aligned}$$
(1.1)

as well as with the asymptotic boundary

$$\begin{aligned} \partial \Sigma = \Gamma , \end{aligned}$$
(1.2)

where \(\mathbf{x}\) is a conformal Killing field which will be specified in Sect. 6, \(\kappa = (\kappa _1, \ldots , \kappa _n)\) are the hyperbolic principal curvatures of \(\Sigma \) at \(\mathbf{x}\), and

$$\begin{aligned}\sigma _k (\lambda ) = \sum \limits _{1 \le i_1< \ldots < i_k \le n} \,\lambda _{i_1} \cdots \lambda _{i_k}\end{aligned}$$

is the k-th elementary symmetric function defined on the k-th Gårding cone

$$\begin{aligned}\Gamma _k \equiv \{ \lambda \in {\mathbb {R}}^n \vert \, \sigma _j (\lambda ) > 0,\,\, j = 1, \ldots , k \}. \end{aligned}$$

\(\sigma _k (\kappa )\) is the so-called k-th Weingarten curvature of \(\Sigma \). In particular, the 1st, 2nd and n-th Weingarten curvatures correspond to the mean curvature, scalar curvature and Gauss curvature respectively. We call a hypersurface \(\Sigma \) strictly locally convex (resp. locally convex) if all principal curvatures at every point of \(\Sigma \) are positive (resp. nonnegative).
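For concreteness, the definitions of \(\sigma _k\) and of membership in \(\Gamma _k\) can be evaluated with a small script (illustrative only, not part of the paper); the sample point below has one negative entry and lies in \(\Gamma _2\) but not in \(\Gamma _3\):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    """k-th elementary symmetric function sigma_k(lam)."""
    return sum(prod(c) for c in combinations(lam, k))

def in_garding_cone(k, lam):
    """lam lies in the k-th Garding cone iff sigma_j(lam) > 0 for j = 1, ..., k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

lam = (3.0, 3.0, -1.0)   # sample point with one negative entry
print([sigma(j, lam) for j in (1, 2, 3)])                  # [5.0, 3.0, -9.0]
print(in_garding_cone(2, lam), in_garding_cone(3, lam))    # True False
```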

In this paper, all hypersurfaces are assumed to be connected and orientable. We will see from Lemma 2.1 that a strictly locally convex hypersurface in \({\mathbb {H}}^{n+1}\) with compact (asymptotic) boundary must be a vertical graph over a bounded domain in \({\mathbb {R}}^n\). We thus assume the normal vector field on \(\Sigma \) to be upward. Write

$$\begin{aligned} \Sigma =\, \{ ( x, \,u(x) ) \in {\mathbb {R}}^{n+1}_+ \,\big \vert \, x \in \Omega \}, \end{aligned}$$

where \(\Omega \) is the bounded domain on \(\partial _{\infty } {\mathbb {H}}^{n+1} = {\mathbb {R}}^n\) enclosed by \(\Gamma \). Consequently, (1.1)–(1.2) can be expressed in terms of u,

$$\begin{aligned} \left\{ \begin{aligned} f ( \kappa [\, u \,] )\, =&\,\,\, \psi ^{\frac{1}{k}}(x,\, u) \quad \quad&\text{ in } \quad \Omega , \\ u \, =&\,\,\, 0 \quad \quad&\text{ on } \quad \Gamma . \end{aligned} \right. \end{aligned}$$
(1.3)

The essential difficulty in the Plateau type problem (1.3) is the singularity at \(u = 0\). When \(\psi \) is a positive constant, problem (1.3) has been extensively investigated in [1,2,3,4,5] (see also the references therein for earlier work). The basic idea there is first to prove the existence of a solution \(u^{\epsilon }\) to the approximate Dirichlet problem

$$\begin{aligned} \left\{ \begin{aligned} f ( \kappa [\, u \,] )\, =&\,\,\, \psi ^{\frac{1}{k}}(x,\, u) \quad \quad&\text{ in } \quad \Omega , \\ u \, =&\,\,\, \epsilon \quad \quad&\text{ on } \quad \Gamma , \end{aligned} \right. \end{aligned}$$
(1.4)

and then to show that these \(u^{\epsilon }\) converge to a solution of (1.3) after passing to a subsequence. For general \(\psi \), Szapiel [6] studied the existence of strictly locally convex solutions to (1.4), intending to cover \(f = \sigma _n^{1/n}\), but imposed a very strong assumption on f (see (1.11) in [6]) which in fact excluded the case \(f = \sigma _n^{1/n}\). As far as the author knows, there is no literature giving an existence result for the asymptotic Plateau type problem (1.3) for general \(\psi \).

Our first task in this paper is to improve the result of [6]. As in [7], we assume the existence of a strictly locally convex subsolution \({\underline{u}} \in C^4(\Omega )\), that is,

$$\begin{aligned} \left\{ \begin{aligned} f ( \kappa [\, {\underline{u}} \,] )\, \ge&\,\,\, \psi ^{\frac{1}{k}}(x, \,{\underline{u}} ) \quad \quad&\text{ in } \quad \Omega , \\ {\underline{u}} \, =&\,\,\, 0 \quad \quad&\text{ on } \quad \Gamma . \end{aligned} \right. \end{aligned}$$
(1.5)

In contrast to [2,3,4,5,6], we adopt a new approximate Dirichlet problem

$$\begin{aligned} \left\{ \begin{aligned} f ( \kappa [\, u \,] )\, =&\,\,\, \psi ^{\frac{1}{k}}(x,\, u) \quad \quad&\text{ in } \quad \Omega _{\epsilon }, \\ u \, =&\,\,\, \epsilon \quad \quad&\text{ on } \quad \Gamma _{\epsilon }, \end{aligned} \right. \end{aligned}$$
(1.6)

where the \(\epsilon \)-level set of \({\underline{u}}\) and its enclosed region in \({\mathbb {R}}^n\) are respectively

$$\begin{aligned} \Gamma _{\epsilon } = \,\{ x \in \Omega \,\big \vert \,\, {\underline{u}}(x) \, = \,\epsilon \, \} \quad \text{ and } \quad \Omega _{\epsilon } = \,\{ x \in \Omega \,\big \vert \,\, {\underline{u}}(x) \, > \,\epsilon \, \}. \end{aligned}$$

By Sard's theorem, we may assume that \(\Gamma _{\epsilon }\) is an \((n - 1)\)-dimensional submanifold and, in addition, that \(\Gamma _{\epsilon } \in C^4\).

A crucial step in proving the existence of a strictly locally convex solution to (1.6) is to establish second order a priori estimates for strictly locally convex solutions u of (1.6) satisfying \(u \ge {\underline{u}}\) on \(\Omega _{\epsilon }\). An essential difference from [2,3,4,5] is that we allow the \(C^2\) bound to depend on \(\epsilon \). This looser requirement gives us more flexibility to apply techniques for general Dirichlet problems with fewer technical assumptions (for example, there is no prescribed upper bound for \(\psi \)). For the \(C^2\) boundary estimates, we change the variable from u to v by \(u = \sqrt{v}\) (see [8] for a similar idea for radial graphs); this is the main difference from [2, 6] and fundamentally improves the result in [6].

One reason that we study only strictly locally convex hypersurfaces is the \(C^2\) boundary estimates. In [3], Guan-Spruck assumed \(\Gamma \) to be mean convex. The solution u then behaves nicely near \(\Gamma \), and therefore k-admissible solutions can be studied in their framework. However, without any geometric assumptions on \(\Gamma _{\epsilon }\), \(C^2\) boundary estimates can only be obtained for strictly locally convex hypersurfaces.

In order to apply the continuity method and degree theory to prove the existence of a strictly locally convex solution to (1.6), the strict local convexity has to be preserved during the continuity process. This is true when \(k = n\) in view of the nondegeneracy of (1.6), while for \(1 \le k < n\), we have to impose certain assumptions on \(\Omega \), \({\underline{u}}\) and \(\psi \) to guarantee that the second fundamental form has full rank on locally convex \(\Sigma \) up to the boundary. In this paper, we wish to apply the constant rank theorem developed in [9,10,11] to Dirichlet boundary value problems under a subsolution assumption. For this, we assume

$$\begin{aligned}&\left\{ \Big ( \frac{{\underline{u}}}{f(\kappa [{\underline{u}}])} \Big )_{x_{\alpha } x_{\beta }} \right\} _{n \times n} \,\ge 0, \end{aligned}$$
(1.7)
$$\begin{aligned}&\left( \begin{array}{cc} \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{x_\beta }}{\psi } - \psi _{x_{\alpha } x_{\beta }} - \frac{k \psi }{u^2} \delta _{\alpha \beta } + \frac{\psi _u}{u} \delta _{\alpha \beta } &{} \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{u}}{\psi } - \psi _{x_{\alpha } u} - \frac{\psi _{x_\alpha }}{u} \\ \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{u}}{\psi } - \psi _{x_{\alpha } u} - \frac{\psi _{x_\alpha }}{u} &{} \frac{k+1}{k} \frac{\psi _{u}^2}{\psi } - \psi _{u u} - \frac{k \,\psi }{u^2} - \frac{\psi _u}{u} \\ \end{array} \right) \,\ge 0. \end{aligned}$$
(1.8)

Besides, we also need a condition guaranteeing that locally convex solutions to the equations associated with (1.6) are strictly locally convex near the boundary \(\Gamma _{\epsilon }\). However, we have not found such a condition. Therefore, our existence results are limited to \(k = n\).

Theorem 1.1

Under the subsolution condition (1.5), for \(k = n\), there exists a smooth strictly locally convex solution \(u^{\epsilon }\) to the Dirichlet problem (1.6) with \(u^{\epsilon } \ge {\underline{u}}\) in \(\Omega _{\epsilon }\).

Our second task in this paper is to solve (1.3). A central issue is to provide a uniform \(C^2\) bound for \(u^\epsilon \). Different from [2,3,4,5], where the authors derived uniform bounds for certain quantities regarding solutions of (1.4) under certain assumptions, we use (1.6) as an approximate Dirichlet problem and tolerate the \(\epsilon \)-dependent \(C^2\) bound for solutions to (1.6), since we are able to use the idea of Guan-Qiu [12], who established interior \(C^2\) estimates for convex hypersurfaces with prescribed scalar curvature in \({\mathbb {R}}^{n+1}\). We extend their estimates to \({\mathbb {H}}^{n+1}\), which, together with the Evans-Krylov interior estimates (see [13, 14]) and a standard diagonal process, leads to the following existence result. Since pure interior \(C^2\) estimates can only be derived up to scalar curvature equations (see Pogorelov [15] and Urbas [16] for counterexamples when \(k \ge 3\)), we hope to investigate the cases \(k \ge 3\) in future work by other means. Meanwhile, interior \(C^2\) estimates are limited to hypersurfaces satisfying a certain convexity property (see [12]), which also explains why we focus only on strictly locally convex hypersurfaces.

Theorem 1.2

In \({\mathbb {H}}^3\), for \(f = \sigma _2^{1/2}\), under the subsolution condition (1.5), there exists a smooth strictly locally convex solution \(u \ge {\underline{u}}\) to (1.3) on \(\Omega \), equivalently, there exists a smooth complete strictly locally convex vertical graph solving (1.1)–(1.2).

This paper is organized as follows: in Sect. 2, we provide some basic formulae, properties and calculations for vertical graphs. The \(C^2\) estimates for strictly locally convex solutions of (1.6) are presented in Sects. 3 and 4. In Sect. 5, we prove Theorem 1.1 via the continuity method and degree theory. Section 6 provides the interior \(C^2\) estimates for convex solutions to prescribed scalar curvature equations in \({\mathbb {H}}^{n+1}\), which finishes the proof of Theorem 1.2.

2 Vertical graphs

Suppose \(\Sigma \) is locally represented as the graph of a positive \(C^2\) function over a domain \(\Omega \subset {\mathbb {R}}^n\):

$$\begin{aligned} \Sigma =\, \{ ( x, \,u(x) ) \in {\mathbb {R}}^{n+1}_+ \,\big \vert \, x \in \Omega \}. \end{aligned}$$

Since the coordinate vector fields on \(\Sigma \) are

$$\begin{aligned} \partial _i + u_i \,\partial _{n + 1}, \quad \quad i = 1, \ldots , n \quad \text{ where } \quad \partial _i = \frac{\partial }{\partial x_i}, \end{aligned}$$

the upward Euclidean unit normal vector field to \(\Sigma \), the Euclidean metric, its inverse and the Euclidean second fundamental form of \(\Sigma \) are given respectively by

$$\begin{aligned}\nu = \Big ( \frac{- D u}{w},\, \frac{1}{w} \Big ), \quad \quad w = \sqrt{ 1 + |D u |^2}, \end{aligned}$$
$$\begin{aligned} {\tilde{g}}_{ij} = \delta _{ij} + u_i u_j, \quad \quad {\tilde{g}}^{ij} = \delta _{ij} - \frac{u_i u_j}{w^2}, \quad \quad \tilde{h}_{ij} = \frac{u_{ij}}{w}. \end{aligned}$$

Consequently, the Euclidean principal curvatures \({\tilde{\kappa }} [ \Sigma ]\) are the eigenvalues of the symmetric matrix:

$$\begin{aligned} {\tilde{a}}_{ij} := \frac{1}{w} \gamma ^{ik} u_{kl} \gamma ^{lj}, \end{aligned}$$

where

$$\begin{aligned} \gamma ^{ik} = \delta _{ik} - \frac{u_i u_k}{w ( 1 + w )} \end{aligned}$$

and its inverse

$$\begin{aligned} \gamma _{ik} = \delta _{ik} + \frac{u_i u_k}{1 + w}, \quad \quad \gamma _{ik} \gamma _{kj} = {\tilde{g}}_{ij}. \end{aligned}$$
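The relations \(\gamma _{ik} \gamma _{kj} = {\tilde{g}}_{ij}\) and \(\gamma ^{ik} \gamma _{kj} = \delta _{ij}\) are elementary to verify; as a numerical sanity check (illustrative only, not part of the paper), on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Du = rng.standard_normal(n)                  # gradient (u_1, ..., u_n)
w = np.sqrt(1.0 + Du @ Du)

g_tilde = np.eye(n) + np.outer(Du, Du)                       # induced Euclidean metric
gamma_lo = np.eye(n) + np.outer(Du, Du) / (1.0 + w)          # gamma_{ik}
gamma_up = np.eye(n) - np.outer(Du, Du) / (w * (1.0 + w))    # gamma^{ik}

assert np.allclose(gamma_lo @ gamma_lo, g_tilde)    # gamma_{ik} gamma_{kj} = g~_{ij}
assert np.allclose(gamma_up @ gamma_lo, np.eye(n))  # gamma^{ik} is the inverse of gamma_{ik}
```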

For geometric quantities in hyperbolic space, we first note that the upward hyperbolic unit normal vector field to \(\Sigma \) is

$$\begin{aligned} \mathbf{n} = u \,\nu = \, u \,\Big ( \frac{- D u}{w}, \,\,\frac{1}{w} \Big ) \end{aligned}$$

and the hyperbolic metric of \(\Sigma \) is

$$\begin{aligned} g_{ij} = \frac{1}{u^2}\, ( \delta _{ij} + u_i u_j ). \end{aligned}$$
(2.1)

To compute the hyperbolic second fundamental form \(h_{ij}\) of \(\Sigma \), applying the Christoffel symbols in \({\mathbb {H}}^{n + 1}\),

$$\begin{aligned} {\varvec{\Gamma }}_{ij}^k = \,\frac{1}{x_{n + 1}} \big (- \delta _{ik} \delta _{n + 1\, j} - \delta _{kj} \delta _{n + 1 \,i} + \delta _{k\, n + 1} \delta _{ij} \big ), \end{aligned}$$
(2.2)

we obtain

$$\begin{aligned} \mathbf{D}_{\partial _i + u_i \partial _{n + 1}} \big ( \partial _j + u_j \,\partial _{n + 1} \big ) = \, - \frac{u_j}{x_{n + 1}}\, \partial _i - \frac{u_i}{x_{n + 1}} \,\partial _j + \Big ( \frac{\delta _{ij}}{x_{n + 1}} + u_{ij} - \frac{u_i u_j}{x_{n + 1}} \Big )\, \partial _{n + 1}, \end{aligned}$$

where \(\mathbf{D}\) denotes the Levi-Civita connection in \({\mathbb {H}}^{n+1}\). Therefore,

$$\begin{aligned} h_{ij} = \frac{1}{u^2 w} ( \delta _{ij} + u_i u_j + u u_{ij} ). \end{aligned}$$
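As a sanity check, the Christoffel symbols (2.2) can be recomputed directly from the hyperbolic metric with a short sympy script (an illustration, not part of the paper; here \(n = 2\), with 0-based indexing so that index n plays the role of \(n + 1\)):

```python
import sympy as sp

n = 2  # ambient space H^{n+1} with n = 2
x = sp.symbols('x1 x2 x3', positive=True)   # x3 = x_{n+1} > 0
g = sp.diag(*([1 / x[n]**2] * (n + 1)))     # hyperbolic metric ds^2 = dx^2 / x_{n+1}^2
ginv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} computed from the metric by the standard formula."""
    return sp.simplify(sum(ginv[k, l] * (sp.diff(g[j, l], x[i])
            + sp.diff(g[i, l], x[j]) - sp.diff(g[i, j], x[l]))
            for l in range(n + 1)) / 2)

def formula(k, i, j):
    """Closed form (2.2), with index n standing for n+1."""
    d = lambda a, b: 1 if a == b else 0
    return (-d(i, k) * d(n, j) - d(k, j) * d(n, i) + d(k, n) * d(i, j)) / x[n]

assert all(sp.simplify(christoffel(k, i, j) - formula(k, i, j)) == 0
           for k in range(n + 1) for i in range(n + 1) for j in range(n + 1))
```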

The hyperbolic principal curvatures \(\kappa [ \Sigma ]\) are the eigenvalues of the symmetric matrix \(A [u] = \{ a_{ij} \}\):

$$\begin{aligned} a_{ij} = \, u^2 \gamma ^{ik} h_{kl} \gamma ^{lj} = \,\frac{1}{w} \,\gamma ^{ik} ( \delta _{kl} + u_k u_l + u u_{kl} ) \,\gamma ^{lj} = \, \frac{1}{w} ( \delta _{ij} + u \gamma ^{ik} u_{kl} \gamma ^{lj} ). \end{aligned}$$

Remark 2.1

The graph of u is strictly locally convex if and only if the symmetric matrix \(\{ a_{ij} \}\), or equivalently \(\{ h_{ij} \}\) or \(\{ \delta _{ij} + u_i u_j + u u_{ij} \}\), is positive definite.

Remark 2.2

From the above discussion, we can see that

$$\begin{aligned} h_{ij} = \frac{1}{u}\,\tilde{h}_{ij} + \frac{\nu ^{n+1}}{u^2}\, {\tilde{g}}_{ij}, \end{aligned}$$
(2.3)

where \(\nu ^{n+1} = \nu \cdot \partial _{n + 1}\) and \(\cdot \) is the inner product in \({\mathbb {R}}^{n+1}\). This formula indeed holds for any local frame on any hypersurface \(\Sigma \) (which may not be a graph). The relation between \(\kappa [\Sigma ]\) and \({\tilde{\kappa }} [\Sigma ]\) is

$$\begin{aligned} \kappa _i = \, u \,{\tilde{\kappa }}_i + \nu ^{n+1}, \quad \quad i = 1, \ldots , n. \end{aligned}$$
(2.4)
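Since \(\nu ^{n+1} = 1/w\) for an upward vertical graph and \(a_{ij} = \frac{1}{w} \delta _{ij} + u \, {\tilde{a}}_{ij}\), relation (2.4) can be confirmed numerically on random data (an illustrative sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
u = 0.7                                    # sample height u > 0
Du = rng.standard_normal(n)
D2u = rng.standard_normal((n, n)); D2u = (D2u + D2u.T) / 2
w = np.sqrt(1.0 + Du @ Du)
gamma = np.eye(n) - np.outer(Du, Du) / (w * (1.0 + w))

a_tilde = gamma @ D2u @ gamma / w                      # Euclidean curvature matrix
a_hyp = (np.eye(n) + u * gamma @ D2u @ gamma) / w      # hyperbolic counterpart A[u]

kappa_tilde = np.linalg.eigvalsh(a_tilde)              # Euclidean principal curvatures
kappa = np.linalg.eigvalsh(a_hyp)                      # hyperbolic principal curvatures
nu_np1 = 1.0 / w                                       # nu^{n+1} for a vertical graph
assert np.allclose(kappa, u * kappa_tilde + nu_np1)    # formula (2.4)
```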

We observe the following phenomenon for strictly locally convex hypersurfaces in \({\mathbb {H}}^{n+1}\) (see also Lemma 3.3 in [2] for a similar assertion).

Lemma 2.1

Let \(\Sigma \) be a connected, orientable, strictly locally convex hypersurface in \({\mathbb {H}}^{n+1}\) with a specially chosen orientation. Then \(\Sigma \) must be a vertical graph.

Proof

Suppose \(\Sigma \) is not a vertical graph. Then there exists a vertical line (of dimension 1) intersecting \(\Sigma \) at two distinct points \(p_1\) and \(p_2\). Since \(\Sigma \) is orientable, we may assume that \(\nu ^{n+1} (p_1) \cdot \nu ^{n+1} (p_2) \le 0\). Since \(\Sigma \) is connected, there exists a 1-dimensional curve \(\gamma \) on \(\Sigma \) connecting \(p_1\) and \(p_2\). Among the tangent hyperplanes (of dimension n) to \(\Sigma \) along \(\gamma \), choose a vertical one, tangent to \(\Sigma \) at a point \(p_3\). At \(p_3\) we have \(\nu ^{n+1} = 0\) and \(u > 0\), so by (2.4), \({\tilde{\kappa }}_i > 0\) for all i at \(p_3\). On the other hand, let P be a 2-dimensional plane passing through \(p_1\), \(p_2\) and \(p_3\). If \(P \cap \Sigma \) is 1-dimensional and has nonpositive (Euclidean) curvature at \(p_3\) with respect to \(\nu \), we reach a contradiction; otherwise we take the opposite orientation of \(\Sigma \), and then \(\Sigma \) is either not strictly locally convex or we again reach a contradiction. If \(P \cap \Sigma \) is 2-dimensional, then any line on \(P \cap \Sigma \) through \(p_3\) leads to a contradiction. \(\square \)

Equation (1.1) can be written as

$$\begin{aligned} f( \kappa [\, u \,] ) = f( \lambda ( A[ \, u \,] )) = F( A[ \,u\, ] ) = \, \psi ^{1/k} ( x,\, u ). \end{aligned}$$
(2.5)

Recall that the curvature function f satisfies the fundamental structure conditions

$$\begin{aligned}&f_i (\lambda ) \equiv \frac{\partial f(\lambda )}{\partial \lambda _i} > 0 \quad \text{ in } \,\, \Gamma _k,\quad i = 1, \ldots , n, \end{aligned}$$
(2.6)
$$\begin{aligned}&f \,\,\text{ is } \,\,\text{ concave }\,\, \text{ in } \,\, \Gamma _k, \end{aligned}$$
(2.7)
$$\begin{aligned}&f > 0 \quad \text{ in } \,\, \Gamma _k, \quad \quad f = 0 \quad \text{ on } \,\, \partial \Gamma _k. \end{aligned}$$
(2.8)

3 Second order boundary estimates

In this section and the next, we derive a priori \(C^2\) estimates for strictly locally convex solutions u to the Dirichlet problem (1.6) with \(u \ge {\underline{u}}\) in \(\Omega _{\epsilon }\). By Evans-Krylov theory [13, 14], the classical continuity method and degree theory (see [17]), we then prove the existence of a strictly locally convex solution to (1.6). Higher-order regularity follows from classical Schauder theory.

Let \(u \ge {\underline{u}}\) be a strictly locally convex function over \(\Omega _{\epsilon }\) with \(u = {\underline{u}}\) on \(\Gamma _{\epsilon }\). We have the following \(C^0\) estimate:

$$\begin{aligned} {\underline{u}}\,\, \le u \le \sqrt{\epsilon ^2 + (\text{ diam } \Omega )^2} \quad \text{ in } \quad \overline{\Omega _{\epsilon }}. \end{aligned}$$
(3.1)

In fact, by Remark 2.1, for any \(x_0 \in \Omega _{\epsilon }\), the function \(u^2 + |x - x_0|^2\) is (Euclidean) strictly convex in \(\Omega _{\epsilon }\) and hence attains its maximum over \(\overline{\Omega _{\epsilon }}\) on \(\Gamma _{\epsilon }\):

$$\begin{aligned} u^2 \le u^2 + |x - x_0|^2 \le \max \limits _{\Gamma _{\epsilon }} ( u^2 + |x - x_0|^2) \le \epsilon ^2 + (\text{ diam }\Omega )^2.\end{aligned}$$

Therefore we obtain (3.1).
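The Hessian identity behind this convexity argument, \(D^2 \big ( u^2 + |x - x_0|^2 \big ) = 2 \big ( \delta _{ij} + u_i u_j + u u_{ij} \big )\), which is positive definite by Remark 2.1, can be checked symbolically; the following sympy sketch (an illustration, not part of the paper) verifies it for \(n = 2\):

```python
import sympy as sp

x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2')
u = sp.Function('u')(x1, x2)
phi = u**2 + (x1 - a1)**2 + (x2 - a2)**2   # u^2 + |x - x_0|^2 with x_0 = (a1, a2)
xs = (x1, x2)
for i in range(2):
    for j in range(2):
        hess = sp.diff(phi, xs[i], xs[j])
        # 2 (delta_ij + u_i u_j + u u_ij)
        target = 2 * (sp.KroneckerDelta(i, j)
                      + sp.diff(u, xs[i]) * sp.diff(u, xs[j])
                      + u * sp.diff(u, xs[i], xs[j]))
        assert sp.simplify(hess - target) == 0
```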

For the gradient estimate, we perform a transformation \(u = \sqrt{v}\). Denote

$$\begin{aligned}W = \sqrt{4 v + |D v|^2}. \end{aligned}$$

The geometric quantities in Sect. 2 can be expressed in terms of v,

$$\begin{aligned} \begin{aligned} \gamma ^{ik} = \delta _{ik} - \frac{v_i v_k}{W (2 \sqrt{v} + W)}, \quad \quad&\gamma _{ik} = \delta _{ik} + \frac{v_i v_k}{2 \sqrt{v} ( 2 \sqrt{v} + W )}, \\ h_{ij} = \frac{2}{\sqrt{v}\, W}\,\,\big ( \delta _{ij} + \frac{1}{2}\, v_{ij} \big ), \quad \quad&a_{ij} = \frac{2 \sqrt{v}}{W} \gamma ^{ik} \big (\delta _{kl} + \frac{1}{2}\, v_{kl} \big ) \gamma ^{lj}. \end{aligned}\end{aligned}$$
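These expressions reduce to the formulas of Sect. 2 upon substituting \(v = u^2\) (so that \(W = 2 u w\)). A short numerical sanity check with arbitrarily chosen sample values (illustrative only, not part of the paper) confirms the translation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
u = 0.9                                    # sample height
Du = rng.standard_normal(n)
D2u = rng.standard_normal((n, n)); D2u = (D2u + D2u.T) / 2
w = np.sqrt(1.0 + Du @ Du)

# corresponding quantities for v = u^2
v = u**2
Dv = 2 * u * Du
D2v = 2 * np.outer(Du, Du) + 2 * u * D2u
W = np.sqrt(4 * v + Dv @ Dv)

assert np.isclose(W, 2 * u * w)            # W = 2 u w

h_u = (np.eye(n) + np.outer(Du, Du) + u * D2u) / (u**2 * w)  # h_ij in terms of u
h_v = 2 * (np.eye(n) + D2v / 2) / (np.sqrt(v) * W)           # h_ij in terms of v
assert np.allclose(h_u, h_v)

gamma_u = np.eye(n) - np.outer(Du, Du) / (w * (1.0 + w))
gamma_v = np.eye(n) - np.outer(Dv, Dv) / (W * (2 * np.sqrt(v) + W))
assert np.allclose(gamma_u, gamma_v)
```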

Since the graph is strictly locally convex, v satisfies

$$\begin{aligned} \left\{ \begin{aligned} \Delta v + 2 n&> 0 \quad&\text{ in } \quad \Omega _{\epsilon },\\ v&= \epsilon ^2 \quad&\text{ on } \quad \Gamma _{\epsilon }, \end{aligned} \right. \end{aligned}$$

where \(\Delta \) is the Laplace operator on \({\mathbb {R}}^n\). Let \({\overline{v}}\) be the solution of

$$\begin{aligned} \left\{ \begin{aligned} \Delta {\overline{v}} + 2 n&= 0 \quad&\text{ in } \quad \Omega _{\epsilon },\\ {\overline{v}}&= \epsilon ^2 \quad&\text{ on } \quad \Gamma _{\epsilon }. \end{aligned} \right. \end{aligned}$$

By the comparison principle,

$$\begin{aligned} {\underline{u}}^2 = {\underline{v}} \le v \le {\overline{v}} \quad \text{ in } \quad \Omega _{\epsilon }. \end{aligned}$$

Consequently,

$$\begin{aligned} |D v| \le C \quad \text{ on } \quad \Gamma _{\epsilon }, \end{aligned}$$
(3.2)

where C is a positive constant depending on \(\epsilon \). Hereinafter in this section, C always denotes such a constant which may change from line to line. Equivalently,

$$\begin{aligned} |D u| \le C \quad \text{ on } \quad \Gamma _{\epsilon }. \end{aligned}$$
(3.3)

For the global gradient estimate, consider the test function

$$\begin{aligned} W = \sqrt{4 v + |D v|^2}. \end{aligned}$$

Assume its maximum is achieved at an interior point \(x_0 \in \Omega _{\epsilon }\). Then at \(x_0\),

$$\begin{aligned} W W_i = \big ( v_{ki} + 2 \delta _{ki} \big ) v_k = 0, \quad \quad i = 1, \ldots , n. \end{aligned}$$

Since the matrix \(\big ( v_{ki} + 2 \delta _{ki} \big )\) is positive definite, we have \(v_k = 0\) for all k at \(x_0\). Along with (3.1) and (3.2), we obtain

$$\begin{aligned} \max \limits _{\overline{\Omega _{\epsilon }}} |D v| \le \max \limits _{\overline{\Omega _{\epsilon }}} \sqrt{4 v + |D v|^2} \le \max \Big \{ \max \limits _{\Gamma _{\epsilon }} \sqrt{4 \epsilon ^2 + |D v|^2}, 2 \max \limits _{\overline{\Omega _{\epsilon }}} \sqrt{v} \Big \} \le C. \end{aligned}$$
(3.4)

Equivalently,

$$\begin{aligned} \max \limits _{\overline{\Omega _{\epsilon }}} |D u| \le C. \end{aligned}$$
(3.5)

For the second order boundary estimates, under the transformation \(u = \sqrt{v}\), Eq. (2.5) becomes

$$\begin{aligned} G( D^2 v, \,D v, v ) = \,F ( a_{ij} ) = \,f( \lambda ( a_{ij} ) ) = \,\psi ( x,\, v ). \end{aligned}$$
(3.6)

By direct calculation, we obtain the following formulae.

Lemma 3.1

$$\begin{aligned} \begin{aligned} G^{st} =&\frac{\partial G}{\partial v_{st}} = \frac{ \sqrt{v}}{W} F^{ij} \gamma ^{is} \gamma ^{t j}, \\ G_v =&\frac{\partial G}{\partial v} = \Big (\frac{1}{2 v} - \frac{2}{W^2}\Big ) F^{ij} a_{ij} + \frac{v_i v_q}{ W^2 v} F^{ij} a_{qj}, \\ G^s =&\frac{\partial G}{\partial v_s} = - \frac{v_s}{W^2} F^{ij} a_{ij} - \frac{W \gamma ^{is} v_q + 2 \sqrt{v} \gamma ^{qs} v_i}{\sqrt{v} W ( 2 \sqrt{v} + W )} F^{ij} a_{qj}. \end{aligned} \end{aligned}$$

In addition,

$$\begin{aligned} \vert G^s \vert \,\le \, C \quad \text{ and } \quad \vert G_v \vert \,\le \, C. \end{aligned}$$

Proof

Since

$$\begin{aligned} G( D^2 v, D v, v ) = F \Big ( \frac{2 \sqrt{v}}{W} \gamma ^{ik} \big (\delta _{kl} + \frac{1}{2}\, v_{kl} \big ) \gamma ^{lj} \Big ), \end{aligned}$$

we have,

$$\begin{aligned} G^{st} = \,\frac{\partial F}{\partial a_{ij}} \frac{\partial a_{ij}}{\partial v_{st}} = \frac{ \sqrt{v}}{W} F^{ij} \gamma ^{is} \gamma ^{t j}. \end{aligned}$$

To compute \(G_v\), note that

$$\begin{aligned} \frac{\partial W}{\partial v} = \frac{2}{W} \quad \text{ and } \quad \frac{\partial \gamma _{ik}}{\partial v} = - \frac{v_i v_k}{4 v^{3/2} W}. \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{\partial \gamma ^{ik}}{\partial v} = \gamma ^{ip} \,\frac{v_p v_q}{4 v^{3/2} W} \,\gamma ^{qk}. \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} G_v&= F^{ij} \Big ( \frac{\partial }{\partial v} \big (\frac{2 \sqrt{v}}{W} \big ) \gamma ^{ik} (\delta _{kl} + \frac{1}{2} v_{kl} ) \gamma ^{lj} + \frac{4 \sqrt{v}}{W} \frac{\partial \gamma ^{ik}}{\partial v} (\delta _{kl} + \frac{1}{2} v_{kl} ) \gamma ^{lj} \Big ) \\&= \Big (\frac{1}{2 v} - \frac{2}{W^2}\Big ) F^{ij} a_{ij} + \frac{\gamma ^{ip} v_p v_q}{2 v^{3/2} W} F^{ij} a_{qj}. \end{aligned} \end{aligned}$$

We then obtain \(G_v\) in view of

$$\begin{aligned} \gamma ^{ip} v_p = \,\frac{2 \sqrt{v} \,v_i}{W}. \end{aligned}$$

For \(G^s\), note that

$$\begin{aligned} \frac{\partial W}{\partial v_s}= & {} \frac{v_s}{W}, \quad \quad \frac{ \partial \gamma ^{ik}}{\partial v_s} = - \gamma ^{ip}\, \frac{\partial \gamma _{pq}}{\partial v_s} \, \gamma ^{qk}, \quad \text{ and } \\ \frac{\partial \gamma _{p q}}{\partial v_s}= & {} \frac{\delta _{ps} v_q + \delta _{q s} v_p }{2 \sqrt{v} ( 2 \sqrt{v} + W)} - \frac{v_p v_q v_s}{2 \sqrt{v} (2 \sqrt{v} + W)^2 W} = \frac{\delta _{p s} v_q + v_p \gamma ^{q s}}{2 \sqrt{v} ( 2 \sqrt{v} + W)}. \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} G^s&= F^{ij} \Big ( - \frac{ 2 \sqrt{v} v_s}{W^3} \gamma ^{ik} (\delta _{kl} + \frac{1}{2} v_{kl} ) \gamma ^{lj} + \frac{4 \sqrt{v}}{W} \frac{\partial \gamma ^{ik}}{\partial v_s} (\delta _{kl} + \frac{1}{2} v_{kl} ) \gamma ^{lj} \Big ) \\&= - \frac{v_s}{W^2} F^{ij} a_{ij} - \frac{W \gamma ^{is} v_q + 2 \sqrt{v} \gamma ^{qs} v_i}{\sqrt{v} W ( 2 \sqrt{v} + W )} F^{ij} a_{qj}. \end{aligned} \end{aligned}$$

\(\square \)
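The auxiliary derivative formulas used in this proof can be verified symbolically. The following sympy sketch (an illustration, not part of the paper) checks \(\partial W / \partial v\), \(\partial W / \partial v_s\), \(\partial \gamma _{ik} / \partial v\) and \(\partial \gamma _{pq} / \partial v_s\) for \(n = 2\), treating v and \(v_s\) as independent variables as in the proof:

```python
import sympy as sp

n = 2
v = sp.symbols('v', positive=True)
vs = sp.symbols('v1 v2')                  # v_s treated as independent variables
W = sp.sqrt(4 * v + sum(p**2 for p in vs))

# dW/dv = 2/W  and  dW/dv_s = v_s/W
assert sp.simplify(sp.diff(W, v) - 2 / W) == 0
assert all(sp.simplify(sp.diff(W, p) - p / W) == 0 for p in vs)

def gamma_lo(i, k):   # gamma_{ik} expressed in v
    return sp.KroneckerDelta(i, k) + vs[i] * vs[k] / (2 * sp.sqrt(v) * (2 * sp.sqrt(v) + W))

def gamma_up(q, s):   # gamma^{qs} expressed in v
    return sp.KroneckerDelta(q, s) - vs[q] * vs[s] / (W * (2 * sp.sqrt(v) + W))

# d gamma_{ik} / dv = - v_i v_k / (4 v^{3/2} W)
for i in range(n):
    for k in range(n):
        assert sp.simplify(sp.diff(gamma_lo(i, k), v)
                           + vs[i] * vs[k] / (4 * v**sp.Rational(3, 2) * W)) == 0

# d gamma_{pq} / dv_s = (delta_{ps} v_q + v_p gamma^{qs}) / (2 sqrt(v) (2 sqrt(v) + W))
for p in range(n):
    for q in range(n):
        for s in range(n):
            rhs = (sp.KroneckerDelta(p, s) * vs[q] + vs[p] * gamma_up(q, s)) \
                  / (2 * sp.sqrt(v) * (2 * sp.sqrt(v) + W))
            assert sp.simplify(sp.diff(gamma_lo(p, q), vs[s]) - rhs) == 0
```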

For an arbitrary point on \(\Gamma _{\epsilon }\), we may assume it to be the origin of \({\mathbb {R}}^n\). Choose a coordinate system so that the positive \(x_n\) axis points to the interior normal of \(\Gamma _{\epsilon }\) at the origin. There exists a uniform constant \(r > 0\) such that \(\Gamma _{\epsilon } \cap B_r (0)\) can be represented as a graph

$$\begin{aligned} x_n = \rho ( x' ) = \frac{1}{2} \sum \limits _{\alpha , \beta < n} B_{\alpha \beta } x_{\alpha } x_{\beta } + O ( |x'|^3 ), \quad x' = (x_1, \ldots , x_{n - 1}). \end{aligned}$$

Since

$$\begin{aligned} v = \epsilon ^2 \quad \quad \text{ on } \quad \Gamma _{\epsilon }, \end{aligned}$$

or equivalently

$$\begin{aligned} v ( x', \rho ( x' )) = \epsilon ^2, \end{aligned}$$

we have

$$\begin{aligned} v_\alpha + v_n \,\rho _\alpha = 0 \end{aligned}$$
(3.7)

and

$$\begin{aligned} v_{\alpha \beta } + v_{\alpha n} \rho _\beta + (v_{n \beta } + v_{n n} \rho _\beta ) \rho _\alpha + v_n \rho _{\alpha \beta } = 0. \end{aligned}$$

Therefore,

$$\begin{aligned} v_{\alpha \beta } (0) = - v_n (0)\, \rho _{\alpha \beta } (0), \quad \quad \alpha , \beta < n. \end{aligned}$$

Consequently,

$$\begin{aligned} | v_{\alpha \beta } (0) | \le C, \quad \quad \quad \alpha , \beta < n, \end{aligned}$$
(3.8)

where C is a constant depending on \(\epsilon \).
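The identities obtained by differentiating \(v ( x', \rho ( x' )) = \epsilon ^2\) tangentially are pure chain-rule computations; as an illustration (with sample functions v and \(\rho \) chosen only for this check, not from the paper, and \(n = 2\), \(\alpha = 1\)):

```python
import sympy as sp

x1, xn = sp.symbols('x1 xn')
v = x1**3 * xn + xn**2            # sample v, for illustration only
rho = x1**2 / 2                   # sample boundary graph x_n = rho(x')

restr = v.subs(xn, rho)           # v restricted to the boundary graph
v1, vn = sp.diff(v, x1), sp.diff(v, xn)
v11, v1n, vnn = sp.diff(v, x1, 2), sp.diff(v, x1, xn), sp.diff(v, xn, 2)
r1, r11 = sp.diff(rho, x1), sp.diff(rho, x1, 2)

# first tangential derivative: v_alpha + v_n rho_alpha   (formula (3.7))
assert sp.expand(sp.diff(restr, x1) - (v1 + vn * r1).subs(xn, rho)) == 0
# second: v_aa + 2 v_an rho_a + v_nn rho_a^2 + v_n rho_aa
assert sp.expand(sp.diff(restr, x1, 2)
                 - (v11 + 2 * v1n * r1 + vnn * r1**2 + vn * r11).subs(xn, rho)) == 0
```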

For the mixed tangential-normal derivative \(v_{\alpha n} (0)\) with \(\alpha < n\), note that the graph of \({\underline{u}}\) is strictly locally convex on \(\overline{\Omega _{\epsilon }}\). Hence we have

$$\begin{aligned} I + \frac{1}{2} \, D^2 {\underline{v}} \,\,\ge \, 3 \, c_0\, I \end{aligned}$$

for some positive constant \(c_0\). Let d(x) be the distance from \(x \in \overline{\Omega _{\epsilon }}\) to \(\Gamma _{\epsilon }\) in \({\mathbb {R}}^n\). Consider the barrier function

$$\begin{aligned} \Psi = A \, V + B\, |x|^2 \end{aligned}$$

with

$$\begin{aligned} V = \, v - {\underline{v}} + \tau d - N d^2, \end{aligned}$$

where the positive constants N, \(\tau \), B and A are to be determined, and \(\delta > 0\) is a small constant so that we work in \(\Omega _{\epsilon } \cap B_{\delta } (0)\).

Define the linear operator   \( L = \, G^{s t} \,D_{s t} + G^s \,D_s\). By the concavity of G with respect to \(D^2 v\),

$$\begin{aligned} \begin{aligned} L V = \,&G^{st} D_{st} ( v - {\underline{v}} - N \,d^2 ) + \tau \,G^{st} D_{st} d + G^s D_s ( v - {\underline{v}} + \tau \, d - N \,d^2 ) \\ \le \,&G( D^2 v, D v, v ) - G\Big ( D^2\big ( {\underline{v}} + N \,d^2 \big ) - 2 c_0 I, D v, v \Big ) \\&+ ( C \tau - 2 c_0 ) \sum G^{ii} + C ( 1 + \tau + N \delta ). \end{aligned} \end{aligned}$$

Note that

$$\begin{aligned} I + \frac{1}{2} \,D^2\big ( {\underline{v}} + N \,d^2 \big ) - c_0 I \ge \,\, 2 c_0 I + N D d \otimes D d - C N \delta I := {\mathcal {H}}. \end{aligned}$$

Denote \(\gamma = ( \gamma ^{ik} )\). We have

$$\begin{aligned} \begin{aligned}&G\Big ( D^2\big ( {\underline{v}} + N \,d^2 \big ) - 2 c_0 I, D v, v \Big ) = \, F \Big ( \frac{2 \sqrt{v}}{W} \gamma \big ( I + \frac{1}{2} \,D^2\big ( {\underline{v}} + N \,d^2 \big ) - c_0 I \big ) \gamma \Big ) \\&\quad \ge F \Big ( \frac{2 \sqrt{v}}{W} \gamma \,{\mathcal {H}}\, \gamma \Big ) = \,F \Big ( \frac{2 \sqrt{v}}{W} \,{\mathcal {H}}^{1/2} \,\gamma \gamma \,{\mathcal {H}}^{1/2} \Big ) \ge \, F ( \tilde{c}\, {\mathcal {H}} ), \end{aligned} \end{aligned}$$

where \(\tilde{c}\) is a positive constant. Hence

$$\begin{aligned} L V \le \, - F ( \tilde{c}\, {\mathcal {H}} ) + ( C \tau - 2 c_0 ) \sum G^{ii} + C ( 1 + \tau + N \delta ). \end{aligned}$$

Note that \({\mathcal {H}} = \text{ diag } \Big ( 2 c_0 - C N \delta ,\,\, \ldots ,\,\, 2 c_0 - C N \delta ,\,\, 2 c_0 - C N \delta + N\Big )\). We can choose N sufficiently large and \(\tau \), \(\delta \) sufficiently small (\(\delta \) depends on N) such that

$$\begin{aligned} C \tau \le c_0, \quad C N \delta \le c_0, \quad -F ( \tilde{c}\,{\mathcal {H}} ) + C + 2 c_0 \le - 1. \end{aligned}$$

Hence the above inequality becomes

$$\begin{aligned} L V \le - c_0 \sum G^{ii} - 1. \end{aligned}$$
(3.9)

We then require \(\delta \le \frac{\tau }{N}\) so that

$$\begin{aligned} V \,\ge \, 0 \quad \text{ in } \quad \Omega _{\epsilon } \cap B_{\delta } ( 0 ). \end{aligned}$$

By Lemma 3.1,

$$\begin{aligned} L \big ( |x|^2 \big ) \le \,C \big (1 + \sum G^{ii}\big ). \end{aligned}$$

This, together with (3.9) yields,

$$\begin{aligned} L \Psi \,\le \,A \big ( - c_0 \sum G^{ii} - 1 \big ) + B C \big ( 1 + \sum G^{ii} \big ) \quad \text{ in }\quad \Omega _{\epsilon } \cap B_{\delta } ( 0 ). \end{aligned}$$
(3.10)

Now, we consider the operator

$$\begin{aligned} T = \partial _\alpha + \sum \limits _{\beta < n} B_{\alpha \beta } ( x_{\beta } \partial _n - x_n \partial _{\beta } ). \end{aligned}$$

Note that for \(\delta > 0\) sufficiently small,

$$\begin{aligned} \vert T v \vert \le \,C \,\, \quad \quad \text{ in } \quad \Omega _{\epsilon } \cap B_{\delta } ( 0 ). \end{aligned}$$

Also, in view of (3.7),

$$\begin{aligned} \vert T v \vert \le C \,|x|^2 \,\, \quad \quad \text{ on } \quad \Gamma _{\epsilon } \cap B_{\delta } ( 0 ). \end{aligned}$$

To compute L(Tv), we need the following lemma (see [2]).

Lemma 3.2

For \(1 \le i, j \le n\),

$$\begin{aligned} ( L + G_v - \psi _v )( x_i v_j - x_j v_i ) = x_i \psi _{x_j} - x_j \psi _{x_i}. \end{aligned}$$

Proof

For \(\theta \in {\mathbb {R}}\), let

$$\begin{aligned} \begin{aligned} y_i = \,&x_i \cos \theta - x_j \sin \theta , \\ y_j = \,&x_i \sin \theta + x_j \cos \theta , \\ y_k = \,&x_k, \quad k \ne i, j. \end{aligned} \end{aligned}$$

Since \(G - \psi \) is invariant under rotations of \({\mathbb {R}}^n\), we have

$$\begin{aligned} G( D^2 v( y ), D v( y ), v( y ) ) = \psi ( y, v( y ) ). \end{aligned}$$

Differentiating with respect to \(\theta \) and changing the order of differentiation, we obtain

$$\begin{aligned} ( L + G_v - \psi _v) \vert _y \,\frac{\partial v}{\partial \theta } = \psi _{y_i} \frac{\partial y_i}{\partial \theta } + \psi _{y_j} \frac{\partial y_j}{\partial \theta }. \end{aligned}$$

Set \(\theta = 0\) in the above equality and notice that at \(\theta = 0\),

$$\begin{aligned} y = x, \quad \quad \frac{\partial y_i}{\partial \theta } = - x_j, \quad \quad \frac{\partial y_j}{\partial \theta } = x_i, \quad \quad \frac{\partial v}{\partial \theta } = x_i v_j - x_j v_i. \end{aligned}$$

We have thus proved the lemma. \(\square \)
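The chain-rule computation underlying the proof, namely that the angular derivative of v under this rotation equals \(x_i v_j - x_j v_i\) at \(\theta = 0\), can be checked symbolically on a sample function (an illustration only, not part of the paper):

```python
import sympy as sp

theta, x1, x2 = sp.symbols('theta x1 x2')
f = lambda a, b: a**3 + a * b**2        # sample smooth function, for illustration
# rotate (x_i, x_j) by angle theta as in the proof
y1 = x1 * sp.cos(theta) - x2 * sp.sin(theta)
y2 = x1 * sp.sin(theta) + x2 * sp.cos(theta)
v_rot = f(y1, y2)

lhs = sp.diff(v_rot, theta).subs(theta, 0)
rhs = x1 * sp.diff(f(x1, x2), x2) - x2 * sp.diff(f(x1, x2), x1)  # x_i v_j - x_j v_i
assert sp.simplify(lhs - rhs) == 0
```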

By Lemmas 3.1 and 3.2, we have

$$\begin{aligned} \vert L ( T v ) \vert \le C. \end{aligned}$$
(3.11)

Choose B sufficiently large such that

$$\begin{aligned} \Psi \pm T v \ge 0 \quad \text{ on } \quad \partial (\Omega _{\epsilon } \cap B_{\delta } ( 0 )). \end{aligned}$$

From (3.10) and (3.11) we have

$$\begin{aligned} L ( \Psi \pm T v ) \le A \big ( - c_0 \sum G^{ii} - 1 \big ) + B C \big ( 1 + \sum G^{ii} \big ) + C. \end{aligned}$$

Choose A sufficiently large such that

$$\begin{aligned} L ( \Psi \pm T v ) \le 0 \quad \text{ in } \quad \Omega _{\epsilon } \cap B_{\delta } ( 0 ). \end{aligned}$$

By the maximum principle,

$$\begin{aligned} \Psi \pm T v \ge 0 \quad \text{ in } \quad \Omega _{\epsilon } \cap B_{\delta } ( 0 ), \end{aligned}$$

which implies

$$\begin{aligned} \vert v_{\alpha n} (0) \vert \le C. \end{aligned}$$
(3.12)

Up to now, we have proved that

$$\begin{aligned} |v_{\xi \eta } (x)| \le C, \quad | v_{\xi \gamma } (x) | \le C, \quad \quad \forall \quad x \in \Gamma _{\epsilon }, \end{aligned}$$

where \(\xi \) and \(\eta \) are arbitrary unit tangential vectors and \(\gamma \) is the unit interior normal vector to \(\Gamma _{\epsilon }\). It suffices to establish the upper bound

$$\begin{aligned} v_{\gamma \gamma } \le C \quad \text{ on } \quad \Gamma _{\epsilon }. \end{aligned}$$
(3.13)

Motivated by [18] (see also [19, 20]), we derive (3.13).

First recall some general facts. The projection of \(\Gamma _k \subset {\mathbb {R}}^n\) onto \({\mathbb {R}}^{n - 1}\) is exactly

$$\begin{aligned} \Gamma '_{k - 1} = \,\{ (\lambda _1, \ldots , \lambda _{n-1}) \in {\mathbb {R}}^{n - 1} \,| \, \sigma _{j} ( \lambda _1, \ldots , \lambda _{n-1} ) > 0,\,\, \, j = 1, \ldots , k - 1 \}. \end{aligned}$$
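As a quick sanity check of this projection property (not needed for the argument), one can sample points of \(\Gamma _k\) numerically and verify that dropping the last coordinate lands in \(\Gamma '_{k-1}\). The sketch below is ours, with illustrative choices \(n = 4\), \(k = 3\).

```python
import itertools
import math
import random

def sigma(j, lam):
    """j-th elementary symmetric function of the entries of lam."""
    return sum(math.prod(c) for c in itertools.combinations(lam, j))

def in_garding_cone(lam, k):
    """lam lies in the Garding cone Gamma_k iff sigma_j(lam) > 0 for j = 1..k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

random.seed(0)
n, k = 4, 3          # illustrative dimensions
checked = 0
for _ in range(2000):
    lam = [random.uniform(-1.0, 3.0) for _ in range(n)]
    if in_garding_cone(lam, k):
        checked += 1
        # projecting out the last coordinate should land in Gamma'_{k-1}
        assert in_garding_cone(lam[:-1], k - 1)
assert checked > 0
```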

Let \(\kappa ' = (\kappa '_1, \ldots , \kappa '_{n-1})\) be the roots of

$$\begin{aligned} \det ( \kappa '_{\zeta } \, g_{\alpha \beta } - h_{\alpha \beta } ) = 0, \end{aligned}$$
(3.14)

where \((h_{\alpha \beta })\) and \((g_{\alpha \beta })\) are the first \((n - 1) \times (n - 1)\) principal minors of \((h_{ij})\) and \((g_{ij})\) respectively. Then \(\kappa [v] \in \Gamma _k\) implies \(\kappa '[v] \in \Gamma '_{k - 1}\), and this holds for any local frame field. Note that \(\kappa '[v]\) need not coincide with \((\kappa _1, \ldots , \kappa _{n-1})[v]\).

For \(x \in \Gamma _{\epsilon }\), let the indices in (3.14) be given by the tangential directions to \(\Gamma _{\epsilon }\) and \(\kappa '[v](x)\) be the roots of (3.14). Define

$$\begin{aligned} \tilde{d} (x) =\, \sqrt{v} \,W \,\,\text{ dist } ( \kappa '[v](x), \,\partial \Gamma '_{k - 1} ) \quad \quad \text{ and } \quad \quad m = \min \limits _{x \in \Gamma _{\epsilon }}\,\tilde{d} (x). \end{aligned}$$

Choose a coordinate system in \({\mathbb {R}}^n\) such that m is achieved at \(0 \in \Gamma _{\epsilon }\) and the positive \(x_n\) axis points in the direction of the interior normal to \(\Gamma _{\epsilon }\) at 0. We want to prove that m has a uniform positive lower bound.

Let \(\xi _1, \ldots , \xi _{n-1}, \gamma \) be a local frame field around 0 on \(\Omega _{\epsilon }\), obtained by parallel translation of a local frame field \(\xi _1, \ldots , \xi _{n-1}\) around 0 on \(\Gamma _{\epsilon }\) satisfying

$$\begin{aligned} g_{\alpha \beta } = \delta _{\alpha \beta }, \quad \quad h_{\alpha \beta }(0) = \kappa '_{\alpha }(0) \, \delta _{\alpha \beta }, \quad \quad \kappa '_1 (0) \le \ldots \le \kappa '_{n-1} (0) \end{aligned}$$

and of the interior unit normal vector field \(\gamma \) to \(\Gamma _{\epsilon }\), along the directions perpendicular to \(\Gamma _{\epsilon }\) in \(\Omega _{\epsilon }\). Note that this choice of frame field is independent of v (or equivalently, of u). In fact, if we write

$$\begin{aligned} \xi _{\alpha } = \sum _{\beta = 1}^{n - 1} \eta _{\alpha }^{\beta } \,e_{\beta }, \quad \quad \alpha = 1, \ldots , n - 1, \end{aligned}$$

where \(e_1, \ldots , e_{n - 1}\) is a fixed local orthonormal frame on \(\Gamma _{\epsilon }\), and consider a general boundary value condition, say \(v = \varphi \) on \(\Gamma _{\epsilon }\), then on \(\Gamma _{\epsilon }\),

$$\begin{aligned} \begin{aligned} g_{\alpha \beta } =&\frac{1}{u^2} \Big ( \xi _{\alpha } \cdot \xi _{\beta } + D_{\xi _{\alpha }} u \, D_{\xi _{\beta }} u \Big ) = \frac{1}{\varphi } \Big ( \xi _{\alpha } \cdot \xi _{\beta } + D_{\xi _{\alpha }} (\sqrt{\varphi }) \, D_{\xi _{\beta }} (\sqrt{\varphi }) \Big ) \\=&\frac{1}{\varphi } \sum \limits _{\tau , \zeta = 1}^{n - 1} \eta _{\alpha }^{\tau } \,\Big ( \delta _{\tau \zeta } + \frac{D_{e_{\tau }}\varphi \, D_{e_{\zeta }}\varphi }{4 \varphi } \Big ) \,\eta _{\beta }^{\zeta }. \end{aligned} \end{aligned}$$

Note that there exist \(\eta _{\alpha }^{\tau }\) for \(\alpha , \tau = 1, \ldots , n-1\) such that \(g_{\alpha \beta } = \delta _{\alpha \beta }\) on \(\Gamma _{\epsilon }\). By a rotation, we can further arrange that \((h_{\alpha \beta }(0))\) is diagonal.

By Lemma 6.1 of [21], there exists \(\mu = (\mu _1, \ldots , \mu _{n-1}) \in {\mathbb {R}}^{n - 1}\) with \(\mu _1 \ge \ldots \ge \mu _{n - 1} \ge 0\) such that

$$\begin{aligned} \sum \limits _{\alpha = 1}^{n - 1} \mu _{\alpha }^2 = 1, \quad \quad \Gamma '_{k-1} \subset \{ \lambda ' \in {\mathbb {R}}^{n-1} \,|\, \mu \cdot \lambda ' > 0 \} \quad \quad \text{ and } \end{aligned}$$
$$\begin{aligned} m = \tilde{d}(0) = \,\sqrt{v} \, W \, \sum \limits _{\alpha< n} \mu _{\alpha } \,\kappa '_{\alpha } (0) = \,\sum \limits _{\alpha < n} \,\mu _{\alpha } \,\big ( D_{\xi _{\alpha } \xi _{\alpha }} v + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \big ) (0). \end{aligned}$$
(3.15)

Since \({\underline{v}}\) is strictly locally convex near \(\Gamma _{\epsilon }\) and \(\sum \mu _{\alpha } \ge \sum \mu _{\alpha }^2 = 1\) (as \(0 \le \mu _{\alpha } \le 1\)),

$$\begin{aligned}\sum \limits _{\alpha < n} \mu _{\alpha } \big ( D_{\xi _{\alpha } \xi _{\alpha }} {\underline{v}} + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \big ) (0) \,\ge \, 2 \, c_1 \end{aligned}$$

for a uniform positive constant \(c_1\). Consequently,

$$\begin{aligned} \begin{aligned}&({\underline{v}} - v)_{\gamma } (0)\, \sum \limits _{\alpha< n} \mu _{\alpha } \, d_{\xi _{\alpha } \xi _{\alpha }} (0) = \sum \limits _{\alpha< n} \mu _{\alpha } D_{\xi _{\alpha } \xi _{\alpha }} ( {\underline{v}} - v ) (0) \\&= \,\, \sum \limits _{\alpha< n} \mu _{\alpha } \big ( D_{\xi _{\alpha } \xi _{\alpha }} {\underline{v}} + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \big ) (0) - \sum \limits _{\alpha < n} \mu _{\alpha } \big ( D_{\xi _{\alpha } \xi _{\alpha }} v + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \big ) (0) \ge 2 \, c_1 - \tilde{d}(0). \end{aligned}\nonumber \\ \end{aligned}$$
(3.16)

The first equality in (3.16) holds because we can write \(v - {\underline{v}} = \omega \, d\) for some function \(\omega \) defined in a neighborhood of \(\Gamma _{\epsilon }\) in \(\Omega _{\epsilon }\). Differentiating this identity,

$$\begin{aligned} ( v - {\underline{v}} )_i = \,\omega _i \,\,d + \omega \,d_i, \quad \quad ( v - {\underline{v}} )_{\gamma } = \,\omega _{\gamma } \,d + \omega \,d_{\gamma },\\ ( v - {\underline{v}} )_{ij} = \,\omega _{ij} \,\,d + \omega _i \,d_j + \omega _j \,d_i + \omega \, d_{ij}. \end{aligned}$$

Note that \(d_{\xi _{\alpha }} (0) = 0\) and \(d_{\gamma } (0) = 1\). Thus,

$$\begin{aligned} D_{\xi _{\alpha } \xi _{\alpha }}( v - {\underline{v}} ) (0) = ( v - {\underline{v}} )_{\gamma }(0) \, d_{\xi _{\alpha } \xi _{\alpha }}(0). \end{aligned}$$

We may assume \(\tilde{d} (0) \le c_1\), for otherwise we are done. Then from (3.16),

$$\begin{aligned} ( {\underline{v}} - v )_{\gamma } (0) \sum \limits _{\alpha < n} \mu _{\alpha }\, d_{\xi _{\alpha } \xi _{\alpha }} (0) \ge c_1. \end{aligned}$$

Since \(0 < ( v - {\underline{v}} )_{\gamma } (0) \le C\),

$$\begin{aligned} \sum \limits _{\alpha < n} \mu _{\alpha }\, d_{\xi _{\alpha } \xi _{\alpha }} (0) \le - \,2 \, c_2 \end{aligned}$$

for some uniform constant \(c_2 > 0\). By continuity of \(d_{\xi _{\alpha } \xi _{\alpha }} (x)\) at 0 and \(0 \le \mu _{\alpha } \le 1\),

$$\begin{aligned} \sum \limits _{\alpha< n} \mu _{\alpha }\,\Big ( d_{\xi _{\alpha } \xi _{\alpha }} (x) - d_{\xi _{\alpha } \xi _{\alpha }} (0) \Big )< \sum \limits _{\alpha < n} \mu _{\alpha }\,\frac{c_2}{n - 1} \le c_2 \quad \quad \text{ in } \quad \Omega _\epsilon \cap B_{\delta }(0) \end{aligned}$$

for some uniform constant \(\delta > 0\). Thus

$$\begin{aligned} \sum \limits _{\alpha< n} \mu _{\alpha } \,d_{\xi _{\alpha } \xi _{\alpha }} (x) \,<\, - c_2 \quad \quad \text{ in } \quad \Omega _\epsilon \cap B_{\delta }(0). \end{aligned}$$
(3.17)

On the other hand, by Lemma 6.2 of [21], for any \(x \in \Gamma _{\epsilon }\) near 0,

$$\begin{aligned}\begin{aligned}&\,\sum \limits _{\alpha< n} \,\mu _{\alpha } \,\Big ( D_{\xi _{\alpha } \xi _{\alpha }} v + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \Big ) (x) \,= \sum \limits _{\alpha< n} \,\mu _{\alpha } \sqrt{v}\, W \,h_{\alpha \alpha } (x) \\&\quad \ge \, \sqrt{v} \, W \,\sum \limits _{\alpha < n} \,\mu _{\alpha } \,\kappa '_{\alpha } [ v ] (x) \,\ge \,\tilde{d} (x)\, \ge \, \tilde{d} (0). \end{aligned}\end{aligned}$$

Thus for any \(x \in \Gamma _{\epsilon }\) near 0,

$$\begin{aligned} \begin{aligned}&( v - \varphi )_{\gamma } (x) \sum \limits _{\alpha< n} \mu _{\alpha }\, d_{\xi _{\alpha } \xi _{\alpha }} (x) = \sum \limits _{\alpha< n} \mu _{\alpha }\, D_{\xi _{\alpha } \xi _{\alpha }} ( v - \varphi ) (x) \\&\quad = \sum \limits _{\alpha< n} \mu _{\alpha } \Big ( D_{\xi _{\alpha } \xi _{\alpha }} v + 2\, \xi _{\alpha } \cdot \xi _{\alpha } \Big ) (x) - \sum \limits _{\alpha< n} \mu _{\alpha } \Big ( D_{\xi _{\alpha } \xi _{\alpha }} \varphi + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \Big ) (x) \\&\quad \ge \,\,\, \tilde{d}(0) - \sum \limits _{\alpha < n} \mu _{\alpha } \Big ( D_{\xi _{\alpha } \xi _{\alpha }} \varphi + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \Big ) (x). \end{aligned} \end{aligned}$$
(3.18)

In view of (3.17), define in \(\Omega _\epsilon \cap B_{\delta }(0)\),

$$\begin{aligned} \Phi \, = \, \frac{1}{\sum \limits _{\alpha< n} \mu _{\alpha }\, d_{\xi _{\alpha } \xi _{\alpha }} } \,\left( \,\tilde{d}(0) - \sum \limits _{\alpha < n} \mu _{\alpha } \Big ( D_{\xi _{\alpha } \xi _{\alpha }} \varphi + 2 \,\xi _{\alpha } \cdot \xi _{\alpha } \Big ) \right) - ( v - \varphi )_{\gamma }. \end{aligned}$$

By (3.17) and (3.18), \(\Phi \ge 0\) on \(\Gamma _{\epsilon } \cap B_{\delta } (0)\). In addition, we have in \(\Omega _\epsilon \cap B_{\delta }(0)\),

$$\begin{aligned} L(\Phi ) \le \, C \big ( 1 + \sum G^{ii} \big ) - L \Big ( D ( v - \varphi ) \cdot D d \Big ) \le \, C \big ( 1 + \sum G^{ii} \big ). \end{aligned}$$
(3.19)

This is because \(0 \le \mu _{\alpha } \le 1\) and

$$\begin{aligned} \begin{aligned}&\Big | L \big ( D ( v - \varphi ) \cdot D d \big ) \Big | = \Big | D d \cdot L \big ( D (v - \varphi ) \big ) + D ( v - \varphi ) \cdot L (D d) + 2 G^{st} (v - \varphi )_{is} d_{it} \Big | \\&\quad \le \, C \big ( 1 + \sum G^{ii} \big ) \, + \,\Big \vert 2\, G^{st}\, d_{it} \Big ( \frac{W}{\sqrt{v}} \gamma _{ki} \gamma _{sl} a_{kl} - 2 \delta _{is} \Big ) \Big \vert \\&\quad = \, C \big ( 1 + \sum G^{ii} \big ) \, + \,\Big \vert 2\,\gamma _{ki} d_{it} \gamma ^{tj}\, F^{lj}\, a_{kl} \, - 4 \, G^{st}\, d_{st} \Big \vert \le \, C \big ( 1 + \sum G^{ii} \big ). \end{aligned} \end{aligned}$$

By (3.10) and (3.19), we may choose \(A \gg B \gg 1\) such that \(\Psi + \Phi \ge 0\) on \(\partial (\Omega _{\epsilon } \cap B_{\delta }(0))\) and \(L(\Psi + \Phi ) \le 0\) in \(\Omega _{\epsilon } \cap B_{\delta }(0)\). By the maximum principle, \(\Psi + \Phi \ge 0\) in \(\Omega _{\epsilon } \cap B_{\delta }(0)\). Since \((\Psi + \Phi )(0) = 0\) by (3.18) and (3.15), we have \((\Psi + \Phi )_n (0) \ge 0\). Therefore, \(v_{nn} (0) \le C\), which, together with (3.8) and (3.12), gives the bound \(|D^2 v (0)| \le C\), and consequently a bound on all the principal curvatures at 0. By (2.8),

$$\begin{aligned} \text{ dist } ( \kappa [v](0), \,\partial \Gamma _k ) \ge c_3 \end{aligned}$$

and therefore on \(\Gamma _{\epsilon }\),

$$\begin{aligned} \tilde{d}(x) \ge \tilde{d}(0) = \sqrt{v}\, W\,\text{ dist } ( \kappa '[v](0), \,\partial \Gamma '_{k - 1} ) \ge c_4, \end{aligned}$$

where \(c_3\) and \(c_4\) are positive uniform constants.

By an argument similar to the proof of Lemma 1.2 in [21], there exists \(R > 0\), depending on the bounds (3.8) and (3.12), such that if \(x_0 \in \Gamma _{\epsilon }\) and \(v_{\gamma \gamma }(x_0) \ge R\), then the principal curvatures \((\kappa _1, \ldots , \kappa _n)\) at \(x_0\) satisfy

$$\begin{aligned} \kappa _{\alpha } = \, & \kappa '_{\alpha } + o(1), \quad \quad \alpha < n, \\ \kappa _n = \, & \frac{ h_{nn} - g_{1n} h_{n1} - \ldots - g_{n n-1} h_{n n-1} }{g_{n n} - g_{1 n}^2 - \ldots - g_{n n-1}^2} \Big ( 1 + {{\mathcal {O}}} \Big ( \frac{g_{n n} - g_{1 n}^2 - \ldots - g_{n n-1}^2}{ h_{nn} - g_{1n} h_{n1} - \ldots - g_{n n-1} h_{n n-1} } \Big ) \Big ) \end{aligned}$$

in the local frame \(\xi _1, \ldots , \xi _{n-1}, \gamma \) around \(x_0\). When R is sufficiently large, we have

$$\begin{aligned} G( D^2 v, D v, v ) (x_0) \, > \, \psi (x_0, \epsilon ^2),\end{aligned}$$

contradicting Eq. (3.6). Hence \(v_{\gamma \gamma } < R\) on \(\Gamma _{\epsilon }\), and (3.13) is proved.

4 Global curvature estimates

For a hypersurface \(\Sigma \subset {\mathbb {H}}^{n+1}\), let g and \(\nabla \) be the induced hyperbolic metric and Levi-Civita connection on \(\Sigma \) respectively, and let \({\tilde{g}}\) and \({\tilde{\nabla }}\) be the metric and Levi-Civita connection induced from \({\mathbb {R}}^{n+1}\) when \(\Sigma \) is viewed as a hypersurface in \({\mathbb {R}}^{n+1}\). The Christoffel symbols associated with \(\nabla \) and \({\tilde{\nabla }}\) are related by the formula

$$\begin{aligned} \Gamma _{ij}^k = {\tilde{\Gamma }}_{ij}^k - \frac{1}{u} (u_i \delta _{kj} + u_j \delta _{ik} - {\tilde{g}}^{kl} u_l {\tilde{g}}_{ij}). \end{aligned}$$

Consequently, for any \(v \in C^2(\Sigma )\),

$$\begin{aligned} \nabla _{ij} v = (v_i)_j - \Gamma _{ij}^k v_k = {\tilde{\nabla }}_{ij} v + \frac{1}{u}( u_i v_j + u_j v_i - {\tilde{g}}^{kl} u_l v_k {\tilde{g}}_{ij} ). \end{aligned}$$
(4.1)

Note that (4.1) holds for any local frame.

Lemma 4.1

In \({\mathbb {R}}^{n+1}\), we have the following identities.

$$\begin{aligned} {\tilde{g}}^{kl} u_k u_l = |{\tilde{\nabla }} u|^2 = 1 - (\nu ^{n+1})^2, \end{aligned}$$
(4.2)
$$\begin{aligned} {\tilde{\nabla }}_{ij} u = \tilde{h}_{ij} \nu ^{n+1} \quad \text{ and } \quad {\tilde{\nabla }}_{ij} x_{k} = \tilde{h}_{ij} \nu ^{k}, \quad k = 1, \ldots , n, \end{aligned}$$
(4.3)
$$\begin{aligned} (\nu ^{n+1})_i = - \tilde{h}_{ij} \,{\tilde{g}}^{j k} u_k, \end{aligned}$$
(4.4)
$$\begin{aligned} {\tilde{\nabla }}_{ij} \nu ^{n+1} = - {\tilde{g}}^{kl} ( \nu ^{n+1} \tilde{h}_{il} \tilde{h}_{kj} + u_l {\tilde{\nabla }}_k \tilde{h}_{ij} ), \end{aligned}$$
(4.5)

where \(\tau _1, \ldots , \tau _n\) is any local frame on \(\Sigma \).

Proof

To prove (4.2), we may write

$$\begin{aligned} \partial _{n + 1} = \sum \limits _{k = 1}^n a_k \tau _k + b \nu . \end{aligned}$$
(4.6)

Taking the inner product of (4.6) with \(\nu \) in \({\mathbb {R}}^{n+1}\), we obtain

$$\begin{aligned} \nu ^{n + 1} = \partial _{n + 1} \cdot \nu = b.\end{aligned}$$

Taking the inner product of (4.6) with \(\tau _j\) in \({\mathbb {R}}^{n+1}\), we have

$$\begin{aligned} u_j = (X \cdot \partial _{n+1})_j = \partial _{n + 1} \cdot \tau _j = a_k \tau _k \cdot \tau _j = a_k {\tilde{g}}_{kj}, \end{aligned}$$

where X is the position vector field of \(\Sigma \) (note that this differs from the conformal Killing field used in the half-space model of \({\mathbb {H}}^{n + 1}\)). Thus,

$$\begin{aligned} a_k = u_j {\tilde{g}}^{jk}. \end{aligned}$$

Therefore,

$$\begin{aligned} \partial _{n + 1} = u_j {\tilde{g}}^{jk} \tau _k + \nu ^{n + 1} \nu = {\tilde{\nabla }} u + \nu ^{n + 1} \nu , \end{aligned}$$

which implies (4.2).

For (4.3), note that

$$\begin{aligned}\begin{aligned} {\tilde{\nabla }}_{ij} (X \cdot \partial _k)&= \big ( (X \cdot \partial _k)_j \big )_i - {\tilde{\Gamma }}_{ij}^l (X \cdot \partial _k)_l \\&=(\tau _j \cdot \partial _k)_i - {\tilde{\Gamma }}_{ij}^l \,\tau _l \cdot \partial _k = \tilde{D}_{\tau _i} \tau _j \cdot \partial _k - {\tilde{\Gamma }}_{ij}^l \,\tau _l \cdot \partial _k \\&= ( {\tilde{\nabla }}_{\tau _i} \tau _j + \tilde{h}_{ij} \nu ) \cdot \partial _k - {\tilde{\Gamma }}_{ij}^l \,\tau _l \cdot \partial _k = \tilde{h}_{ij} \nu \cdot \partial _k, \quad \quad k = 1, \ldots , n+1. \end{aligned}\end{aligned}$$

Here we have applied the Gauss formula for \(\Sigma \) as a hypersurface in \({\mathbb {R}}^{n+1}\).

For (4.4), by the Weingarten formula for \(\Sigma \) as a hypersurface in \({\mathbb {R}}^{n+1}\), we have

$$\begin{aligned} (\nu ^{n+1})_i = (\nu \cdot \partial _{n + 1})_i = \tilde{D}_{\tau _i} \nu \cdot \partial _{n + 1} = - \tilde{h}_{ik} \,{\tilde{g}}^{kl} \tau _l \cdot \partial _{n + 1} = - \tilde{h}_{ik} {\tilde{g}}^{kl} u_l. \end{aligned}$$

Finally, (4.5) follows from (4.4), (4.3) and the Codazzi equation for \(\Sigma \) as a hypersurface in \({\mathbb {R}}^{n + 1}\). In fact,

$$\begin{aligned}{\tilde{\nabla }}_{ij} \nu ^{n+1} = - {\tilde{g}}^{kl} ( u_l {\tilde{\nabla }}_i \tilde{h}_{jk} + \tilde{h}_{jk} {\tilde{\nabla }}_{il} u ) = - {\tilde{g}}^{kl} ( u_l {\tilde{\nabla }}_k \tilde{h}_{ij} + \nu ^{n+1} \tilde{h}_{il} \tilde{h}_{jk} ). \end{aligned}$$

\(\square \)
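Identity (4.2) also admits a quick numerical check for a graph \(x_{3} = \varphi (x_1, x_2)\) in \({\mathbb {R}}^{3}\): with tangent vectors \(\tau _i = (e_i, \varphi _i)\) and upward unit normal, the tangential gradient of the height function satisfies \({\tilde{g}}^{kl} u_k u_l = 1 - (\nu ^{n+1})^2\). The sketch below is ours, sampling random graph gradients.

```python
import math
import random

random.seed(1)
for _ in range(100):
    p, q = random.uniform(-2, 2), random.uniform(-2, 2)  # gradient of the graph
    # tangent vectors tau_i = (e_i, D_i phi); Euclidean induced metric g~_ij
    g = [[1 + p * p, p * q], [p * q, 1 + q * q]]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]          # equals 1 + p^2 + q^2
    ginv = [[g[1][1] / det, -g[0][1] / det], [-g[1][0] / det, g[0][0] / det]]
    # u = x_3 restricted to the graph, so u_j = tau_j . e_3 = D_j phi
    du = (p, q)
    lhs = sum(ginv[j][k] * du[j] * du[k] for j in range(2) for k in range(2))
    nu3 = 1.0 / math.sqrt(1 + p * p + q * q)             # last component of the unit normal
    assert abs(lhs - (1 - nu3 * nu3)) < 1e-9
```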

Lemma 4.2

Let \(\Sigma \) be a strictly locally convex hypersurface in \({\mathbb {H}}^{n+1}\) satisfying equation (2.5). Then in a local orthonormal frame on \(\Sigma \),

$$\begin{aligned} \begin{aligned} F^{ij} \nabla _{ij} \nu ^{n+1} =&- \nu ^{n+1} F^{ij} h_{ik} h_{kj} + \big ( 1 + (\nu ^{n+1})^2 \big ) F^{ij} h_{ij} - \nu ^{n+1} \sum f_i \\&- \frac{2}{u^2} F^{ij} h_{jk} u_i u_k + \frac{2 \nu ^{n+1}}{u^2} F^{ij} u_i u_j - \frac{u_k}{u} \psi _k. \end{aligned} \end{aligned}$$
(4.7)

Proof

By (4.1), (4.5),

$$\begin{aligned} \begin{aligned}&F^{ij} \nabla _{ij} \nu ^{n+1} \\&\quad = F^{ij} \Big ({\tilde{\nabla }}_{ij} \nu ^{n+1} + \frac{1}{u}\big ( u_i (\nu ^{n+1})_j + u_j (\nu ^{n+1})_i - {\tilde{g}}^{kl} u_l (\nu ^{n+1})_k {\tilde{g}}_{ij} \big )\Big ) \\&\quad = - \frac{\nu ^{n+1}}{u^2} F^{ij} \tilde{h}_{ik} \tilde{h}_{kj} - \frac{u_k}{u^2} F^{ij} {\tilde{\nabla }}_k \tilde{h}_{ij} - \frac{2}{u^3} F^{ij} \tilde{h}_{jk} u_i u_k - \frac{u_k}{u} (\nu ^{n+1})_k \,\sum f_i. \end{aligned} \end{aligned}$$
(4.8)

Since \(\Sigma \) can also be viewed as a hypersurface in \({\mathbb {R}}^{n+1}\),

$$\begin{aligned} F(g^{il} h_{lj}) = F\Big ( u^2 {\tilde{g}}^{il} \big ( \frac{1}{u}\,\tilde{h}_{lj} + \frac{\nu ^{n+1}}{u^2}\, {\tilde{g}}_{lj} \big )\Big ) = F \Big ( u \, {\tilde{g}}^{il}\,\tilde{h}_{lj} + \nu ^{n+1} \delta _{ij} \Big ) = \psi . \end{aligned}$$

Differentiating this equation with respect to \({\tilde{\nabla }}_k\) and multiplying by \(\frac{u_k}{u}\), we obtain

$$\begin{aligned} \frac{u_k^2}{u^3}\, F^{ij} \tilde{h}_{ij} + \frac{u_k}{u^2} F^{ij} {\tilde{\nabla }}_k \tilde{h}_{ij} + \frac{u_k}{u} (\nu ^{n+1})_k \sum f_i = \, \frac{ u_k}{u} \,\psi _k. \end{aligned}$$

Substituting this identity into (4.8),

$$\begin{aligned} F^{ij} \nabla _{ij} \nu ^{n+1} = - \frac{\nu ^{n+1}}{u^2} F^{ij} \tilde{h}_{ik} \tilde{h}_{kj} - \frac{2}{u^3} F^{ij} \tilde{h}_{jk} u_i u_k + \frac{u_k^2}{u^3} F^{ij} \tilde{h}_{ij} - \frac{u_k}{u} \,\psi _k. \end{aligned}$$

In view of (2.3), we obtain (4.7). \(\square \)

For global curvature estimates, we use the method in [4]. Assume

$$\begin{aligned} \nu ^{n+1} \,\ge \,2\, a > 0 \quad \quad \text{ on } \quad \Sigma \end{aligned}$$

for some constant a. Let \(\kappa _{\max } ({ \mathbf{x} })\) be the largest principal curvature of \(\Sigma \) at \(\mathbf{x}\). Consider

$$\begin{aligned} M_0 = \sup \limits _{\mathbf{x} \in \Sigma } \,\frac{\kappa _{\max \,}(\mathbf{x})}{{\nu }^{n+1} - a}. \end{aligned}$$

Assume \(M_0 > 0\) is attained at an interior point \({ \mathbf{x}}_0 \in \Sigma \). Let \(\tau _1, \ldots , \tau _n\) be a local orthonormal frame about \({ \mathbf{x}}_0\) such that \(h_{ij}(\mathbf{x}_0) = \kappa _i \,\delta _{ij}\), where \(\kappa _1, \ldots , \kappa _n\) are the hyperbolic principal curvatures of \(\Sigma \) at \(\mathbf{x}_0\). We may assume \(\kappa _1 = \kappa _{\max \,}(\mathbf{x}_0)\). Thus, \(\ln h_{11} - \ln ( {\nu }^{n+1} - a )\) has a local maximum at \(\mathbf{x}_0\), at which,

$$\begin{aligned} \frac{h_{11i}}{h_{11}} - \frac{\nabla _i \nu ^{n + 1}}{\nu ^{ n + 1 } - a} = 0, \end{aligned}$$
(4.9)
$$\begin{aligned} \frac{h_{11ii}}{h_{11}} - \frac{\nabla _{ii} \nu ^{n + 1}}{\nu ^{n + 1} - a} \,\le 0. \end{aligned}$$
(4.10)

Differentiate equation (2.5) twice,

$$\begin{aligned} F^{ii}\,h_{ii11}\, + \,F^{ij,\,rs} h_{ij1} h_{rs1}\,=\,\psi _{11} \, \ge \, - C \kappa _1. \end{aligned}$$
(4.11)

By the Gauss equation, we have the following commutation formula for second covariant derivatives of the second fundamental form:

$$\begin{aligned} h_{iijj} = h_{jjii} + ( \kappa _i\,\kappa _j - 1 )\,( \kappa _i - \kappa _j ). \end{aligned}$$
(4.12)
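For the reader's convenience, (4.12) can be obtained from the Codazzi and Ricci identities together with the Gauss equation in the ambient space of constant curvature \(-1\). A sketch, at a point where \((h_{ij})\) is diagonal (the signs of the curvature terms depend on the conventions of Sect. 2):

```latex
\begin{aligned}
h_{iijj} &= \nabla_j \nabla_i h_{ij}
    && \text{(Codazzi: $\nabla_k h_{ij}$ is totally symmetric)} \\
         &= \nabla_i \nabla_j h_{ij} + R_{jiim}\, h_{mj} + R_{jijm}\, h_{im}
    && \text{(Ricci identity)} \\
         &= h_{jjii} + (\kappa_i - \kappa_j)\, R_{jiji}
    && \text{(Codazzi again; $(h_{ij})$ diagonal)} \\
         &= h_{jjii} + (\kappa_i \kappa_j - 1)(\kappa_i - \kappa_j),
\end{aligned}
```

where the last step uses the Gauss equation \(R_{jiji} = \kappa _i \kappa _j - 1\) for a hypersurface in a space form of curvature \(-1\).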

Combining (4.10), (4.11), (4.12) and (4.7) yields,

$$\begin{aligned} \begin{aligned}&\Big ( \kappa _1^2 - \frac{1 + (\nu ^{n+1})^2}{\nu ^{n+1} - a} \kappa _1 + 1 \Big ) \,\sum f_i\,\kappa _i + \frac{a \kappa _1}{\nu ^{n+1} - a} \big (\sum f_i + \sum f_i \, \kappa _i^2 \big ) \\&- F^{ij, rs}\,h_{ij1}\,h_{rs1} + \frac{2 \kappa _1}{\nu ^{n+1} - a} \, \sum f_i \frac{u_i^2}{u^2} \big (\kappa _i - \nu ^{n+1}\big ) - C \kappa _1 \, \le 0. \end{aligned} \end{aligned}$$
(4.13)

Next, substituting (4.4) and (2.3) into (4.9) yields

$$\begin{aligned} h_{11i} = \frac{\kappa _1}{\nu ^{n+1} - a}\, \frac{u_i}{u} (\nu ^{n+1} - \kappa _i), \end{aligned}$$

and recall an inequality of Andrews [22] and Gerhardt [23],

$$\begin{aligned} - F^{ij, rs}\,h_{ij1}\,h_{rs1} \, \ge \, \sum \limits _{i \ne j} \frac{f_i - f_j}{\kappa _j - \kappa _i} h_{ij1}^2 \,\ge \, 2 \sum \limits _{i \ge 2} \frac{f_i - f_1}{\kappa _1 - \kappa _i}\, h_{i11}^2. \end{aligned}$$
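The nonnegativity of the quotients \(\frac{f_i - f_j}{\kappa _j - \kappa _i}\) for \(f = \sigma _k^{1/k}\) can be checked numerically: on \(\Gamma _k\), \(\kappa _i > \kappa _j\) forces \(f_i \le f_j\). The sketch below is ours, with illustrative choices \(n = 4\), \(k = 2\).

```python
import itertools
import math
import random

def sigma(j, lam):
    """j-th elementary symmetric function of the entries of lam."""
    return sum(math.prod(c) for c in itertools.combinations(lam, j))

def f_partial(lam, i, k):
    """Partial derivative of f = sigma_k^{1/k} with respect to lam_i."""
    rest = lam[:i] + lam[i + 1:]
    s = sigma(k, lam)
    return (1.0 / k) * s ** (1.0 / k - 1) * sigma(k - 1, rest)

random.seed(2)
n, k = 4, 2
checked = 0
for _ in range(1000):
    lam = [random.uniform(-1.0, 3.0) for _ in range(n)]
    if not all(sigma(j, lam) > 0 for j in range(1, k + 1)):
        continue                     # keep only samples in Gamma_k
    checked += 1
    for i in range(n):
        for j in range(n):
            if lam[i] > lam[j]:
                # (f_i - f_j)/(lam_j - lam_i) >= 0, i.e. f_i <= f_j
                assert f_partial(lam, i, k) <= f_partial(lam, j, k) + 1e-10
assert checked > 0
```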

Therefore, (4.13) becomes,

$$\begin{aligned} \begin{aligned} 0&\ge \Big ( \kappa _1^2 - \frac{1 + (\nu ^{n+1})^2}{\nu ^{n+1} - a} \kappa _1 + 1 \Big ) \,\sum f_i\,\kappa _i - C \kappa _1 + \frac{a \kappa _1}{\nu ^{n+1} - a} \big (\sum f_i + \sum f_i \, \kappa _i^2 \big ) \\&\quad + \frac{2 \,\kappa _1^2}{(\nu ^{n+1} - a)^2}\,\sum \limits _{i \ge 2} \frac{f_i - f_1}{\kappa _1 - \kappa _i}\, \frac{u_i^2}{u^2} (\nu ^{n+1} - \kappa _i)^2 + \frac{2 \kappa _1}{\nu ^{n+1} - a} \, \sum f_i \frac{u_i^2}{u^2} \big (\kappa _i - \nu ^{n+1}\big ). \end{aligned}\nonumber \\ \end{aligned}$$
(4.14)

For some fixed \(\theta \in (0, 1)\) which will be determined later, denote

$$\begin{aligned} J = \{ i: \, f_1 \ge \theta f_i, \quad \kappa _i< \nu ^{n+1} \},\quad \quad L = \{ i: \, f_1< \theta f_i, \quad \kappa _i < \nu ^{n+1} \}. \end{aligned}$$

The second line of (4.14) can be estimated as follows.

$$\begin{aligned} \begin{aligned}&\frac{2 \,\kappa _1^2}{(\nu ^{n+1} - a)^2}\,\sum \limits _{i \ge 2} \frac{f_i - f_1}{\kappa _1 - \kappa _i}\, \frac{u_i^2}{u^2} (\nu ^{n+1} - \kappa _i)^2 + \frac{2 \kappa _1}{\nu ^{n+1} - a} \sum f_i \frac{u_i^2}{u^2} \big (\kappa _i - \nu ^{n+1}\big ) \\&\quad \ge \, \frac{2 \kappa _1^2}{(\nu ^{n+1} - a)^2}\sum \limits _{i \in L} \frac{f_i - f_1}{\kappa _1 - \kappa _i} \frac{u_i^2}{u^2} (\nu ^{n+1} - \kappa _i)^2 + \frac{2 \kappa _1}{\nu ^{n+1} - a} \big ( \sum \limits _{i \in L} + \sum \limits _{i \in J} \big ) \frac{f_i u_i^2}{u^2} \big (\kappa _i - \nu ^{n+1}\big ) \\&\quad \ge \, \frac{2 (1 - \theta )\kappa _1}{(\nu ^{n+1} - a)^2}\sum \limits _{i \in L} \frac{f_i u_i^2}{u^2} (\nu ^{n+1} - \kappa _i)^2 + \frac{2 \kappa _1}{\nu ^{n+1} - a} \sum \limits _{i \in L} \frac{f_i u_i^2}{u^2} \big (\kappa _i - \nu ^{n+1}\big ) - \frac{2}{\theta a} \sum f_i \kappa _i \\&\quad = \, \frac{2 \kappa _1}{\nu ^{n+1} - a} \sum \limits _{i \in L} \frac{f_i u_i^2}{u^2} \Big (\frac{(\nu ^{n+1} - \kappa _i)^2}{\nu ^{n+1} - a} + \kappa _i - \nu ^{n+1}\Big ) \\&\qquad - \frac{ 2 \,\theta \kappa _1}{(\nu ^{n+1} - a)^2}\sum \limits _{i \in L} \frac{f_i u_i^2}{u^2} (\nu ^{n+1} - \kappa _i)^2 - \frac{2}{\theta a} \sum f_i \kappa _i \\&\quad \ge \, - \frac{2 \kappa _1}{\nu ^{n+1} - a} \sum \limits _{i \in L} \frac{f_i u_i^2}{u^2} \cdot \frac{\nu ^{n+1} + a}{\nu ^{n+1} - a} \, \kappa _i - \frac{4 \theta \kappa _1}{ a (\nu ^{n + 1} - a)} \sum f_i \big ( 1 + \kappa _i^2 \big ) - \frac{2}{\theta a} \sum f_i \kappa _i \\&\quad \ge - \frac{4 \theta \kappa _1}{a (\nu ^{n+1} - a)} \sum f_i \big ( 1 + \kappa _i^2 \big ) - \Big ( \frac{2}{\theta a} + \frac{4 \kappa _1}{a^2} \Big ) \sum f_i \kappa _i. \end{aligned} \end{aligned}$$

Here we have used \({\tilde{g}}^{kl} u_k u_l = \frac{\delta _{kl}}{u^2} u_k u_l = 1 - (\nu ^{n+1})^2\), which follows from (4.2). Choosing \(\theta = \frac{a^2}{4}\) and substituting the above inequality into (4.14), we obtain an upper bound for \(\kappa _1\).
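The fourth inequality in the chain above rests on the elementary estimate \(\frac{(\nu ^{n+1} - \kappa _i)^2}{\nu ^{n+1} - a} + \kappa _i - \nu ^{n+1} \ge - \frac{\nu ^{n+1} + a}{\nu ^{n+1} - a}\, \kappa _i\), which after clearing the denominator reduces to \(a \, \nu ^{n+1} + \kappa _i^2 \ge 0\). A numerical spot check (ours, with \(\nu ^{n+1} \ge 2a\) as assumed above):

```python
import random

random.seed(4)
for _ in range(10000):
    a = random.uniform(0.01, 0.45)
    nu = random.uniform(2 * a, 1.0)      # nu^{n+1} >= 2a > a on Sigma
    kappa = random.uniform(-5.0, nu)     # i in L requires kappa_i < nu^{n+1}
    lhs = (nu - kappa) ** 2 / (nu - a) + kappa - nu
    rhs = -(nu + a) * kappa / (nu - a)
    # equivalent to a*nu + kappa^2 >= 0, which always holds
    assert lhs >= rhs - 1e-9
```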

5 Existence of strictly locally convex solutions to (1.6)

The convexity of solutions is an essential prerequisite in this paper, for two reasons: first, the \(C^2\) boundary estimates derived in Sect. 3 require convexity; second, the \(C^2\) interior estimates for prescribed scalar curvature equations in Sect. 6 need a certain convexity assumption (see [12]). Therefore, preserving the convexity of solutions is vital for carrying out the continuity process. In this section, we first give a constant rank theorem in hyperbolic space (see [9,10,11, 24]).

Theorem 5.1

Let \(\Sigma \) be a \(C^4\) oriented connected hypersurface in \({\mathbb {H}}^{n+1}\) satisfying the prescribed curvature equation

$$\begin{aligned} \sigma _k (\kappa ) \, = \, \Psi ( x_1, \ldots , x_n, u ) > 0. \end{aligned}$$
(5.1)

Assume that the second fundamental form \(\{h_{ij}\}\) on \(\Sigma \) is positive semi-definite, and for any \(\mathbf{x} \in \Sigma \) and a local orthonormal frame \(\tau _1, \ldots , \tau _n\) around \(\mathbf{x}\) with \(\{ h_{ij} (\mathbf{x}) \}\) diagonal,

$$\begin{aligned} \sum \limits _{i \in B} \Big ( \Psi _{ii} - \frac{k+1}{k}\,\frac{\Psi _i^2}{\Psi } + k \,\Psi \Big ) (\mathbf{x}) \,\lesssim \, 0, \end{aligned}$$
(5.2)

where the symbol \(\lesssim \) is defined in [10] and B is the set of bad indices of \(\mathbf{x}\). Then the second fundamental form on \(\Sigma \) is of constant rank.

Let \(\Sigma \) be a locally convex hypersurface satisfying equation (5.1) for \(k < n\) with boundary \(\partial \Sigma \). If we can find a condition (which we call Condition I) guaranteeing that \(\Sigma \) is strictly locally convex in a neighbourhood of the boundary \(\partial \Sigma \), then together with condition (5.2) in Theorem 5.1, we can prove that \(\Sigma \) is strictly locally convex up to the boundary. However, we have not found a suitable Condition I. Still, we proceed to prove existence as if we had Condition I, in order to show how (5.2) and Condition I play their roles in the continuity process.

Now we prove the existence. We use the geometric quantities in Sect. 2, which are expressed in terms of u, and write Eq. (2.5) as

$$\begin{aligned} G( D^2 u, \,D u, u ) = \,F ( a_{ij} ) = \,f( \lambda ( a_{ij} ) ) = \, \sigma _k^{1/k} (\kappa ) = \,\psi ^{1/k} ( x,\, u ). \end{aligned}$$
(5.3)

For convenience, denote

$$\begin{aligned} G[u] = \, G (D^2 u, D u, u), \quad G^{ij}[u] = G^{ij} (D^2 u, D u, u), \quad \text{ etc. }\end{aligned}$$

Let \(\delta \) be a small positive constant such that

$$\begin{aligned} G[{\underline{u}}] = \, G( D^2 {\underline{u}}, \,D {\underline{u}}, \,{\underline{u}} ) \,> \delta \,{\underline{u}} \quad \text{ in }\quad \Omega _{\epsilon }. \end{aligned}$$
(5.4)

For \(t \in [0, 1]\), consider the following two auxiliary equations (see also [27]).

$$\begin{aligned}&\left\{ \begin{aligned} G (D^2 u, D u, u) \, =&\, \Big ( ( 1 - t ) \frac{{\underline{u}}}{ G[{\underline{u}}]} + t \,\delta ^{-1} \Big )^{-1} \,u \quad \quad&\text{ in } \quad \Omega _{\epsilon }, \\ u \, =&\,\, \epsilon \quad \quad&\text{ on } \quad \Gamma _{\epsilon }. \end{aligned} \right. \end{aligned}$$
(5.5)
$$\begin{aligned}&\left\{ \begin{aligned} G (D^2 u, D u, u) \, =&\,\,\Big ( ( 1 - t ) \,\delta ^{-1} \,u^{-1} + t \, \psi ^{- 1/k}(x, u) \Big )^{-1} \quad \quad&\text{ in } \quad \Omega _{\epsilon }, \\ u \, =&\,\, \epsilon \quad \quad&\text{ on } \quad \Gamma _{\epsilon }. \end{aligned} \right. \end{aligned}$$
(5.6)

Lemma 5.1

Let \(\psi (x)\) be a positive function defined on \(\overline{\Omega _{\epsilon }}\). For \(x \in \overline{\Omega _{\epsilon }}\) and a positive \(C^2\) function u which is strictly locally convex near x, if

$$\begin{aligned}G [u] (x) = F ( a_{ij}[u] )(x) = f (\kappa )(x) = \psi (x) \,u, \end{aligned}$$

then

$$\begin{aligned} G_u [u] (x) - \,\psi (x) \, < 0. \end{aligned}$$

Proof

By direct calculation,

$$\begin{aligned} G_u = F^{ij} \frac{1}{w} \gamma ^{ik} u_{k l} \gamma ^{lj} = \frac{1}{u} \Big ( \sum f_i \kappa _i - \frac{1}{w} \sum f_i \Big ). \end{aligned}$$

Since \( \sum f_i \kappa _i \le \psi (x) \,u\) by the concavity of f and \(f(0) = 0\),

$$\begin{aligned} G_u [ u ] (x) - \,\psi (x) \, \le \, - \frac{1}{w u}\, \sum f_i < 0. \end{aligned}$$

\(\square \)
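The key step above, \(\sum f_i \kappa _i \le f(\kappa )\), is in fact an equality for \(f = \sigma _k^{1/k}\), by Euler's identity for degree-one homogeneous functions; the inequality in general only requires concavity and \(f(0) = 0\). A numerical spot check of the Euler identity (ours, with illustrative \(n = 4\), \(k = 2\) and finite-difference gradients):

```python
import itertools
import math
import random

def sigma(j, lam):
    """j-th elementary symmetric function of the entries of lam."""
    return sum(math.prod(c) for c in itertools.combinations(lam, j))

def f(lam, k):
    """f = sigma_k^{1/k}, homogeneous of degree one."""
    return sigma(k, lam) ** (1.0 / k)

random.seed(3)
n, k = 4, 2
checked = 0
for _ in range(300):
    lam = [random.uniform(0.1, 3.0) for _ in range(n)]   # positive, so lam in Gamma_k
    checked += 1
    s = f(lam, k)
    h = 1e-6
    # forward-difference approximation of the partials f_i
    grad = [(f(lam[:i] + [lam[i] + h] + lam[i + 1:], k) - s) / h for i in range(n)]
    # Euler's identity: sum_i f_i * lam_i = f(lam)
    assert abs(sum(g * x for g, x in zip(grad, lam)) - s) < 1e-3
assert checked > 0
```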

Lemma 5.2

For any \(t \in [0, 1]\), if \({\underline{U}}\) and u are respectively any positive strictly locally convex subsolution and solution of (5.5), then \(u \ge {\underline{U}}\). In particular, the Dirichlet problem (5.5) has at most one strictly locally convex solution.

Proof

We only need to prove that \(u \ge {\underline{U}}\) in \(\Omega _{\epsilon }\). If not, then \({\underline{U}} - u\) achieves a positive maximum at \(x_0 \in \Omega _{\epsilon }\), at which,

$$\begin{aligned} {\underline{U}}(x_0) > u(x_0),\quad D {\underline{U}}(x_0) = D u(x_0), \quad D^2{\underline{U}}(x_0) \le D^2 u(x_0). \end{aligned}$$
(5.7)

Note that for any \(s \in [0, 1]\), the deformation \(u[s] := s \,{\underline{U}} + (1 - s)\, u\) is strictly locally convex near \(x_0\). This is because at \(x_0\),

$$\begin{aligned} \begin{aligned}&\delta _{ij} \,+ \,u[s] \cdot \,{\gamma }^{ik} \big [ u[s] \big ] \,\cdot ( u[s] )_{kl}\,\cdot \gamma ^{lj}\big [ u[s] \big ] \, \ge \, \delta _{ij} \,+ \,u[s] \, \,{\gamma }^{ik} [ {\underline{U}} ] \,\cdot {\underline{U}}_{kl}\,\cdot \gamma ^{lj}[{\underline{U}}] \\&\quad = (1 - s) \Big ( 1 - \frac{u}{{\underline{U}}}\Big ) \delta _{ij} + \frac{u[s]}{{\underline{U}}} \Big ( \delta _{ij} + {\underline{U}} \cdot \gamma ^{ik}[{\underline{U}}] \,\cdot {\underline{U}}_{kl}\,\cdot \gamma ^{lj}[{\underline{U}}] \Big ) > 0. \end{aligned} \end{aligned}$$

Denote

$$\begin{aligned} \theta (x, t) = \Big ( ( 1 - t ) \frac{{\underline{u}}}{ G[{\underline{u}}]} + t \,\delta ^{-1} \Big )^{-1} \end{aligned}$$
(5.8)

and define a differentiable function of \(s \in [0, 1]\):

$$\begin{aligned} a(s): = G \Big [ u[s] \Big ] (x_0) \,-\, \theta (x_0, t)\, \,u[s](x_0). \end{aligned}$$

Note that

$$\begin{aligned} a(0) = G[u](x_0) \,-\, \, \theta (x_0, t)\, \,u(x_0) \, = 0 \end{aligned}$$

and

$$\begin{aligned} a(1) = G [{\underline{U}}](x_0) \,-\, \theta (x_0, t)\, \,{\underline{U}} (x_0) \, \ge 0.\end{aligned}$$

Thus there exists \(s_0 \in [0, 1]\) such that \(a(s_0) = 0\) and \(a'(s_0) \ge 0\), i.e.,

$$\begin{aligned} G\big [ u[s_0] \big ] (x_0) \,=\, \theta (x_0, t) \, u[s_0] (x_0) \end{aligned}$$
(5.9)

and

$$\begin{aligned} \begin{aligned}&G^{ij}\big [ u[s_0] \big ](x_0) \,\, D_{ij} ({\underline{U}} - u)(x_0) + G^i \big [ u[s_0] \big ](x_0)\,\, D_i ({\underline{U}} - u)(x_0) \\&+ \Big (G_u \big [ u[s_0] \big ](x_0) - \theta (x_0, t)\, \Big ) ({\underline{U}} - u)(x_0) \ge 0. \end{aligned} \end{aligned}$$
(5.10)

However, the above inequality cannot hold by (5.7), (5.9) and Lemma 5.1. \(\square \)

Theorem 5.2

Under assumption (1.7) and Condition I, for any \(t \in [0, 1]\), the Dirichlet problem (5.5) has a unique strictly locally convex solution u, which satisfies \(u \ge {\underline{u}}\) in \(\Omega _{\epsilon }\).

Proof

Uniqueness is proved in Lemma 5.2. For existence of a strictly locally convex solution, we first verify that \(\Psi = (\theta (x, t) \, u)^k = \Theta (x, t) \,u^k\) satisfies condition (5.2) in the constant rank theorem. By direct calculation,

$$\begin{aligned} \begin{aligned}&\Psi _{ii} - \frac{k+1}{k}\,\frac{\Psi _i^2}{\Psi } + k \,\Psi \\&\quad = \sum \limits _{\alpha , \beta = 1}^n \,\Big ( \Theta _{x_{\alpha } x_{\beta }} - \frac{k+1}{k} \,\frac{\Theta _{x_\alpha } \Theta _{x_\beta }}{\Theta } \Big ) (x_{\alpha })_i (x_{\beta })_i \,u^k + \sum \limits _{\alpha = 1}^n \Theta _{x_{\alpha }} (x_{\alpha })_{ii} \,u^k \\&\qquad - 2 \sum \limits _{\alpha = 1}^n \Theta _{x_{\alpha }} (x_{\alpha })_i \, u^{k-1} u_i - 2 k\, \Theta u^{k-2} u_i^2 + \Theta \, k\, u^{k-1} u_{ii} + k\, \Theta \,u^k. \end{aligned} \end{aligned}$$

By (4.1), (4.3), (2.3) and (4.2), for \(i \in B\) and \(\alpha = 1, \ldots , n\), we have

$$\begin{aligned} \begin{aligned} (x_{\alpha })_{ii} \sim&- \nu ^{n+1} \,u \,\nu ^{\alpha } + \frac{2}{u} (x_\alpha )_i \,u_i - \frac{1}{u} \sum \limits _{l = 1}^n u_l \,(x_{\alpha })_l \\ =&- u \,(\nu \cdot \partial _{n+1}) (\nu \cdot \partial _{\alpha }) - u \sum \limits _{l = 1}^n \big (\frac{\tau _l}{u} \cdot \partial _{n+1}\big ) \,\big (\frac{\tau _l}{u} \cdot \partial _{\alpha }\big ) + \frac{2}{u} (x_\alpha )_i \,u_i \\ = \,&\frac{2}{u} (x_\alpha )_i \,u_i \end{aligned} \end{aligned}$$
(5.11)

and

$$\begin{aligned} u_{ii} \,\sim \, \frac{2}{u} \,u_i^2 - u. \end{aligned}$$
(5.12)

Therefore by (1.7),

$$\begin{aligned} \sum \limits _{i \in B} \Big (\Psi _{ii} - \frac{k+1}{k}\,\frac{\Psi _i^2}{\Psi } + k \,\Psi \Big ) \sim \, - k\, \Theta ^{ \frac{1}{k} + 1} \sum \limits _{i \in B} \sum \limits _{\alpha , \beta = 1}^n \,\Big ( \Theta ^{- \frac{1}{k}} \Big )_{x_{\alpha } x_{\beta }} (x_{\alpha })_i (x_{\beta })_i \,u^k \le 0. \end{aligned}$$

Next, we use the standard continuity method to prove existence. Note that \({\underline{u}}\) is a subsolution of (5.5) by (5.4). We have obtained the \(C^2\) bound for strictly locally convex solutions u (satisfying \(u \ge {\underline{u}}\) by Lemma 5.2) of (5.5), which implies the uniform ellipticity of equation (5.5). By Evans-Krylov theory [13, 14], we obtain a \(C^{2, \alpha }\) estimate independent of t,

$$\begin{aligned} \Vert u \Vert _{C^{2, \alpha } (\, \overline{ \Omega _{\epsilon } } \,)} \le C. \end{aligned}$$
(5.13)

Denote

$$\begin{aligned} C_0^{2, \alpha } (\, \overline{ \Omega _{\epsilon } } \,):= & {} \{ w \in C^{2, \alpha }( \, \overline{ \Omega _{\epsilon } } \, ) \,| \,w = 0 \,\, \text{ on } \,\, \Gamma _{\epsilon } \}, \\ {\mathcal {U}}:= & {} \left\{ w \in C_0^{2, \alpha } (\, \overline{ \Omega _{\epsilon } } \,) \,\Big | \, {\underline{u}} + w \,\,\text{ is }\,\,\text{ strictly }\,\,\text{ locally }\,\,\text{ convex }\,\,\text{ in } \,\,\overline{\Omega _{\epsilon }} \right\} . \end{aligned}$$

We can see that \(C_0^{2, \alpha } (\, \overline{ \Omega _{\epsilon } } \,)\) is a subspace of \(C^{2, \alpha }( \,\overline{ \Omega _{\epsilon } }\, )\) and \({\mathcal {U}}\) is an open subset of \(C_0^{2, \alpha } (\,\overline{ \Omega _{\epsilon } }\,)\). Consider the map \({\mathcal {L}}: \,{\mathcal {U}} \times [ 0, 1 ] \rightarrow C^{\alpha }(\, \overline{ \Omega _{\epsilon } } \,)\),

$$\begin{aligned} {\mathcal {L}} ( w, t ) = G [ {\underline{u}} + w ] \, - \, \theta (x, t) \,({\underline{u}} + w). \end{aligned}$$

Set

$$\begin{aligned} {\mathcal {S}} = \{ t \in [0, 1] \,|\, {\mathcal {L}}(w, t) = 0 \,\,\text{ has }\,\,\text{ a }\,\,\text{ solution }\,\,w \,\,\text{ in }\,\,{\mathcal {U}}\, \}. \end{aligned}$$

Note that \({\mathcal {S}} \ne \emptyset \) since \({\mathcal {L}}(0, 0) = 0\).

We claim that \({\mathcal {S}}\) is open in [0, 1]. In fact, for any \(t_0 \in {\mathcal {S}}\), there exists \(w_0 \in {\mathcal {U}}\) such that \({\mathcal {L}} ( w_0, t_0 ) = 0\). The Fréchet derivative of \({\mathcal {L}}\) with respect to w at \((w_0, t_0)\) is a linear elliptic operator from \(C^{2, \alpha }_0 (\, \overline{\Omega _{\epsilon }} \,)\) to \(C^{\alpha }(\, \overline{\Omega _{\epsilon }} \,)\),

$$\begin{aligned} {\mathcal {L}}_w \big |_{(w_0, t_0)} ( h )= & {} G^{ij}[{\underline{u}} + w_0]\, D_{ij} h + G^i [ {\underline{u}} + w_0 ]\, D_i h \\&\quad + \Big (G_u [ {\underline{u}} + w_0] - \theta (x, t_0) \Big ) h. \end{aligned}$$

By Lemma 5.1, \({\mathcal {L}}_w \big |_{(w_0, t_0)}\) is invertible. By the implicit function theorem, a neighborhood of \(t_0\) is also contained in \({\mathcal {S}}\).

Next, we show that \({\mathcal {S}}\) is closed in [0, 1]. Let \(t_i\) be a sequence in \({\mathcal {S}}\) converging to \(t_0 \in [0, 1]\) and let \(w_i \in {\mathcal {U}}\) be the unique (by Lemma 5.2) solution corresponding to \(t_i\), i.e. \({\mathcal {L}} (w_i, t_i) = 0\). By Lemma 5.2, \(w_i \ge 0\). By (5.13), \(u_i := {\underline{u}} + w_i\) is a bounded sequence in \(C^{2, \alpha }(\,\overline{\Omega _{\epsilon }}\,)\), which possesses a subsequence converging to a locally convex solution \(u_0\) of (5.5). By Condition I and Theorem 5.1, we know that \(u_0\) is strictly locally convex in \(\overline{\Omega _{\epsilon }}\). Since \(w_0 := u_0 - {\underline{u}} \in {\mathcal {U}}\) and \({\mathcal {L}}(w_0, t_0) = 0\), we conclude that \(t_0 \in {\mathcal {S}}\). \(\square \)

From now on we may assume \({\underline{u}}\) is not a solution of (1.6), since otherwise we are done.

Lemma 5.3

If \(u \ge {\underline{u}}\) is a strictly locally convex solution of (5.6) in \(\Omega _{\epsilon }\), then \(u > {\underline{u}}\) in \(\Omega _{\epsilon }\) and \((u - {\underline{u}})_{\gamma } > 0\) on \(\Gamma _{\epsilon }\).

Proof

To keep the strict local convexity of the variations in our proof, we rewrite (5.6) in terms of v,

$$\begin{aligned} \left\{ \begin{aligned} G(D^2 v, D v, v) =&\,\,\psi ^t(x, v) \quad \quad&\text{ in } \quad&\Omega _{\epsilon }, \\ v =&\,\,\epsilon ^2 \quad \quad&\text{ on } \quad&\Gamma _{\epsilon }. \end{aligned} \right. \end{aligned}$$
(5.14)

Since \({\underline{u}}\) is a subsolution but not a solution of (5.6), or equivalently, \({\underline{v}}\) is a subsolution but not a solution of (5.14), we have

$$\begin{aligned} G[{\underline{v}}] - G[v] \ge \psi ^t(x, {\underline{v}}) - \psi ^t(x, v). \end{aligned}$$
(5.15)

Denote \(v[s] := s \,{\underline{v}} + (1 - s)\, v\), which is strictly locally convex over \(\Omega _{\epsilon }\) for any \(s \in [0, 1]\) since

$$\begin{aligned} \delta _{ij} + \frac{1}{2} \big ( v[s] \big )_{ij} = \,s \Big ( \delta _{ij} + \frac{1}{2}\, {\underline{v}}_{ij} \Big ) + ( 1 - s ) \Big ( \delta _{ij} + \frac{1}{2}\, v_{ij} \Big ) > 0 \quad \text{ in } \quad \Omega _{\epsilon }. \end{aligned}$$
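The positivity above is simply the statement that the set of positive definite matrices is convex. As a quick numerical sanity check (not part of the proof; the two randomly generated symmetric positive definite matrices stand in for \(\delta _{ij} + \frac{1}{2}{\underline{v}}_{ij}\) and \(\delta _{ij} + \frac{1}{2}v_{ij}\) at a fixed point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_spd(n):
    # A A^T + I is symmetric positive definite
    a = rng.standard_normal((n, n))
    return a @ a.T + np.eye(n)

# stand-ins for (I + (1/2) D^2 v_bar) and (I + (1/2) D^2 v) at a fixed point
A, B = random_spd(n), random_spd(n)

for s in np.linspace(0.0, 1.0, 11):
    M = s * A + (1 - s) * B          # corresponds to I + (1/2) D^2 v[s]
    eigs = np.linalg.eigvalsh(M)
    assert eigs.min() > 0, "convex combination must stay positive definite"
print("positive definite for all s in [0, 1]")
```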

From (5.15) we can deduce that

$$\begin{aligned} a_{ij}(x) D_{ij}( {\underline{v}} - v ) + b_i(x) D_i({\underline{v}} - v) + c(x) ({\underline{v}} - v) \ge 0 \quad \quad \text{ in } \quad \Omega _{\epsilon }, \end{aligned}$$

where

$$\begin{aligned} a_{ij}(x)= & {} \int _0^1 G^{ij}\big [ v[s] \big ] (x) \,d s, \quad b_{i}(x) = \int _0^1 G^{i}\big [ v[s] \big ] (x) \,d s, \\c(x)= & {} \int _0^1 G_v \big [ v[s] \big ] (x) - {\psi ^t}_v (x, v[s] ) \,d s. \end{aligned}$$

Applying the Maximum Principle and Lemma H (see p. 212 of [25]) we conclude that \(v > {\underline{v}}\) in \(\Omega _{\epsilon }\) and \((v - {\underline{v}})_{\gamma } > 0\) on \(\Gamma _{\epsilon }\). Hence the lemma is proved. \(\square \)

Theorem 5.3

Under assumption (1.7), (1.8) and Condition I, for any \(t \in [0, 1]\), the Dirichlet problem (5.6) possesses a strictly locally convex solution satisfying \(u \ge {\underline{u}}\) in \(\Omega _{\epsilon }\). In particular, the Dirichlet problem (1.6) has a strictly locally convex solution \(u^{\epsilon }\) satisfying \(u^{\epsilon } \ge {\underline{u}}\) in \(\Omega _{\epsilon }\).

Proof

We first verify that

$$\begin{aligned}\Psi = \,\Big ( ( 1 - t ) \,\delta ^{-1} \,u^{-1} + t \, \psi ^{- 1/k}(x, u) \Big )^{-k}\end{aligned}$$

satisfies condition (5.2) in the constant rank theorem. In fact, by assumption (1.8), (5.11) and (5.12),

$$\begin{aligned}\begin{aligned}&k \,\psi ^{ \frac{1}{k} + 1} \sum \limits _{i \in B} \,\left( \big (\psi ^{- \frac{1}{k}}\big )_{ii} - \psi ^{- \frac{1}{k}} \right) \\&\quad \sim \, \sum \limits _{i \in B} \tau _i^{T} \left( \begin{array}{cc} \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{x_\beta }}{\psi } - \psi _{x_{\alpha } x_{\beta }} + \frac{u \psi _u - k \psi }{u^2} \delta _{\alpha \beta } &{} \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{u}}{\psi } - \psi _{x_{\alpha } u} - \frac{\psi _{x_\alpha }}{u} \\ \frac{k+1}{k} \frac{\psi _{x_\alpha } \psi _{u}}{\psi } - \psi _{x_{\alpha } u} - \frac{\psi _{x_\alpha }}{u} &{} \frac{k+1}{k} \frac{\psi _{u}^2}{\psi } - \psi _{u u} - \frac{k \,\psi }{u^2} - \frac{\psi _u}{u} \\ \end{array} \right) \tau _i \ge 0, \end{aligned} \end{aligned}$$

and consequently,

$$\begin{aligned} \begin{aligned}&\sum \limits _{i \in B} \Big (\Psi _{ii} - \frac{k+1}{k}\,\frac{\Psi _i^2}{\Psi } + k \,\Psi \Big ) \\&\quad = \, - k\, \Psi ^{\frac{ k + 1}{k}} \sum \limits _{i \in B} \left( (1 - t) \delta ^{-1} \Big ( (u^{-1})_{ii} - u^{-1} \Big ) + t \Big ( (\psi ^{-1/k})_{ii} - \psi ^{-1/k} \Big ) \right) \, \lesssim 0. \end{aligned} \end{aligned}$$

We have established \(C^{2, \alpha }\) estimates for strictly locally convex solutions \(u \ge {\underline{u}}\) of (5.6), which further imply \(C^{4, \alpha }\) estimates by classical Schauder theory,

$$\begin{aligned} \Vert u \Vert _{C^{4,\alpha }(\overline{\Omega _{\epsilon }})} < C_4. \end{aligned}$$
(5.16)

In addition, we have

$$\begin{aligned} \text{ dist }(\kappa [u], \partial \Gamma _k)> c_2 > 0 \quad \text{ in } \,\, \overline{\Omega _{\epsilon }}, \end{aligned}$$
(5.17)

where \(C_4\), \(c_2\) are independent of t. Denote

$$\begin{aligned} C_0^{ 4, \alpha } (\,\overline{\Omega _{\epsilon }}\,) := \{ w \in C^{ 4, \alpha }( \,\overline{\Omega _{\epsilon }}\, ) \,| \,w = 0 \,\, \text{ on } \,\, \Gamma _{\epsilon } \} \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {O}}} := \left\{ w \in C_0^{4, \alpha } (\overline{\Omega _{\epsilon }}) \,\left| \,\begin{aligned}&w> 0 \,\,\text{ in }\,\,\Omega _{\epsilon }, \quad w_{\gamma }> 0 \,\,\text{ on }\,\, \Gamma _{\epsilon }, \quad \Vert w {\Vert }_{C^{4,\alpha }(\overline{\Omega _{\epsilon }})} < C_4 + \Vert {\underline{u}}\Vert _{C^{4,\alpha }(\overline{\Omega _{\epsilon }})}\\&\{ \delta _{ij} + ({\underline{u}} + w)_i ({\underline{u}} + w)_j + ({\underline{u}} + w) ({\underline{u}} + w)_{ij} \}> 0 \,\, \text{ in } \,\, \overline{\Omega _{\epsilon }}, \\&\text{ dist }(\kappa [{\underline{u}} + w], \partial \Gamma _k) > c_2 \,\, \text{ in } \,\, \overline{\Omega _{\epsilon }} \end{aligned} \right. \right\} , \end{aligned}$$

which is a bounded open subset of \(C_0^{ 4, \alpha } (\,\overline{\Omega _{\epsilon }}\,)\). Define \({{\mathcal {M}}}_t (w): \,{{\mathcal {O}}} \times [ 0, 1 ] \rightarrow C^{2,\alpha }(\overline{\Omega _{\epsilon }})\),

$$\begin{aligned} {{\mathcal {M}}}_t (w) = G [ {\underline{u}} + w ] \, -\, \Big (( 1 - t ) \,\delta ^{ - 1} \cdot ({\underline{u}} + w)^{-1} + \, t \, \psi ^{-1/k}(x, {\underline{u}} + w) \Big )^{-1}. \end{aligned}$$

Let \(u^0\) be the unique strictly locally convex solution of (5.5) at \(t = 1\) (the existence and uniqueness are guaranteed by Theorem 5.2 and Lemma 5.2). Observe that \(u^0\) is also the unique solution of (5.6) when \(t = 0\). By Lemma 5.2, \(w^0: = u^0 - {\underline{u}} \ge 0\) in \(\Omega _{\epsilon }\). By Lemma 5.3, \(w^0 > 0\) in \(\Omega _{\epsilon }\) and \({w^0}_{\gamma } > 0\) on \(\Gamma _{\epsilon }\). Also, \({\underline{u}} + w^0\) satisfies (5.16) and (5.17). Thus, \(w^0 \in {{\mathcal {O}}}\). By Condition I, Theorem 5.1, Lemma 5.3, (5.16) and (5.17), \({{\mathcal {M}}}_t(w) = 0\) has no solution on \(\partial {{\mathcal {O}}}\) for any \(t \in [0, 1]\). Besides, \({{\mathcal {M}}}_t\) is uniformly elliptic on \({{\mathcal {O}}}\), with ellipticity constants independent of t. Therefore, we can define the t-independent degree of \({{\mathcal {M}}}_t\) on \({{\mathcal {O}}}\) at 0:

$$\begin{aligned} \deg ({{\mathcal {M}}}_t, {{\mathcal {O}}}, 0). \end{aligned}$$

To find this degree, we only need to compute \( \deg ({{\mathcal {M}}}_0, {{\mathcal {O}}}, 0) \). By the above discussion, we know that \({{\mathcal {M}}}_0 ( w ) = 0\) has a unique solution \(w^0 \in {{\mathcal {O}}}\). The Fréchet derivative of \({{\mathcal {M}}}_0\) with respect to w at \(w^0\) is a linear elliptic operator from \(C^{4, \alpha }_0 (\overline{\Omega _{\epsilon }})\) to \(C^{2, \alpha }(\overline{\Omega _{\epsilon }})\),

$$\begin{aligned} {{\mathcal {M}}}_{0,w} |_{w^0} ( h ) = \, G^{ij}[ u^0 ]\, D_{ij} h + G^i [ u^0 ] \,D_i h \, + ( G_u [ u^0 ] - \,\delta \, ) h. \end{aligned}$$
(5.18)

By Lemma 5.1, \(G_u [ u^0 ] - \,\delta \, < 0\) in \(\overline{\Omega _{\epsilon }}\) and thus \({{\mathcal {M}}}_{0,w} |_{w^0}\) is invertible. By the degree theory established in [17],

$$\begin{aligned} \deg ({{\mathcal {M}}}_0, {{\mathcal {O}}}, 0) = \deg ( {{\mathcal {M}}}_{0, w^0}, B_1, 0 ) = \pm 1 \ne 0, \end{aligned}$$

where \(B_1\) is the unit ball in \(C_0^{4,\alpha }(\overline{\Omega _{\epsilon }})\). Thus \(\deg ({{\mathcal {M}}}_t, {{\mathcal {O}}}, 0) \ne 0\) for all \(t \in [0, 1]\), which implies that the Dirichlet problem (5.6) has at least one strictly locally convex solution \(u \ge {\underline{u}}\) for any \(t \in [0, 1]\). \(\square \)

6 Interior second order estimates for prescribed scalar curvature equations in \(\pmb {{\mathbb {H}}}^{n+1}\)

Let \(u^{\epsilon } \ge {\underline{u}}\) be a strictly locally convex solution over \(\Omega _{\epsilon }\) to the Dirichlet problem (1.6). For any fixed \( \epsilon _0 > 0\), we want to establish the uniform \(C^2\) estimate for \(u^{\epsilon }\) on \(\overline{\Omega _{\epsilon _0}}\) for any \(0< \epsilon < \frac{\epsilon _0}{4}\), namely,

$$\begin{aligned} \Vert u^{\epsilon } \Vert _{C^2(\,\overline{\Omega _{\epsilon _0}}\,)} \le C, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{4}. \end{aligned}$$
(6.1)

In what follows, let C be a positive constant which is independent of \(\epsilon \) but depends on \(\epsilon _0\). By (3.1), we immediately obtain the uniform \(C^0\) estimate:

$$\begin{aligned} \epsilon _0 \,\le \, u^{\epsilon } \, \le \, C \quad \text{ on } \quad \overline{\Omega _{\epsilon _0}}, \quad \quad \quad \forall \quad 0< \epsilon < \epsilon _0. \end{aligned}$$
(6.2)

For the uniform \(C^1\) estimate on \(\overline{\Omega _{\epsilon _0}}\), we make use of the Euclidean strict local convexity of \((u^\epsilon )^2 + |x|^2\) (see [26] for a similar idea) to obtain

$$\begin{aligned} \max \limits _{\overline{\Omega _{\epsilon _0}}} \big \vert D \big ((u^\epsilon )^2 + |x|^2\big ) \big \vert \le \frac{C (n) \max \limits _{\overline{\Omega _{\epsilon _0 / 2}}} \big ( (u^\epsilon )^2 + |x|^2 \big )}{\text{ dist } (\Gamma _{\epsilon _0 / 2}, \overline{\Omega _{\epsilon _0}})}, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{2}. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert u^{\epsilon } \Vert _{C^1 (\,\overline{\Omega _{\epsilon _0}})} \,\le \, C, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{2}. \end{aligned}$$
(6.3)

We are now in a position to prove

$$\begin{aligned} \big | D^2 u^{\epsilon } \big |\,\le \, C \quad \text{ on } \quad \overline{\Omega _{\epsilon _0}}, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{4}, \end{aligned}$$
(6.4)

which is equivalent to

$$\begin{aligned} \max \limits _{\overline{\Omega _{\epsilon _0}}} \big | \kappa _i [ u^{\epsilon }] \big |\,\le \, C, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{4}. \end{aligned}$$
(6.5)

Choose \(r = \text{ dist }(\overline{\Omega _{\epsilon _0}}, \Gamma _{\epsilon _0 / 2})\) and cover \(\overline{\Omega _{\epsilon _0}}\) by finitely many open balls \(B_{\frac{r}{2}}\) of radius \(\frac{r}{2}\) centered in \(\Omega _{\epsilon _0}\). Note that the number of such balls depends on \(\epsilon _0\). In addition, the corresponding balls \(B_r\) are all contained in \(\Omega _{\epsilon _0 / 2}\), on which we can apply the gradient estimate due to (6.3):

$$\begin{aligned}\Vert u^{\epsilon } \Vert _{C^1 (\,\overline{\Omega _{\epsilon _0 / 2}})} \,\le \, C, \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{4}. \end{aligned}$$

If we are able to establish the following interior \(C^2\) estimate on each \(B_r\):

$$\begin{aligned} \sup \limits _{B_{r/2}} \, \big | \kappa _i [u^{\epsilon }] \big | \,\le \, C (\Vert u^{\epsilon } \Vert _{C^1 (B_r)} ), \quad \quad \quad \forall \quad 0< \epsilon < \frac{\epsilon _0}{4}, \end{aligned}$$

then (6.5) follows. Since the principal curvatures \(\kappa _i [u^{\epsilon }]\), \(i = 1, \ldots , n\), and the gradient \(D u^{\epsilon }\) are invariant under changes of the Euclidean coordinate system, we may assume that \(B_r\) is centered at 0. For convenience, we also omit the superscript in \(u^{\epsilon }\) and simply write u.

In what follows, we will use Guan-Qiu’s idea [12] to derive the interior \(C^2\) estimate

$$\begin{aligned} \sup \limits _{B_{r/2}} \, | \kappa _i (x) | \,\le \, C \end{aligned}$$
(6.6)

for a strictly locally convex hypersurface \(\Sigma \) in \({\mathbb {H}}^{n+1}\) satisfying the equation

$$\begin{aligned} \sigma _2 ( \kappa ) = \,\psi ( \mathbf{x} ), \end{aligned}$$
(6.7)

where \(B_r \subset {\mathbb {R}}^n\) is the open ball with radius r centered at 0 and C is a positive constant depending only on n, r, \(\Vert \Sigma \Vert _{C^1( B_r )}\), \(\Vert \psi \Vert _{C^2(B_r)}\) and \(\inf _{B_r} \psi \).

For \(x \in B_r\) and \(\xi \in \mathbb {S}^{n-1} \cap T_{(x, u)} \Sigma \), consider the test function

$$\begin{aligned} \Theta (x, u, \xi ) = \, 2 \ln \rho (x) + \alpha \Big (\frac{u}{\nu ^{n+1}} \Big )^2 - \beta \left( \frac{\mathbf{x} \cdot \nu }{{\nu }^{n+1}}\right) + \ln \ln h_{\xi \xi }, \end{aligned}$$

where \(\rho (x) = r^2 - |x|^2\) with \(|x|^2 = \sum _{i = 1}^n x_i^2\), and \(\alpha \), \(\beta \) are positive constants to be determined later. We remind the reader that \(\cdot \) denotes the Euclidean inner product in \({\mathbb {R}}^{n+1}\), while \(\langle \,\, , \,\, \rangle \) denotes the hyperbolic inner product in \({\mathbb {H}}^{n+1}\).

The maximum value of \(\Theta \) is attained at an interior point \(x^0 = (x_1, \ldots , x_n) \in B_r\). Let \(\tau _1, \ldots , \tau _n\) be a normal coordinate frame around \((x^0, u(x^0))\) on \(\Sigma \), and assume that the maximum is attained in the direction \(\xi = {\tau }_1\). By a rotation of \(\tau _2, \ldots , \tau _n\) we may assume that \(\big ( h_{ij}(x^0) \big )\) is diagonal. Thus, the function

$$\begin{aligned} 2 \ln \rho (x) + \alpha \Big (\frac{u}{\nu ^{n+1}} \Big )^2 - \beta \,\Big ( \frac{\mathbf{x} \cdot \nu }{{\nu }^{n+1}} \Big ) + \ln \ln h_{11} \end{aligned}$$

also achieves its maximum at \(x^0\). Therefore, at \(x^0\),

$$\begin{aligned}&\frac{2 \,{\rho }_i}{\rho } + 2 \alpha \frac{u}{\nu ^{n + 1}} \Big ( \frac{u}{\nu ^{n+1}} \Big )_i - \beta \left( \frac{\mathbf{x} \cdot \nu }{ {\nu }^{n+1}}\right) _i + \frac{h_{11i}}{h_{11}\, \ln h_{11}} = 0, \end{aligned}$$
(6.8)
$$\begin{aligned}&\begin{aligned}&\frac{2 \sigma _2^{ii} \rho _{ii}}{\rho } - \frac{2 \sigma _2^{ii} \rho _i^2}{\rho ^2} + 2 \alpha \sigma _2^{ii} \left( \Big ( \frac{u}{\nu ^{n+1}} \Big )_i^2 + \Big ( \frac{u}{\nu ^{n+1}} \Big ) \Big ( \frac{u}{\nu ^{n+1}} \Big )_{ii} \right) \\&- \beta \sigma _2^{ii} \left( \frac{\mathbf{x} \cdot \nu }{{\nu }^{n+1}}\right) _{ii} + \frac{\sigma _2^{ii} h_{11ii}}{h_{11} \ln h_{11}} - (1 + \ln h_{11}) \frac{\sigma _2^{ii} h_{11i}^2}{(h_{11} \ln h_{11})^2} \,\,\le 0. \end{aligned} \end{aligned}$$
(6.9)

To compute the quantities in (6.8) and (6.9), we first convert them into quantities in \({\mathbb {H}}^{n+1}\) and then apply the Gauss and Weingarten formulas

$$\begin{aligned} \mathbf{D}_{\tau _i} \tau _j = \nabla _{\tau _i} \tau _j + h_{ij}\, \mathbf{n}, \\\mathbf{n}_i = - h_{ij} \,\tau _j. \end{aligned}$$

We also note that in \({\mathbb {H}}^{n+1}\),

$$\begin{aligned} \mathbf{D}_\mathbf{y} \,\partial _{n+1} = \, - \frac{1}{u} \,\mathbf{y}, \end{aligned}$$

where \(\mathbf{y}\) is any vector field in \({\mathbb {H}}^{n+1}\). This implies that \(\partial _{n+1}\) is a conformal Killing field in \({\mathbb {H}}^{n+1}\). By straightforward calculation, we obtain

$$\begin{aligned} \Big ( \frac{u}{\nu ^{n+1}} \Big )_i \,= & {} \, \Big ( \frac{1}{\langle \mathbf{n}, \,\partial _{n+1} \rangle } \Big )_i = \,\kappa _i \,\frac{\tau _i \cdot \partial _{n+1}}{(\nu ^{n+1})^2}, \end{aligned}$$
(6.10)
$$\begin{aligned} \Big ( \frac{u}{\nu ^{n+1}} \Big )_{ii}= & {} h_{iij} \frac{\tau _j \cdot \partial _{n+1}}{ (\nu ^{n+1})^2} + \kappa _i^2 \frac{u}{\nu ^{n+1}} - \frac{u}{(\nu ^{n+1})^2} \,\kappa _i + 2 \kappa _i^2 \frac{(\tau _i \cdot \partial _{n+1})^2}{u (\nu ^{n+1})^3 }. \end{aligned}$$
(6.11)

Now we choose the conformal Killing field \(\mathbf{x}\) in \({\mathbb {H}}^{n+1}\) to be

$$\begin{aligned} \mathbf{x} = x_{n+1} \,\sum _{i = 1}^n \, x_i \partial _i + \frac{1}{2} \,\Big ( x_{n+1}^2 - |x|^2 \Big ) \,\partial _{n+1}. \end{aligned}$$

We can verify that

$$\begin{aligned} \mathbf{D}_\mathbf{y} \, \mathbf{x} \,=\,\phi \,\,\mathbf{y}, \quad \quad \quad \phi = \frac{x_{n+1}^2 + |x|^2}{2\, x_{n+1}}, \end{aligned}$$

where \(\mathbf{y}\) is any vector field in \({\mathbb {H}}^{n+1}\).
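Both conformal Killing identities above, for \(\partial _{n+1}\) and for \(\mathbf{x}\), can be spot-checked symbolically from the Christoffel symbols of the half-space metric. The following sympy sketch does this for \(n = 2\), i.e. in \({\mathbb {H}}^{3}\) with coordinates \((x_1, x_2, w)\) where \(w\) plays the role of \(x_{n+1}\); it is a sanity check of the two displayed formulas, not part of the proof:

```python
import sympy as sp

# half-space model of H^3: metric g_ij = delta_ij / w^2 in coordinates (x1, x2, w)
x1, x2, w = coords = sp.symbols('x1 x2 w', positive=True)
dim = 3
g = sp.eye(dim) / w**2
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
    return sum(ginv[k, l] * (sp.diff(g[j, l], coords[i])
                             + sp.diff(g[i, l], coords[j])
                             - sp.diff(g[i, j], coords[l]))
               for l in range(dim)) / 2

def covariant_derivative(X, Y):
    # (D_Y X)^k = Y^i (d_i X^k + Gamma^k_{ij} X^j)
    return [sum(Y[i] * (sp.diff(X[k], coords[i])
                        + sum(christoffel(k, i, j) * X[j] for j in range(dim)))
                for i in range(dim)) for k in range(dim)]

Y = sp.symbols('y1 y2 y3')  # components of an arbitrary vector field

# D_Y d_{n+1} = -(1/u) Y, with u = w along the hypersurface
E = [0, 0, 1]
assert all(sp.simplify(c + Y[k] / w) == 0
           for k, c in enumerate(covariant_derivative(E, Y)))

# D_Y x = phi * Y with phi = (w^2 + |x|^2) / (2w)
X = [w * x1, w * x2, (w**2 - x1**2 - x2**2) / 2]
phi = (w**2 + x1**2 + x2**2) / (2 * w)
assert all(sp.simplify(c - phi * Y[k]) == 0
           for k, c in enumerate(covariant_derivative(X, Y)))
print("both conformal Killing identities verified in H^3")
```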

Again, by straightforward calculation, we find that

$$\begin{aligned}&\left( \frac{\mathbf{x} \cdot \nu }{{\nu }^{n+1}}\right) _i = \frac{\kappa _i}{u \, \nu ^{n + 1}} \,\left( \frac{ (\mathbf{x} \cdot \nu ) \, (\tau _i \cdot \partial _{n+1} )}{\nu ^{n+1}} - \mathbf{x} \cdot \tau _i \right) , \end{aligned}$$
(6.12)
$$\begin{aligned}&\begin{aligned} \left( \frac{\mathbf{x \cdot \nu }}{\nu ^{ n + 1}} \right) _{ii} = \,&\, - \Big ( \frac{ \phi \, u}{\nu ^{n + 1}} + \frac{\mathbf{x} \cdot \nu }{(\nu ^{n+1})^2} \Big ) \kappa _i + \frac{2 \kappa _i (\tau _i \cdot \partial _{n + 1})}{u \nu ^{n+1}} \Big ( \frac{\mathbf{x} \cdot \nu }{\nu ^{n+1}} \Big )_i \\&+ \frac{1}{u (\nu ^{n+1})^2} \Big ( (\mathbf{x} \cdot \nu ) (\tau _j \cdot \partial _{n+1}) - (\mathbf{x} \cdot \tau _j) \nu ^{n + 1}\Big ) h_{iij}. \end{aligned} \end{aligned}$$
(6.13)

Also, since

$$\begin{aligned} |x|^2 \,=\, \frac{1 - 2 \langle \mathbf{x}, \partial _{n+1} \rangle }{\langle \partial _{n+1}, \partial _{n+1} \rangle }, \end{aligned}$$

by direct calculation we obtain

$$\begin{aligned}&\begin{aligned} \rho _i&= \, 2 u^3 \langle \tau _i, \partial _{n+1} \rangle \langle \mathbf{x}, \partial _{n+1} \rangle - 2 u \langle \mathbf{x}, \tau _i \rangle \\&=\frac{2}{u} \Big ( ( \tau _i \cdot \partial _{n+1}) ( \mathbf{x} \cdot \partial _{n+1} ) - \mathbf{x} \cdot \tau _i \Big ), \end{aligned} \end{aligned}$$
(6.14)
$$\begin{aligned}&\begin{aligned} \rho _{ii}&= \kappa _i \Big ( (u^2 - |x|^2) \nu ^{n+1} - 2 \mathbf{x} \cdot \nu \Big ) \\&\quad + \frac{4 u^2 - 2 |x|^2 }{u^2} (\tau _i \cdot \partial _{n+1})^2 - \frac{4}{u^2} (\tau _i \cdot \mathbf{x})(\tau _i \cdot \partial _{n+1}) - 2 u^2. \end{aligned} \end{aligned}$$
(6.15)
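The identity for \(|x|^2\) used above is elementary in the half-space model, where \(\langle a, b \rangle = (a \cdot b)/x_{n+1}^2\); a symbolic check for \(n = 2\) (a sketch, not part of the proof):

```python
import sympy as sp

x1, x2, w = sp.symbols('x1 x2 w', positive=True)

def hyp(a, b):
    # hyperbolic inner product in the half-space model: <a,b> = (a.b)/w^2
    return sum(ai * bi for ai, bi in zip(a, b)) / w**2

X = (w * x1, w * x2, (w**2 - x1**2 - x2**2) / 2)   # the conformal Killing field x
e = (0, 0, 1)                                       # the field d_{n+1}

lhs = x1**2 + x2**2                                 # |x|^2
rhs = (1 - 2 * hyp(X, e)) / hyp(e, e)
assert sp.simplify(lhs - rhs) == 0
print("|x|^2 identity verified")
```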

Differentiating (6.7) twice yields

$$\begin{aligned} \sigma _2^{ii} h_{iik} \,= & {} \,\psi _k, \end{aligned}$$
(6.16)
$$\begin{aligned} \sum \limits _{i \ne j} h_{ii1} h_{jj1} - \sum \limits _{ i \ne j} h_{i j 1}^2 + \sigma _2^{ii} h_{ii11}= & {} \,\, \psi _{11} \,\ge \, - C \kappa _1. \end{aligned}$$
(6.17)

Now substituting (6.15), (6.10), (6.11), (6.13), (6.8), (6.16), (4.12) and (6.17) into (6.9), we obtain

$$\begin{aligned} \begin{aligned}&- \frac{C}{\rho } \,\sigma _1 - C \alpha - C \beta - \frac{2 \sigma _2^{ii} \rho _i^2}{\rho ^2} + 2 \alpha \frac{u^2}{(\nu ^{n+1})^2} \sigma _2^{ii} \kappa _i^2 - \frac{2 \sigma _2^{ii} \kappa _i (\tau _i \cdot \partial _{n+1}) \,h_{11i}}{u \,\nu ^{n+1} \kappa _1 \ln \kappa _1} \\&\quad + \frac{\sum _{i \ne j} h_{ij1}^2 - \sum _{i \ne j} h_{ii1} h_{jj1}}{\kappa _1 \,\ln \kappa _1} - \frac{C \sigma _1}{\ln \kappa _1} - \frac{\sigma _2^{ii} \kappa _i^2}{\ln \kappa _1} - \big ( 1 + \ln \kappa _1 \big ) \frac{\sigma _2^{ii} h_{11i}^2}{(\kappa _1 \ln \kappa _1)^2} \, \le 0. \end{aligned} \end{aligned}$$
(6.18)

By Theorem 1.2 of [28] (see also Lemma 2 of [12]), we have

$$\begin{aligned} - \sum _{i \ne j} h_{ii1} h_{jj1} \ge \, \frac{1}{2 \sigma _2} \,\frac{(n-1) \big (2 \sigma _2 \,h_{111} - \kappa _1 \,\psi _1 \big )^2}{(n - 1) \kappa _1^2 + 2 (n - 2) \sigma _2} - \frac{\psi _1^2}{2 \sigma _2}.\end{aligned}$$

Also,

$$\begin{aligned} - \frac{2 \sigma _2^{ii} \kappa _i (\tau _i \cdot \partial _{n+1}) \,h_{11i}}{u \,\nu ^{n+1} \kappa _1 \ln \kappa _1} \ge \, - \frac{u^2}{(\nu ^{n+1})^2} \sigma _2^{ii} \kappa _i^2 - \frac{(\tau _i \cdot \partial _{n+1})^2}{u^4} \, \frac{\sigma _2^{ii} h_{11i}^2}{(\kappa _1 \ln \kappa _1)^2}. \end{aligned}$$

Thus, when \(\kappa _1\) is sufficiently large, (6.18) reduces to

$$\begin{aligned} - \frac{C}{\rho } \sigma _1 - \frac{2 \,\sigma _2^{ii} \,\rho _i^2}{\rho ^2} + (2 \alpha - 2) \frac{u^2}{(\nu ^{n+1})^2} \sigma _2^{ii} \kappa _i^2 + \frac{\sigma _2^{ii}\, h_{11i}^2}{20 \,\kappa _1^2 \,\ln \kappa _1} \, \le 0. \end{aligned}$$
(6.19)

As in [12], we divide our discussion into three cases. We present all the details in order to indicate the slight differences caused by the ambient space \({\mathbb {H}}^{n+1}\).

Case (i): when \(|x|^2 \le \frac{r^2}{2}\), we have \(\frac{1}{\rho } \le \frac{2}{r^2}\). Then (6.19) reduces to

$$\begin{aligned} - C \sigma _1 + ( 2 \alpha - 2 ) \frac{u^2}{(\nu ^{n+1})^2} (\sigma _2 \sigma _1 - 3 \sigma _3) \,\le 0. \end{aligned}$$

Choosing \(\alpha \) sufficiently large we obtain an upper bound for \(\kappa _1\).
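The display above uses the standard identity \(\sum _i \sigma _2^{ii} \kappa _i^2 = \sigma _1 \sigma _2 - 3 \sigma _3\), where \(\sigma _2^{ii} = \partial \sigma _2 / \partial \kappa _i = \sigma _1 - \kappa _i\). A numerical spot-check with random positive curvature vectors (illustration only, not part of the proof):

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    # k-th elementary symmetric function of lam
    return sum(np.prod(c) for c in combinations(lam, k))

rng = np.random.default_rng(1)
for _ in range(100):
    lam = rng.uniform(0.1, 5.0, size=5)
    s1, s2, s3 = (sigma(k, lam) for k in (1, 2, 3))
    # sigma_2^{ii} = d sigma_2 / d kappa_i = s1 - lam_i
    lhs = sum((s1 - lam[i]) * lam[i]**2 for i in range(len(lam)))
    assert abs(lhs - (s1 * s2 - 3 * s3)) < 1e-9 * max(1.0, abs(s1 * s2))
print("sum_i sigma2^{ii} kappa_i^2 = sigma1*sigma2 - 3*sigma3 verified numerically")
```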

Next, we consider the cases when \(|x|^2 \ge \frac{r^2}{2}\), which implies \(\rho \le \frac{r^2}{2}\). We observe that

$$\begin{aligned} \rho _i = - \frac{2}{u} \Big ( \mathbf{x} - ( \mathbf{x} \cdot \partial _{n+1}) \, \partial _{n+1} \Big ) \cdot \tau _i = \,- \frac{2}{u} \sum \limits _{j = 1}^n ( \mathbf{x} \cdot \partial _{j}) \, (\partial _{j} \cdot \tau _i). \end{aligned}$$
(6.20)

Therefore,

$$\begin{aligned} \begin{aligned} \sum \limits _i \rho _i^2 = \,\,&\frac{4}{u^2} \sum \limits _{jk} (\mathbf{x} \cdot \partial _j) (\mathbf{x} \cdot \partial _k) \sum \limits _i (\partial _j \cdot \tau _i) (\partial _k \cdot \tau _i) \\ = \,\,&4 \sum \limits _{jk} (\mathbf{x} \cdot \partial _j) (\mathbf{x} \cdot \partial _k) \Big ( \sum \limits _i \big (\partial _j \cdot \frac{\tau _i}{u} \big ) \frac{\tau _i}{u} \Big ) \cdot \partial _k \\ = \,\,&4 \sum \limits _{jk} (\mathbf{x} \cdot \partial _j) (\mathbf{x} \cdot \partial _k) \Big ( \partial _j - (\partial _j \cdot \nu ) \nu \Big ) \cdot \partial _k \\ \ge \,\,&4 \Big ( \sum \limits _j (\mathbf{x} \cdot \partial _j)^2 - \sum \limits _j (\mathbf{x} \cdot \partial _j)^2 \, \sum \limits _j (\partial _j \cdot \nu )^2 \Big ) \\ = \,\,&4 \sum \limits _j (\mathbf{x} \cdot \partial _j)^2 (\nu ^{n+1})^2 = \,4 u^2 |x|^2 (\nu ^{n+1})^2 \ge \, 2\, r^2 u^2 (\nu ^{n+1})^2. \end{aligned} \end{aligned}$$
(6.21)
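The inequality step in (6.21) is an instance of Cauchy-Schwarz: writing \(a_j = \mathbf{x} \cdot \partial _j\) and \(\nu _h\) for the horizontal part of \(\nu \), one has \(\sum _i \rho _i^2 = 4 \big ( |a|^2 - (a \cdot \nu _h)^2 \big ) \ge 4 |a|^2 \big ( 1 - |\nu _h|^2 \big ) = 4 |a|^2 (\nu ^{n+1})^2\). A random numerical spot-check of this step (illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
for _ in range(1000):
    nu = rng.standard_normal(n + 1)
    nu /= np.linalg.norm(nu)           # unit normal nu in R^{n+1}
    a = rng.standard_normal(n)         # stand-ins for a_j = x . d_j, j = 1,...,n
    nu_h = nu[:n]                      # horizontal part of nu
    lhs = 4 * (a @ a - (a @ nu_h)**2)  # equals sum_i rho_i^2 via the frame identity
    rhs = 4 * (a @ a) * nu[n]**2       # the 4|a|^2 (nu^{n+1})^2 lower bound
    assert lhs >= rhs - 1e-9
print("Cauchy-Schwarz step in (6.21) verified on random samples")
```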

Case (ii): suppose \(|\rho _j| > d\) for some \( 2 \le j \le n\), where d is a small positive constant to be determined later.

By (6.8), (6.10) and (6.12), we have

$$\begin{aligned} \begin{aligned} \frac{h_{11j}}{\kappa _1 \,\ln \kappa _1} = - \frac{2 \,\rho _j}{\rho } + \Big ( \beta \frac{ (\mathbf{x} \cdot \nu ) (\tau _j \cdot \partial _{n+1}) - (\mathbf{x} \cdot \tau _j) \, \nu ^{n+1} }{u(\nu ^{n+1})^2} - 2 \alpha \frac{u (\tau _j \cdot \partial _{n+1})}{(\nu ^{n+1})^3} \Big ) \kappa _j. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \frac{h_{11j}^2}{\kappa _1^2 \,(\ln \kappa _1)^2} \ge \, \frac{2 \, \rho _j^2}{\rho ^2} - C (\alpha + \beta )^2 \,\kappa _j^2 \ge \, \frac{ d^2}{\rho ^2} + \frac{4 \, d^2}{r^4} - \frac{ C (\alpha + \beta )^2 }{\kappa _1^2} \ge \, \frac{ d^2}{\rho ^2}\end{aligned}$$

when \(\kappa _1\) is sufficiently large. Consequently, (6.19) reduces to

$$\begin{aligned} - \frac{C \,\sigma _1}{\rho ^2} + \frac{d^2}{20 \,\rho ^2} \, \sigma _2^{jj} \,\ln \kappa _1 \, \le 0. \end{aligned}$$

Since \(\sigma _2^{jj} \ge \frac{9}{10} \,\sigma _1\) when \(\kappa _1\) is sufficiently large, we obtain an upper bound for \(\kappa _1\).

Case (iii): if \(|\rho _j| \le d\) for all \( 2 \le j \le n\), then from (6.21) we deduce that \(|\rho _1| \ge c_0 > 0\). By (6.8), (6.10) and (6.12), we have

$$\begin{aligned} \frac{h_{111}}{\kappa _1\, \ln \kappa _1} = \, \frac{ \beta \kappa _1 \,b_1}{ (\nu ^{n + 1})^2} \, - \frac{2 \,{\rho }_1}{\rho } - \frac{2 \alpha u \kappa _1 (\tau _1 \cdot \partial _{n+1}) }{(\nu ^{n+1})^3}, \end{aligned}$$
(6.22)

where

$$\begin{aligned} \begin{aligned} b_1 = \,\,&(\mathbf{x} \cdot \nu ) \, \Big (\frac{\tau _1}{u} \cdot \partial _{n+1} \Big ) - \Big (\mathbf{x} \cdot \frac{\tau _1}{u} \Big ) \,\nu ^{n+1} \\ = \,\,&\frac{\nu ^{n+1}}{2} \,\rho _1 + \Big ( \frac{\tau _1}{u} \cdot \partial _{n+1} \Big ) \Big ( \mathbf{x} \cdot \big ( \nu - (\nu \cdot \partial _{n+1}) \partial _{n+1} \big ) \Big ) \\ = \,\,&\frac{\nu ^{n+1}}{2} \,\rho _1 + \frac{1}{\nu ^{n+1}} \Big ( \frac{\tau _1}{u} \cdot \partial _{n+1} \Big ) (\nu \cdot \partial _{n+1}) \sum \limits _i (\nu \cdot \partial _{i}) ( \mathbf{x} \cdot \partial _{i} ) \\ = \,\,&\frac{\nu ^{n+1}}{2} \,\rho _1 + \frac{1}{\nu ^{n+1}} \sum \limits _i \Big ( \big ( \frac{\tau _1}{u} \cdot \partial _{n+1} \big ) \partial _{n+1} \Big ) \cdot \Big ( (\partial _i \cdot \nu ) \nu \Big )( \mathbf{x} \cdot \partial _{i} ) \\ = \,\,&\frac{\nu ^{n+1}}{2} \,\rho _1 + \frac{1}{\nu ^{n+1}} \sum \limits _i \Big ( \frac{\tau _1}{u} - \sum \limits _j \big (\frac{\tau _1}{u} \cdot \partial _{j} \big ) \partial _{j} \Big ) \cdot \Big ( \partial _i - \sum \limits _k \big (\partial _i \cdot \frac{\tau _k}{u}\big ) \frac{\tau _k}{u} \Big )( \mathbf{x} \cdot \partial _{i} ) \\ = \,\,&\frac{\nu ^{n+1}}{2} \,\rho _1 + \frac{1}{\nu ^{n+1}} \sum \limits _i \Big ( - \frac{\tau _1}{u} \cdot \partial _i + \sum \limits _{jk} \big (\frac{\tau _1}{u} \cdot \partial _j \big ) \big ( \partial _i \cdot \frac{\tau _k}{u} \big ) \big ( \partial _j \cdot \frac{\tau _k}{u} \big ) \Big )( \mathbf{x} \cdot \partial _{i} ) \\ = \,\,&\frac{\nu ^{n+1}\,\rho _1}{2} + \frac{\rho _1 }{2 \,\nu ^{n+1}} - \frac{1}{2 \,\nu ^{n+1}} \sum \limits _{jk} \big (\frac{\tau _1}{u} \cdot \partial _j \big ) \big ( \partial _j \cdot \frac{\tau _k}{u} \big ) \,\rho _k. \end{aligned}\end{aligned}$$

Note that in the last equality we have applied (6.20). Hence

$$\begin{aligned} |b_1| \ge \frac{\nu ^{n+1}}{2} \,|\rho _1| - \frac{1}{2 \,\nu ^{n+1}} \sum \limits _{k \ne 1} |\rho _k| \,\ge c_1 > 0 \end{aligned}$$

and (6.22) can be estimated as

$$\begin{aligned} \Big | \frac{h_{111}}{\kappa _1 \ln \kappa _1} \Big | \ge \,\frac{\beta c_1 \,\kappa _1 }{2 (\nu ^{n+1})^2} - \frac{C}{\rho } \ge \, \frac{\beta c_1 \,\kappa _1 }{4 (\nu ^{n+1})^2}\end{aligned}$$

when \(\beta \gg \alpha \) and \(\kappa _1 \rho \) is sufficiently large. Substituting this into (6.19) and observing that

$$\begin{aligned} \sigma _2^{11} \kappa _1^2 \ge \frac{9}{10 \,n} \sigma _2 \,\sigma _1 \end{aligned}$$

as \(\kappa _1\) is sufficiently large, we then obtain an upper bound for \(\rho ^2 \ln \kappa _1\).
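The inequality \(\sigma _2^{11} \kappa _1^2 \ge \frac{9}{10n} \sigma _2 \sigma _1\) for dominant \(\kappa _1\) can be illustrated numerically: since \(\sigma _2^{11} = \sigma _1 - \kappa _1\), the left-hand side is asymptotically comparable to \(\sigma _2 \sigma _1\) as \(\kappa _1 \rightarrow \infty \) with the remaining curvatures bounded. A sketch with ad hoc sample values (illustration only, not part of the proof):

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    # k-th elementary symmetric function of lam
    return sum(np.prod(c) for c in combinations(lam, k))

rng = np.random.default_rng(2)
n = 5
for _ in range(100):
    # kappa_1 = 1e3 dominates the remaining (bounded) principal curvatures
    kappa = np.concatenate(([1e3], rng.uniform(0.1, 1.0, size=n - 1)))
    s1, s2 = sigma(1, kappa), sigma(2, kappa)
    sigma2_11 = s1 - kappa[0]          # sigma_2^{11} = d sigma_2 / d kappa_1
    assert sigma2_11 * kappa[0]**2 >= 0.9 / n * s2 * s1
print("sigma2^{11} kappa1^2 >= (9/10n) sigma2 sigma1 for dominant kappa1")
```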