1 Introduction

Convex geometry plays an important role in the development of fully nonlinear partial differential equations. The classical Minkowski problem, and the Christoffel–Minkowski problem in general, are beautiful examples of such interactions (e.g., [1, 3, 5, 10, 19,20,21]). The core of convex geometry is the Brunn–Minkowski theory, in which the Minkowski sum, mixed volumes, and curvature and area measures are fundamental concepts. The notion of the Minkowski sum was extended by Firey [6], who introduced the so-called p-sum (\(p>1\)) of convex bodies. Lutwak [16] further developed a corresponding Brunn–Minkowski–Firey theory based on Firey’s p-sums. Lutwak initiated the study of the Minkowski problem for p-sums and established uniqueness for the problem, along with existence in the even case. The regularity of the solution in the even case was proved subsequently by Lutwak–Oliker [17]. Chou–Wang [4] and Guan–Lin [9] studied this problem from the PDE point of view, and an extensive study was carried out by Lutwak–Yang–Zhang in a series of papers; we refer to [2, 18] for further references in this direction.

This paper concerns the intermediate Christoffel–Minkowski problem related to p-sums, which we call the \(L^p\)-Christoffel–Minkowski problem. While the \(L^p\)-Minkowski problem corresponds to a Monge–Ampère type equation, the \(L^p\)-Christoffel–Minkowski problem corresponds to a fully nonlinear partial differential equation of Hessian type.

For a convex body K in \(\mathbb {R}^{n+1}\), we denote by \(h(K, \cdot )\) its support function. For any \(p\ge 1\), the p-sum of two convex bodies K and L, \(K+_p L\), is defined through its support function,

$$\begin{aligned} h^p(\lambda _1 K+_p\lambda _2 L,\cdot )=\lambda _1 h^p(K, \cdot )+\lambda _2 h^p (L, \cdot ), \quad \lambda _1, \lambda _2\in \mathbb {R}_+. \end{aligned}$$

The mixed p-quermassintegrals for K and L are defined by

$$\begin{aligned} \frac{n+1-k}{p}W_{p,k}(K, L)=\lim _{\epsilon \rightarrow 0}\frac{W_k(K+_p \epsilon L)-W_k(K)}{\epsilon }, \quad p\ge 1, 1\le k\le n. \end{aligned}$$

Here \(W_k(K)\) is the usual quermassintegral for K. It was shown by Lutwak [16] that \(W_{p,k}(K,L)\) has the following integral representation:

$$\begin{aligned} W_{p,k}(K,L)=\frac{1}{n+1}\int _{\mathbb {S}^n} h(L, x)^ph(K, x)^{1-p} dS_{k}(K, x), \end{aligned}$$

where \(dS_{k}(K,\cdot )\) is the k-th surface area measure of K. Thus \(h(K, x)^{1-p} dS_{k}(K, x)\) is the local version of the mixed p-quermassintegral. We call it the k-th p-area measure. When \(p=1\), it reduces to the usual k-th area measure.

If K is a convex body with \(C^2\) boundary and support function h, then

$$\begin{aligned} dS_{k}(K, \cdot )=\sigma _{n-k}(\nabla ^2 h+h g_{\mathbb {S}^n})d\mu _{\mathbb {S}^n}. \end{aligned}$$

Here \(\nabla ^2 h\) is the Hessian on \(\mathbb {S}^n\) and \(\sigma _{n-k}\) is the \((n-k)\)-th elementary symmetric function. Therefore, solving the Minkowski problem for the p-sum is equivalent to solving the following PDE:

$$\begin{aligned} \sigma _{n}(\nabla ^2 u+u g_{\mathbb {S}^n})=u^{p-1}f \hbox { on }\mathbb {S}^n. \end{aligned}$$
(1.1)

After the development of \(L^p\)-Minkowski problem, it is natural to consider the \(L^p\)-Christoffel–Minkowski problem, i.e., the problem of prescribing the k-th p-area measure for general \(1\le k\le n-1\) and \(p\ge 1\). As before, this problem can be reduced to the following nonlinear PDE:

$$\begin{aligned} \sigma _{k}(\nabla ^2 u+u g_{\mathbb {S}^n})=u^{p-1}f\hbox { on }\mathbb {S}^n. \end{aligned}$$
(1.2)

A solution u to (1.2) is called admissible if \((\nabla ^2u+u g_{\mathbb {S}^n})\in \Gamma _k\), and u is called (strictly) spherically convex if \((\nabla ^2u+u g_{\mathbb {S}^n})\ge 0\) \((>0)\). For \(k<n\) and \(p=1\), the above is exactly the equation for the intermediate Christoffel–Minkowski problem of prescribing the k-th area measure. Note that an admissible solution to equation (1.2) is not necessarily a geometric solution to the \(L^p\)-Christoffel–Minkowski problem if \(k<n\). As in the classical Christoffel–Minkowski problem [10], one needs to deal with the convexity of the solutions of (1.2). Under a sufficient condition on the prescribed function, Guan–Ma [10] proved the existence of a unique convex solution. The key tool to handle the convexity is the constant rank theorem for fully nonlinear partial differential equations. Equation (1.2) has been studied by Hu et al. [13] in the case \(p\ge k+1\). In this case, there is a uniform lower bound for solutions if \(f>0\), and they proved the existence of convex solutions to (1.2) under an appropriate sufficient condition. The case \(1<p<k+1\) is different: Eq. (1.2) may be degenerate even for \(f>0\), as there is no uniform lower bound for solutions in general.
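For example, a constant \(u\equiv c>0\) is an admissible (indeed strictly spherically convex) solution of (1.2) exactly when f is constant, since

$$\begin{aligned} \sigma _{k}(\nabla ^2 c+c g_{\mathbb {S}^n})=\left( {\begin{array}{c}n\\ k\end{array}}\right) c^{k}=c^{p-1}f \quad \Longleftrightarrow \quad f\equiv \left( {\begin{array}{c}n\\ k\end{array}}\right) c^{k+1-p}. \end{aligned}$$

In particular, \(u\equiv 1\) solves (1.2) with \(f\equiv \left( {\begin{array}{c}n\\ k\end{array}}\right) \); this is the starting point of the continuity method in Section 4.3.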

The focus of this paper is to address two questions regarding equation (1.2) when \(1<p<k+1\).

  1. When does there exist a smooth convex solution?

  2. What is the regularity of general admissible solutions of Eq. (1.2)?

Our first result is the following.

Theorem 1.1

Let \(1\le k\le n-1\) be an integer and \(1<p<k+1\) be a real number. For any positive even function \(f\in C^l({\mathbb S}^n)\) \((l\ge 2)\) satisfying

$$\begin{aligned} (\nabla ^2 f^{-\frac{1}{k+p-1}}+f^{-\frac{1}{k+p-1}} g_{\mathbb {S}^n})\ge 0, \end{aligned}$$
(1.3)

there is a unique even, strictly spherically convex solution u of Eq. (1.2). Moreover, for each \(\alpha \in (0,1)\), there is a constant C, depending on \(n, k, p, l, \alpha , \min f\) and \(\Vert f\Vert _{C^l({\mathbb S}^n)}\), such that

$$\begin{aligned} \Vert u\Vert _{C^{l+1,\alpha }({\mathbb S}^n)}\le C. \end{aligned}$$
(1.4)

An immediate consequence of the previous theorem is the following existence result for the \(L^p\) Christoffel–Minkowski problem for the case \(1<p< k+1\).

Corollary 1.1

Let \(1\le k\le n-1\) be an integer and \(1<p<k+1\) be a real number. For any positive even function \(f\in C^l({\mathbb S}^n)\) \((l\ge 2)\) satisfying (1.3), there is a unique closed strictly convex hypersurface M in \(\mathbb {R}^{n+1}\) of class \(C^{l+1,\alpha }\) (for all \(0<\alpha <1\)) such that the \((n-k)\)-th p-area measure of M is \(fd\mu _{{\mathbb S}^n}\).

This is an analogue of the result of Lutwak–Oliker [17]. We use the method of continuity to prove Theorem 1.1. The strict convexity is preserved along the continuity method by the constant rank theorem, as in [10, 13]. Unlike the case \(p\ge k+1\), a uniform lower bound for u does not hold in general if \(p<k+1\). The crucial step is to show a uniform positive lower bound for u under the evenness assumption. In contrast to the \(L^p\)-Minkowski problem [17], the evenness assumption does not directly yield the lower bound of u when \(k<n\), as we do not have direct control of the volume of the associated convex body. The most technical part of this paper is to obtain a refined gradient estimate (Proposition 3.1) and to use it to prove Proposition 4.1 under the assumptions of evenness and spherical convexity of u. One may ask whether condition (1.3) alone guarantees the positivity of u. We will exhibit examples in Section 5 indicating that condition (1.3) is not sufficient (see Proposition 5.1).

As in the case of the \(L^p\)-Minkowski problem [9], one has a \(C^2\) estimate if \(p\ge \frac{k+1}{2}\).

Theorem 1.2

Let \(1\le k\le n-1\) be an integer and \(\frac{k+1}{2}\le p<k+1\) be a real number. For any positive function \(f\in C^2({\mathbb S}^n)\) there exists a solution u to (1.2) with \((\nabla ^2 u+ug_{{\mathbb S}^n})\in {{\bar{\Gamma }}}_k\). Moreover,

$$\begin{aligned} \Vert u\Vert _{C^{1,1}({\mathbb S}^n)}\le C. \end{aligned}$$

where C depends on \(n, k, p, \Vert f\Vert _{C^2({\mathbb S}^n)}\) and \(\min _{{\mathbb S}^n} f\). Furthermore, the solution is \(C^2\) (i.e., \(\nabla ^2 u\) is continuous) if \(p>\frac{k+1}{2}\).

From the next section on, the range of p is \(1<p<k+1\) unless otherwise specified.

2 Preliminaries

We recall some basic notation.

Let \(\sigma _k(A)\) be the k-th elementary symmetric function defined on the set \({\mathcal {M}}_{n}\) of symmetric \(n\times n\) matrices and \(\sigma _k(A_1, \ldots , A_k)\) be the complete polarization of \(\sigma _k\) for \(A_i\in {\mathcal {M}}_{n}, i=1,\ldots , k\), i.e.

$$\begin{aligned} \sigma _k(A_1, \ldots , A_k)=\frac{1}{k!}\sum _{\begin{array}{c} i_1,\ldots i_{k}=1,\\ j_1,\ldots ,j_{k}=1 \end{array}}^n\delta _{j_1\ldots j_k}^{i_1\ldots i_k}A_{1_{i_1j_1}}\ldots A_{k_{i_kj_k}}. \end{aligned}$$

Let \(\Gamma _k\) be Gårding’s cone

$$\begin{aligned} \Gamma _k=\{A\in {\mathcal {M}}_{n}: \sigma _i(A)>0 \hbox { for }i=1,\ldots , k \}. \end{aligned}$$
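For instance, \(\Gamma _1\) is the half-space of matrices with positive trace and \(\Gamma _n\) is the cone of positive definite matrices; in general these cones are nested:

$$\begin{aligned} \Gamma _n=\{A\in {\mathcal {M}}_{n}: A>0 \}\subset \Gamma _{n-1}\subset \cdots \subset \Gamma _1=\{A\in {\mathcal {M}}_{n}: \sigma _1(A)>0 \}. \end{aligned}$$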

Let \(({\mathbb S}^n, g_{\mathbb {S}^n})\) be the unit round n-sphere and \(\nabla \) be the covariant derivative on \({\mathbb S}^n\). For a function \(u\in C^2(\mathbb {S}^n)\), we denote by \(W_u\) the matrix

$$\begin{aligned} W_u:=\nabla ^2 u+u g_{\mathbb {S}^n}. \end{aligned}$$

In the case \(W_u\) is positive definite, the eigenvalues of \(W_u\) are the principal radii of a strictly convex hypersurface with support function u.

Let \(u^i \in C^2(\mathbb {S}^n)\), \(i=1,\ldots , n+1.\) Set

$$\begin{aligned} V(u^1, u^2, \ldots , u^{n+1}):= & {} \int _{{\mathbb S}^n} u^1\sigma _n( W_{u^2}, \ldots , W_{u^{n+1}})d\mu _{{\mathbb S}^n},\\ V_{k+1}(u^1, u^2, \ldots , u^{k+1}):= & {} V(u^1, u^2, \ldots , u^{k+1}, 1, \ldots , 1). \end{aligned}$$

We collect the following properties which have been proved in [12].

Lemma 2.1

([12]) (1) \(V_{k}(u^1, u^2, \ldots , u^{k})\) is a symmetric multilinear form on \((C^2({\mathbb S}^n))^{k}\). In particular,

$$\begin{aligned} V_{k+1}(\underbrace{u, \ldots , u}_{k+1})=V_{k+2}(1, \underbrace{u, \ldots , u}_{k+1}). \end{aligned}$$

Therefore, Minkowski’s integral formula holds:

$$\begin{aligned} \int _{{\mathbb S}^n} u \sigma _k(W_u) d\mu _{{\mathbb S}^n}=\frac{k+1}{n-k}\int _{{\mathbb S}^n} \sigma _{k+1}(W_u) d\mu _{{\mathbb S}^n}. \end{aligned}$$
(2.1)

(2) Let \(u^i\in C^2({\mathbb S}^n)\), \(i=1, 2, \ldots ,k\), be such that \(u^i>0\) and \(W_{u^i}\in \Gamma _k\) for \(i=1,2,\ldots , k\). Then for any \(v\in C^2({\mathbb S}^n)\), the Alexandrov–Fenchel inequality holds:

$$\begin{aligned} V_{k+1}(v, u^1,\ldots , u^k)^2\ge V_{k+1}(v, v, u^2, \ldots , u^k)V_{k+1}(u^1, u^1, u^2\ldots , u^k), \end{aligned}$$
(2.2)

where equality holds if and only if \(v = au^1+\sum _{l=1}^{n+1} a_lx_l\) for some constants \(a, a_1, \ldots , a_{n+1}\). In particular, there is a sharp constant \(C_{n,k}\) such that

$$\begin{aligned} \left( \int _{{\mathbb S}^n} \sigma _{k+1}(W_u) d\mu _{{\mathbb S}^n}\right) ^{\frac{1}{k+1}}\le C_{n,k}\left( \int _{{\mathbb S}^n} \sigma _{k}(W_u) d\mu _{{\mathbb S}^n}\right) ^{\frac{1}{k}}. \end{aligned}$$
(2.3)

Inequality (2.3) in Lemma 2.1 follows from Alexandrov–Fenchel’s inequality (2.2) and Minkowski’s formula (2.1) via an iteration argument.
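As a simple consistency check of (2.1), take \(u\equiv 1\), so that \(W_u=g_{\mathbb {S}^n}\); then both sides of (2.1) equal \(\left( {\begin{array}{c}n\\ k\end{array}}\right) |{\mathbb S}^n|\), since

$$\begin{aligned} \int _{{\mathbb S}^n} \sigma _k(g_{\mathbb {S}^n}) d\mu _{{\mathbb S}^n}=\left( {\begin{array}{c}n\\ k\end{array}}\right) |{\mathbb S}^n| =\frac{k+1}{n-k}\left( {\begin{array}{c}n\\ k+1\end{array}}\right) |{\mathbb S}^n|=\frac{k+1}{n-k}\int _{{\mathbb S}^n} \sigma _{k+1}(g_{\mathbb {S}^n}) d\mu _{{\mathbb S}^n}. \end{aligned}$$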

We remark that in Lemma 2.1 (2), it is sufficient to assume \(W_{u^i}\in \Gamma _k\) instead of requiring that \(W_{u^i}\) be positive definite, which is the classical assumption from convex geometry.

We list some other known results which will be used in the remaining sections.

The following theorem, a generalization of the constant rank theorem in [10], was proved for (1.2) by Hu–Ma–Shen in [13].

Theorem 2.1

( [13]) Let \(p>1\) and let u be a positive solution to (1.2) such that \(W_u\) is positive semi-definite. If \(f^{-\frac{1}{p-1+k}}\) is spherically convex, then \(W_u\) is positive definite.

The following lemma is a special case of Lemma 1 in [8]; we state it for \(W_u\in C^{1}(\mathbb S^n)\).

Lemma 2.2

Let \(e_1, \ldots , e_n\) be a local orthonormal frame on \(\mathbb {S}^n\) and denote \(\nabla _s=\nabla _{e_s}\) for \(s=1,\ldots , n\). Then for any \(W=W_u\in \Gamma _k\cap C^{1}(\mathbb S^n)\) with \(k\ge 2\),

$$\begin{aligned}&-\sigma _k^{ij,lm}\nabla _s W_{ij}\nabla _s W_{lm}\ge \sigma _k\left[ \frac{\nabla _s \sigma _k}{\sigma _k}-\frac{\nabla _s \sigma _1}{\sigma _1}\right] \nonumber \\&\quad \left[ \left( \frac{1}{k-1}-1\right) \frac{\nabla _s \sigma _k}{\sigma _k}-\left( \frac{1}{k-1}+1\right) \frac{\nabla _s \sigma _1}{\sigma _1}\right] . \end{aligned}$$
(2.4)

3 A priori estimate for admissible solutions

In this section we establish \(C^1\) a priori estimates for the admissible solutions of (1.2).

3.1 A gradient estimate

Proposition 3.1

Let u be a positive admissible solution to (1.2). Set \(m_u=\min u\) and \(M_u=\max u\). Then there exist some positive constants A and \(0<\gamma <1\), depending on \(n, k, p, \min f\) and \(\Vert f\Vert _{C^1}\), such that

$$\begin{aligned} \frac{|\nabla u|^2}{|u-m_u|^\gamma }\le A M_u^{2-\gamma }. \end{aligned}$$

Proof

Let \(\Phi =\frac{|\nabla u|^2}{(u-m_u)^\gamma }\), where \(0<\gamma <1\) is to be determined. First we claim that \(\Phi \) is well defined; in other words, \(\Phi \) can be defined at the minimum points of u. Consider \(\Phi _\epsilon =\frac{|\nabla u|^2}{(u-m_u+\epsilon )^\gamma }\) for \(\epsilon >0\). Then at a maximum point of \(\Phi _\epsilon \), we have

$$\begin{aligned} (\nabla ^2 u+u I) \nabla u=\left( \frac{\gamma }{2} \frac{|\nabla u|^2}{u-m_u+\epsilon }+u\right) \nabla u. \end{aligned}$$

Hence

$$\begin{aligned} \Phi _\epsilon \le \frac{2}{\gamma }(u-m_u+\epsilon )^{1-\gamma }\max _{{\mathbb S}^n} \lambda _{max}(W_u), \end{aligned}$$

where \(\lambda _{max}(W_u)\) is the largest eigenvalue of \(W_u\). Thus, when \(\gamma <1\), we have \(\Phi _\epsilon (y)\rightarrow 0\) as \(\epsilon \rightarrow 0\) whenever \(u(y)=m_u\). Therefore, it makes sense to define \(\Phi =0\) at the minimum points of u.

Assume \(\Phi \) attains its maximum at \(x_0\). Then \(u(x_0)>m_u\). By choosing an orthonormal frame and rotating the coordinates, we may assume \(g_{ij}(x_0)=\delta _{ij}\), \(u_1(x_0)=|\nabla u|(x_0)\) and \(u_i(x_0)=0\) for \(i=2,\ldots , n\). In the following we compute at \(x_0\). By the critical condition,

$$\begin{aligned} \frac{2u_lu_{li}}{|\nabla u|^2}=\gamma \frac{u_i}{u-m_u}\hbox { for each }i. \end{aligned}$$

Thus \(u_{1i}=0\) for \(i=2,\ldots , n\) and

$$\begin{aligned} u_{11}=\frac{\gamma }{2}\frac{u_1^2}{u-m_u}. \end{aligned}$$
(3.1)

By rotating the remaining \(n-1\) coordinates, we can assume \((u_{ij})\) is diagonal. Consequently, \(F^{ij}=\frac{\partial \sigma _k}{\partial W_{ij}}\) is also diagonal.

We may assume \(\Phi \ge AM_u^{2-\gamma }\), where A is a large constant to be determined. Then

$$\begin{aligned} u_{11}=\frac{\gamma }{2}\frac{u_1^2}{u-m_u}\ge \frac{\gamma }{2}\frac{AM_u^{2-\gamma }}{(u-m_u)^{1-\gamma }}\ge \frac{\gamma }{2}AM_u. \end{aligned}$$
(3.2)

Since \(u\le M_u\), for \(\delta >0\), we may choose A with \(A>> \frac{2}{\gamma }\) such that

$$\begin{aligned} W_{11}=u_{11}+u\le (1+\delta )u_{11}. \end{aligned}$$
(3.3)

As \(W_{ii}\ge u_{ii}\), by the maximal condition and (3.1),

$$\begin{aligned} 0\ge & {} F^{ii}(\log \Phi )_{ii}\nonumber \\= & {} F^{ii}\frac{2u_{ii}^2+2u_lu_{lii}}{|\nabla u|^2}-\gamma \frac{F^{ii}u_{ii}}{u-m_u}+\gamma (1-\gamma )\frac{F^{ii}u_i^2}{(u-m_u)^2}\nonumber \\= & {} \frac{2F^{ii}u_{ii}^2}{u_1^2}+\frac{2F^{ii}u_1(W_{ii1}-u_i\delta _{1i})}{u_1^2}\nonumber \\&\quad -\gamma \frac{F^{ii}u_{ii}}{u-m_u}+\gamma (1-\gamma )\frac{F^{ii}u_i^2}{(u-m_u)^2}\nonumber \\= & {} \frac{2F^{ii}u_{ii}^2}{u_1^2}+2(p-1)u^{p-2}f+\frac{ 2u^{p-1} f_1}{u_1}-2F^{11}\nonumber \\&\quad -\gamma \frac{F^{ii}u_{ii}}{u-m_u}+\gamma (1-\gamma )\frac{F^{ii}u_i^2}{(u-m_u)^2}\nonumber \\\ge & {} \frac{2F^{ii}u_{ii}^2}{u_1^2}+ \gamma (1-\gamma )\frac{F^{11}u_1^2}{(u-m_u)^2}+\frac{ 2u^{p-1} f_1}{u_1} -2F^{11}-\gamma \frac{F^{ii}W_{ii}}{u-m_u}\nonumber \\= & {} \frac{2F^{ii}u_{ii}^2}{u_1^2}+2(1-\gamma )\frac{F^{11}u_{11}}{u-m_u}+\frac{ 2u^{p-1} f_1}{u_1} -2F^{11}-k\gamma \frac{\sigma _k(W)}{u-m_u}\nonumber \\= & {} \sum _{i\ne 1}\frac{2F^{ii}u_{ii}^2}{u_1^2}+ 2(1-\gamma )\frac{F^{11}u_{11}}{u-m_u}+\frac{ 2u^{p-1} f_1}{u_1}\nonumber \\&\quad +2F^{11}\left( \frac{u_{11}^2}{u_1^2}-1\right) -k\gamma \frac{\sigma _k(W)}{u-m_u}. \end{aligned}$$
(3.4)

By (3.1) and (3.2), if \(A\ge \frac{4}{\gamma ^2}\),

$$\begin{aligned} \frac{u_{11}^2}{u_1^2}-1\ge \frac{\gamma ^2}{4} A\frac{M_u}{u-m_u}-1\ge 0. \end{aligned}$$
(3.5)

Using the definition of \(\Phi \), we have

$$\begin{aligned} -\frac{ 2u^{p-1} f_1}{u_1}\ge & {} -Cu^{p-1}\Phi ^{-\frac{1}{2}}(u-m_u)^{-\frac{\gamma }{2}} \\\ge & {} -\frac{C}{\sqrt{A}}M^{-1+\frac{\gamma }{2}}u^{p-1}(u-m_u)^{-\frac{\gamma }{2}}\\\ge & {} -\frac{C}{\sqrt{A}}u^{p-1}(u-m_u)^{-1}\\\ge & {} -\frac{C}{\sqrt{A}}\frac{\sigma _k(W)}{u-m_u}. \end{aligned}$$

For \(N>1\) to be determined later, denote

$$\begin{aligned} K= \{i: u_{ii}> Nu_{11}\}. \end{aligned}$$

When A is large enough, by (3.3),

$$\begin{aligned} u_{ii}=W_{ii}-u\ge W_{ii}-\delta u_{11}\ge W_{ii}-\delta u_{ii}, \quad \forall i\in K. \end{aligned}$$

Hence

$$\begin{aligned} \sum _{i\in K}\frac{2F^{ii}u_{ii}^2}{u_1^2}\ge & {} \sum _{i\in K}\frac{2NF^{ii}u_{ii}u_{11}}{u_1^2}=N\gamma \sum _{i\in K}\frac{F^{ii}u_{ii}}{u-m_u}\nonumber \\\ge & {} \frac{N\gamma }{1+\delta } \sum _{i\in K}\frac{F^{ii}W_{ii}}{u-m_u}. \end{aligned}$$
(3.6)

Combining (3.1)–(3.6)

$$\begin{aligned} 0\ge & {} \frac{N\gamma }{1+\delta } \sum _{i\in K}\frac{F^{ii}W_{ii}}{u-m_u}+ \frac{ 2(1-\gamma )}{1+\delta }\frac{F^{11}W_{11}}{u-m_u}\nonumber \\&\quad -\frac{C}{\sqrt{A}}\frac{\sigma _k(W)}{u-m_u}-k\gamma \frac{\sigma _k(W)}{u-m_u}. \end{aligned}$$
(3.7)

Let’s denote \(W_{mm}=\max \{W_{ii}| i=1,\ldots , n\}\). We have

$$\begin{aligned} \sigma _k(W)=\sigma _{k-1}(W | m)W_{mm}+ \sigma _{k}(W | m). \end{aligned}$$

If \(\sigma _k(W|m)\le 0\), then

$$\begin{aligned} \sigma _{k-1}(W | m)W_{mm}\ge \sigma _k(W). \end{aligned}$$

Let’s assume \(\sigma _k(W|m)> 0\), that implies \((W|m)\in \Gamma _k\). In turn, \(\sigma _{k-1}(W|mi)>0, \forall i\ne m\) and

$$\begin{aligned} k\sigma _k(W|m)=\sum _{i\ne m} W_{ii}\sigma _{k-1}(W|mi)\le W_{mm}\sum _{i\ne m}\sigma _{k-1}(W|mi)= (n-k)W_{mm}\sigma _{k-1}(W|m). \end{aligned}$$

Combining the above inequalities, we have

$$\begin{aligned} \sigma _{k-1}(W | m)W_{mm}\ge \frac{k}{n}\sigma _k(W). \end{aligned}$$
(3.8)

If \(K\ne \emptyset \), then \(m\in K\), and

$$\begin{aligned} \sum _{i\in K}\frac{F^{ii}W_{ii}}{u-m_u}\ge \frac{F^{mm}W_{mm}}{u-m_u}\ge \frac{k}{n} \frac{\sigma _k(W)}{u-m_u}. \end{aligned}$$
(3.9)

If \(K= \emptyset \), then \(0\le W_{11}\le W_{mm}\le N W_{11}\), as \(F^{11}\ge F^{mm}\),

$$\begin{aligned} \frac{F^{11}W_{11}}{u-m_u}\ge \frac{1}{N}\frac{F^{mm}W_{mm}}{u-m_u}\ge \frac{k}{Nn}\frac{\sigma _k(W)}{u-m_u}. \end{aligned}$$
(3.10)

Combining (3.7), (3.9) and (3.10),

$$\begin{aligned} 0\ge & {} \left[ \min \left\{ \frac{Nk\gamma }{n(1+\delta )}, \frac{ 2k(1-\gamma )}{Nn(1+\delta )}\right\} -\frac{C}{\sqrt{A}}-k\gamma \right] \frac{\sigma _k(W)}{u-m_u}>0, \end{aligned}$$
(3.11)

if we pick \(N=n(1+2\delta )\), \(\gamma =\frac{2}{N^2+2}\), and A sufficiently large (for any fixed \(\delta >0\), e.g., \(\delta =\frac{1}{10^{10}}\)). This is a contradiction. Thus for our choice of \(\gamma \) and A, we must have \(\Phi \le AM_u^{2-\gamma }\) at its maximum. \(\square \)
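Let us record why this choice of N and \(\gamma \) closes the argument: with \(N=n(1+2\delta )\) and \(\gamma =\frac{2}{N^2+2}\), the two terms in the minimum in (3.11) coincide,

$$\begin{aligned} \frac{Nk\gamma }{n(1+\delta )}=\frac{2k(1-\gamma )}{Nn(1+\delta )}=\frac{2kN}{n(1+\delta )(N^2+2)}=\frac{1+2\delta }{1+\delta }\,k\gamma , \end{aligned}$$

so the bracket in (3.11) is at least \(\frac{\delta }{1+\delta }k\gamma -\frac{C}{\sqrt{A}}\), which is positive once A is chosen sufficiently large.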

When \(k=n\), a similar result was proved in [14, 15], where the upper bound of u was readily available. We note that the proof of Proposition 3.1 also works for a certain range of \(p<1\).

3.2 Upper bound of u

We now use the raw \(C^1\) estimate in Proposition 3.1 to obtain an upper bound for u.

Proposition 3.2

Let u be a positive admissible solution to (1.2). Then there exist some positive constants \(c_0\) and \(C_0\), depending on \(n, k, p, \min f\) and \(\int _{{\mathbb S}^n}f\), such that

$$\begin{aligned} 0<c_0\le \max u\le C_0. \end{aligned}$$

Proof

Let \(x_0\) be a maximum point of u. Then \(\nabla ^2 u(x_0)\le 0\). It follows that

$$\begin{aligned} \left( {\begin{array}{c}n\\ k\end{array}}\right) u^k(x_0)\ge \sigma _k(W_u)(x_0)=u^{p-1}(x_0) f(x_0), \end{aligned}$$

and in turn we have

$$\begin{aligned} \max _{{\mathbb S}^n} u=u(x_0)\ge \left( \frac{\min _{{\mathbb S}^n} f}{\left( {\begin{array}{c}n\\ k\end{array}}\right) }\right) ^{\frac{1}{k-p+1}}. \end{aligned}$$
(3.12)

From Proposition 3.1, we know that \(|\nabla u|^2(x)\le AM_u^2=Au(x_0)^2\) for any \(x\in {\mathbb S}^n\). Hence \(|u(x)-u(x_0)|\le \sqrt{A}\,u(x_0)\, dist(x,x_0)\), and we have

$$\begin{aligned} u(x)\ge \frac{1}{2} u(x_0) \hbox { if } dist(x,x_0)\le \frac{1}{2\sqrt{A}}. \end{aligned}$$
(3.13)

Thus

$$\begin{aligned} \int _{{\mathbb S}^n} u^{p}f\ge & {} \int _{\{x\in {\mathbb S}^n: dist(x,x_0)\le \frac{1}{2\sqrt{A}}\}} u^{p}f\nonumber \\\ge & {} \frac{1}{2^{p}}u^{p}(x_0)\min _{{\mathbb S}^n} f \left| \{x\in {\mathbb S}^n: dist(x,x_0)\le \frac{1}{2\sqrt{A}}\}\right| . \end{aligned}$$
(3.14)

On the other hand, using Minkowski’s integral formula (2.1), Alexandrov–Fenchel’s inequality (2.3), Hölder’s inequality and (1.2), we have

$$\begin{aligned} \int _{{\mathbb S}^n} u^{p}f= & {} \int _{{\mathbb S}^n} u\sigma _k(W_u)=\frac{k+1}{n-k}\int _{{\mathbb S}^n} \sigma _{k+1}(W_u)\nonumber \\\le & {} C\left( \int _{{\mathbb S}^n} \sigma _{k}(W_u)\right) ^{\frac{k+1}{k}}=C\left( \int _{{\mathbb S}^n} u^{p-1} f\right) ^{\frac{k+1}{k}}\nonumber \\\le & {} C\left( \int _{{\mathbb S}^n} u^{p}f\right) ^{\frac{p-1}{p}\frac{k+1}{k}}\left( \int _{{\mathbb S}^n} f\right) ^{\frac{1}{p}\frac{k+1}{k}}. \end{aligned}$$
(3.15)

Since \(p<k+1\), it follows from (3.15) that

$$\begin{aligned}&\int _{{\mathbb S}^n} u^{p}f\le C\left( \int _{{\mathbb S}^n} f\right) ^{\frac{k+1}{k-p+1}}. \end{aligned}$$
(3.16)

Combining (3.14) and (3.16), we obtain \(u\le u(x_0)\le C\). \(\square \)

Combining Propositions 3.1 and 3.2, we get the full \(C^1\) estimate.

Proposition 3.3

Let u be an admissible solution to (1.2). Set \(m_u=\min u\). Then there exist some positive constants \(0<\gamma <1\) and C, depending on \(n, k, p, \min f\) and \(\Vert f\Vert _{C^1}\), such that

$$\begin{aligned} \frac{|\nabla u|^2}{|u-m_u|^\gamma }\le C. \end{aligned}$$

4 Convex solutions

So far, we have been dealing with general admissible solutions of equation (1.2). In order to solve the \(L^p\)-Christoffel–Minkowski problem, we need to establish the existence of convex solutions, i.e., solutions to (1.2) with \(W_u\ge 0\). As in the case of the classical Christoffel–Minkowski problem [10], one needs some sufficient conditions on the prescribed function f in equation (1.2) when \(k<n\). Unlike the classical Christoffel–Minkowski problem, Eq. (1.2) may degenerate when \(p>1\) in general. We first derive a lower bound for convex solutions.

4.1 Lower bound for u

To get a uniform positive lower bound, we need to impose the evenness assumption together with \(W_u\ge 0\). We remark that such an estimate is straightforward when \(k=n\), since the equation implies a positive lower bound on the volume. For \(k<n\), a lower bound on the quermassintegral \(V_{k+1}\) does not guarantee the non-degeneracy of the convex body, so some extra effort is needed.

Proposition 4.1

Let u be a positive, even, spherically convex solution to (1.2). Then there exists some positive constant C, such that

$$\begin{aligned} u\ge C>0. \end{aligned}$$

Proof

Since u is even, we may assume without loss of generality that \(u(x_1)=\max u=: M\), \(u(x_2)=\min u\) and \(dist(x_1,x_2)=: 2d\le \frac{\pi }{2}\) (replacing \(x_2\) by its antipode if necessary, which is also a minimum point by evenness). So \(d\le \frac{\pi }{4}\). Let \(\gamma : [-d,d]\rightarrow {\mathbb S}^n\) be the arc-length parametrized geodesic such that \(\gamma (-d)=x_1\) and \(\gamma (d)=x_2\). Let \(u: [-d, d]\rightarrow \mathbb {R}\) be the function \(u(t)=u(\gamma (t))\) and denote

$$\begin{aligned} u''(t)+u(t)=g(t). \end{aligned}$$
(4.1)

It follows from the critical condition of u at \(x_1\) and \(x_2\) that

$$\begin{aligned} u'(-d)=u'(d)=0, \quad u(-d)=M. \end{aligned}$$
(4.2)

Let us examine the boundary value problem for the ODE given by (4.1) and (4.2). It is easy to check that \(A\cos t+B\sin t\) is the general solution to the homogeneous ODE

$$\begin{aligned} u''+u=0 \end{aligned}$$

and a particular solution to (4.1) is given by \(\cos t\int _{-d}^t \frac{1}{\cos ^2 \tau }\int _{-d}^\tau g(s)\cos s ds d\tau \), which can be obtained by writing \(u=\cos (t)v(t)\) and solving a first order ODE for \(v^{'}\). Also, by Fubini’s theorem, one sees that this particular solution equals \(\int _{-d}^t \sin (t-s)g(s)ds\). Combining with the boundary condition (4.2), we see that the solution to (4.1) and (4.2) is

$$\begin{aligned} u(t)=\cos t\int _{-d}^t \frac{1}{\cos ^2 \tau }\int _{-d}^\tau g(s)\cos s ds d\tau +M\cos (d+t). \end{aligned}$$
(4.3)
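As a quick check, set

$$\begin{aligned} v(t):=\int _{-d}^t \sin (t-s)g(s)ds,\quad v'(t)=\int _{-d}^t \cos (t-s)g(s)ds,\quad v''(t)=g(t)-v(t), \end{aligned}$$

with \(v(-d)=v'(-d)=0\); since \(M\cos (d+t)\) solves the homogeneous equation with value M and vanishing derivative at \(t=-d\), the right-hand side of (4.3) indeed satisfies (4.1) together with \(u(-d)=M\) and \(u'(-d)=0\).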

For simplicity, we denote \(G(\tau )=\int _{-d}^\tau g(s)\cos s ds\). It follows from (4.3) that

$$\begin{aligned} u(d)=\cos d \int _{-d}^d \frac{G(\tau )}{\cos ^2 \tau }d\tau +M\cos 2d. \end{aligned}$$
(4.4)

Our aim is to derive a positive lower bound of \(\min u=u(d).\) We divide the proof into two cases.

Case 1. \(2d\le \frac{\pi }{4}\).

Note that \(g(t)\ge 0\) as \(W_u\ge 0\), hence \(G(\tau )\ge 0\) for all \(\tau \in [-d,d]\). One sees from (4.4) that \(u(d)\ge \frac{\sqrt{2}}{2}M\).

Case 2. \(2d\ge \frac{\pi }{4}\).

From the definition of \(G(\tau )\), by performing integration by parts, we have

$$\begin{aligned} G(\tau )= & {} \int _{-d}^\tau g(s)\cos s ds \nonumber \\= & {} \int _{-d}^\tau (u^{''}(s)+u(s))\cos s ds\nonumber \\= & {} u'(s)\cos s|_{-d}^\tau +\int _{-d}^\tau u'(s)\sin s+u(s)\cos s ds\nonumber \\= & {} u'(\tau )\cos \tau + u(s)\sin s|_{-d}^\tau \nonumber \\= & {} u'(\tau )\cos \tau + u(\tau )\sin \tau -M\sin (-d), \end{aligned}$$
(4.5)

where the facts \(u{'}(-d)=0\) and \(u(-d)=M\) are used. In particular, as \(u{'}(d)=0\),

$$\begin{aligned} G(d)=(u(d)+M)\sin d. \end{aligned}$$

Since \(\sin d\ge \sin \frac{\pi }{8}\) and \(u\ge 0\), we see

$$\begin{aligned} G(d)\ge M\sin \frac{\pi }{8}>0. \end{aligned}$$
(4.6)

By Proposition 3.3, we have, for \(\tau \in [-d, d]\),

$$\begin{aligned} |u'(\tau )|\le C(u(\tau )-u(d))^{\frac{\gamma }{2}}\le C\max _{{\mathbb S}^n} |\nabla u|^{\frac{\gamma }{2}} |\tau -d|^{\frac{\gamma }{2}} \le \tilde{C}|\tau -d|^{\frac{\gamma }{2}}. \end{aligned}$$

Therefore, in view of (4.5), \(G(\tau )\) is continuous as a function of \(\tau \), and

$$\begin{aligned} G(\tau )\ge G(d)-C^{*}|\tau -d|^{\frac{\gamma }{2}}, \quad \forall \tau \in [-d, d], \end{aligned}$$
(4.7)

for some constant \(C^{*}>0\) under control. Take \(\delta =\min \{d, (\frac{G(d)}{2C^{*}})^{\frac{2}{\gamma }}\}\). It follows from (4.3), (4.6), (4.7) and \(d\in [0,\frac{\pi }{4}]\) that

$$\begin{aligned} u(d)= & {} \cos d \int _{-d}^d \frac{G(\tau )}{\cos ^2 \tau }d\tau +M\cos 2d\\\ge & {} \cos d \int _{d-\delta }^d G(\tau ) d\tau \ge \frac{\sqrt{2}}{2}\cdot \frac{1}{2}G(d)\delta . \end{aligned}$$

\(\square \)

4.2 Higher regularity

Proposition 4.2

Let u be a positive, even, spherically convex solution to (1.2). For any integer \(l\ge 2\) and \(0<\alpha <1\), there exists some positive constant C, depending on \(n, k, p, l, \min f\) and \(\Vert f\Vert _{C^l}\), such that (1.4) holds.

Proof of Proposition 4.2

From Propositions 3.2 and 4.1, we see that u is bounded from above and below by uniform positive constants. When \(k=1\), as we already have \(C^1\) bounds for u, higher regularity follows from the theory of linear elliptic PDEs. We may therefore assume \(k\ge 2\). Let

$$\begin{aligned} {\tilde{F}}(W_u):=\sigma _k^{\frac{1}{k}}(W_u)= (u^{p-1} f)^{\frac{1}{k}}. \end{aligned}$$

Differentiating the equation twice, we have

$$\begin{aligned} \Delta (u^{p-1} f)^{\frac{1}{k}}= & {} {\tilde{F}}^{ii}W_{iiss}+{\tilde{F}}^{ij,lm}W_{ijs}W_{lms}\nonumber \\ {}= & {} {\tilde{F}}^{ii}(W_{ssii}-W_{ss}+nW_{ii})+{\tilde{F}}^{ij,lm}W_{ijs}W_{lms}\nonumber \\\le & {} {\tilde{F}}^{ii}(\sigma _1)_{ii}-\sum _i {\tilde{F}}^{ii}\sigma _1+n\sigma _k^{\frac{1}{k}}\nonumber \\= & {} {\tilde{F}}^{ii}(\sigma _1)_{ii}-\sum _i {\tilde{F}}^{ii}\sigma _1+n (u^{p-1} f)^{\frac{1}{k}}. \end{aligned}$$
(4.8)

where we used the concavity of \({\tilde{F}}\).

Note that \(|\Delta (u^{p-1} f)^{\frac{1}{k}}|\le C\sigma _1\) and \(\sum _i {\tilde{F}}^{ii}=\frac{n-k+1}{k}\sigma _k^{\frac{1-k}{k}}\sigma _{k-1}\ge C_{n,k}\sigma _k^{-\frac{1}{k(k-1)}}\sigma _1^{\frac{1}{k-1}}\). Applying the maximum principle to (4.8), we see that \(\sigma _1\le C\). Thus \(\Vert u\Vert _{C^2}\le C\). Since \(W_u\ge 0\), we see that the equation is uniformly elliptic. Our assertion now follows from the standard Evans–Krylov and Schauder estimates. \(\square \)

Remark 4.1

The conditions that u is even and \(W_u\ge 0\) have only been used in Proposition 4.1.

4.3 Existence

In the following we use the continuity method to prove the existence and uniqueness of strictly convex solutions.

Proof of Theorem 1.1.

We first show that the solution is unique. The uniqueness of strictly spherically convex solutions was shown by Lutwak [16]. For the convenience of the reader, we give a proof of the uniqueness of admissible solutions.

Assume u, v are two admissible solutions to (1.2). Then we have

$$\begin{aligned}&\sigma _k(W_u)=u^{p-1} f,\quad \sigma _k( W_v)=v^{p-1} f.\end{aligned}$$
(4.9)

Multiplying the first equation in (4.9) by v and integrating over \({\mathbb S}^n\), we have, by the Alexandrov–Fenchel inequality (2.2),

$$\begin{aligned} \int _{{\mathbb S}^n} vu^{p-1} f= & {} \int _{{\mathbb S}^n} v\sigma _k(W_u)=V_{k+1}(v, u,\ldots , u) \nonumber \\\ge & {} V_{k+1}(u,u,\ldots ,u)^{\frac{k}{k+1}} V_{k+1}(v, v,\ldots ,v) ^{\frac{1}{k+1}}\nonumber \\= & {} \left( \int _{{\mathbb S}^n} u^{p}f\right) ^{\frac{k}{k+1}} \left( \int _{{\mathbb S}^n} v^{p}f\right) ^{\frac{1}{k+1}}. \end{aligned}$$
(4.10)

On the other hand, using Hölder’s inequality,

$$\begin{aligned} \int _{{\mathbb S}^n} vu^{p-1} f\le \left( \int _{{\mathbb S}^n} v^{p}f\right) ^{\frac{1}{p}} \left( \int _{{\mathbb S}^n} u^{p}f\right) ^{\frac{{p-1}}{p}}. \end{aligned}$$
(4.11)

Combining (4.10) and (4.11), in view of \(1<p<k+1\), we obtain

$$\begin{aligned} \int _{{\mathbb S}^n} u^{p}f\le \int _{{\mathbb S}^n} v^{p}f. \end{aligned}$$

Similar argument by interchanging the role of u and v gives

$$\begin{aligned} \int _{{\mathbb S}^n} v^{p}f\le \int _{{\mathbb S}^n} u^{p}f. \end{aligned}$$

Thus all the above inequalities are equalities. In particular, equality holds in (4.10), and hence in the Alexandrov–Fenchel inequality (2.2), so that \(v = au+\sum _{l=1}^{n+1} a_lx_l\). In view of (4.9) and \(k\ne p-1\), we must have \(u\equiv v.\)

We now prove the existence. Denote

$$\begin{aligned} f_t=\left( tf^{-\frac{1}{p-1+k}}+(1-t)\left( {\begin{array}{c}n\\ k\end{array}}\right) ^{-\frac{1}{p-1+k}}\right) ^{-(p-1+k)} \hbox { for } t\in [0, 1]. \end{aligned}$$
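Note that, by construction,

$$\begin{aligned} f_t^{-\frac{1}{p-1+k}}=tf^{-\frac{1}{p-1+k}}+(1-t)\left( {\begin{array}{c}n\\ k\end{array}}\right) ^{-\frac{1}{p-1+k}}, \end{aligned}$$

so \(\nabla ^2 f_t^{-\frac{1}{p-1+k}}+f_t^{-\frac{1}{p-1+k}} g_{\mathbb {S}^n}=t\big (\nabla ^2 f^{-\frac{1}{p-1+k}}+f^{-\frac{1}{p-1+k}} g_{\mathbb {S}^n}\big )+(1-t)\left( {\begin{array}{c}n\\ k\end{array}}\right) ^{-\frac{1}{p-1+k}}g_{\mathbb {S}^n}\ge 0\) whenever f satisfies (1.3).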

Hence each \(f_t\) is even and satisfies (1.3). Consider the equation

$$\begin{aligned} \sigma _k(\nabla ^2 u+ug_{{\mathbb S}^n})=u^{p-1}f_t. \end{aligned}$$
(4.12)

Let

$$\begin{aligned} S=\{t\in [0,1]| (4.12) \hbox { has a positive, even solution }u_t\hbox { with }W_{u_t}> 0\}. \end{aligned}$$

It is clear that \(u_0\equiv 1\) is a positive, even solution of (4.12) with \(W_{u_0}> 0\) for \(t=0\). Thus S is non-empty.

Next we show S is open. The linearized operator at u is given by

$$\begin{aligned} L_u(v):=\sigma _k^{ij}(W_u)(W_v)_{ij}-(p-1)u^{p-2}vf_t=\sigma _k^{ij}(W_u)(W_v)_{ij}-(p-1) u^{-1}v\sigma _k(W_u). \end{aligned}$$

Suppose \(L_u(v)=0\). Then

$$\begin{aligned} \sigma _k^{ij}(W_u)(W_v)_{ij}-(p-1)u^{-1}v\sigma _k(W_u)=0. \end{aligned}$$
(4.13)

Multiplying (4.13) by u and integrating over \({\mathbb S}^n\), we have

$$\begin{aligned} k\int _{{\mathbb S}^n} v\sigma _k(W_u)=\int _{{\mathbb S}^n} u\sigma _k^{ij}(W_u)(W_v)_{ij}=(p-1)\int _{{\mathbb S}^n} v\sigma _k(W_u) \end{aligned}$$
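The first equality above follows from the polarization identity \(\sigma _k^{ij}(W_u)(W_v)_{ij}=k\,\sigma _k(W_u, \ldots , W_u, W_v)\) together with the symmetry of the multilinear form in Lemma 2.1 (1), which gives

$$\begin{aligned} \int _{{\mathbb S}^n} u\,\sigma _k(\underbrace{W_u, \ldots , W_u}_{k-1}, W_v)d\mu _{{\mathbb S}^n}=\int _{{\mathbb S}^n} v\,\sigma _k(\underbrace{W_u, \ldots , W_u}_{k})d\mu _{{\mathbb S}^n}=\int _{{\mathbb S}^n} v\,\sigma _k(W_u)d\mu _{{\mathbb S}^n}. \end{aligned}$$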

Since \(k\ne p-1\), we have \(\int _{{\mathbb S}^n} v\sigma _k(W_u)=0\). On the other hand, multiplying (4.13) by v and integrating over \({\mathbb S}^n\), we have

$$\begin{aligned} kV_{k+1}(v,v, u, \ldots , u)=\int _{{\mathbb S}^n} v\sigma _k^{ij}(W_u)(W_v)_{ij}=(p-1)\int _{{\mathbb S}^n} u^{-1}v^2\sigma _k(W_u) \end{aligned}$$

Since \(V_{k+1}(v, u, \ldots , u)=\int _{{\mathbb S}^n} v\sigma _k(W_u)=0\), by the Alexandrov–Fenchel inequality (2.2), we see

$$\begin{aligned} V_{k+1}(v,v, u, \ldots , u)\le 0. \end{aligned}$$

Thanks to \(p>1\), we have \(\int _{{\mathbb S}^n} u^{-1}v^2\sigma _k(W_u) \le 0\), which implies \(v\equiv 0\). Hence the kernel of the linearized operator of the equation is trivial. By the implicit function theorem, for each \(t_0\in S\), there exists a neighborhood \({\mathcal {N}}\) of \(t_0\) such that there exists a positive solution \(u_t\) of (4.12) with \(W_{u_t}> 0\) for \(t\in {\mathcal {N}}\). Since \(f_t\) is even, it follows from the uniqueness result that \(u_t\) must be even. Hence, \({\mathcal {N}}\subset S\) and S is open.

We now prove the closedness of S. Let \(\{t_i\}_{i=1}^\infty \subset S\) be a sequence such that \(t_i\rightarrow t_0\) and let \({u_{t_i}}\) be a positive even solution to (4.12) with \(W_{u_{t_i}}>0\) for \(t=t_i\). By virtue of the a priori estimates in Proposition 4.2, there exists a subsequence, still denoted by \(u_{t_i}\), which converges to some function u in the \(C^{l+1}\) norm. In particular, u is an even solution to (4.12) for \(t=t_0\). Suppose \(W_u\) is not positive definite; then \(W_u\) is positive semi-definite and \(\det (W_u)(x_0)=0\) for some \(x_0\in {\mathbb S}^n\). Since \(f_{t_0}\) satisfies (1.3), we know from the constant rank theorem that \(W_u\) must be positive definite, a contradiction. Therefore, \(t_0\in S\) and S is closed.

We conclude that \(S=[0,1]\) and (4.12) with \(t=1\), which is (1.2), has a positive even solution u with \(W_u>0\). The proof is completed. \(\square \)

5 Examples

For the Minkowski problem for the p-sum with \(1< p< n+1\), a \(C^2\) convex hypersurface solving the \(L^p\) Minkowski problem does not always exist, even if f is a smooth positive function. A series of counterexamples was constructed in [9]. The arguments in [9] can be extended to construct similar examples for equation (1.2).

Let \(\alpha =\frac{k}{k-p+1}\). Set \(u(x)=(1-x_{n+1})^\alpha \), where \(x= (x_1,\ldots , x_n, x_{n+1})=:(x', x_{n+1})\in {\mathbb S}^n\subset \mathbb {R}^{n+1}.\) We view the open hemisphere \({\mathbb S}^n_+\), centered at the north pole, as a graph over \(\{x'\in \mathbb {R}^n: |x'|^2<1\}\). The metric g and its inverse \(g^{-1}\) on \({\mathbb S}^n_+\) are

$$\begin{aligned} g_{ij}=\delta _{ij}+\frac{x_ix_j}{1-|x'|^2},\quad g^{ij}=\delta _{ij}-x_ix_j \end{aligned}$$

and the Christoffel symbol is

$$\begin{aligned} \Gamma _{ij}^l=g_{ij}x_l. \end{aligned}$$

In these local coordinates, \(u(x)=(1-\sqrt{1-|x'|^2})^\alpha \). By a direct computation, we have

$$\begin{aligned} g^{il}\nabla ^2_{jl} u+u\delta _{ij}= & {} g^{il}(\partial _j\partial _l u-\Gamma _{jl}^m\partial _m u)+u\delta _{ij}\\= & {} (1-\sqrt{1-|x'|^2})^{\alpha -1}\left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) \delta _{ij}\\&+\,\alpha (\alpha -1)(1-\sqrt{1-|x'|^2})^{\alpha -2}x_ix_j \end{aligned}$$

Using \((\alpha -1)k=\alpha (p-1)\), we see

$$\begin{aligned}&\sigma _k(g^{il}\nabla ^2_{jl} u+u\delta _{ij})=u^{p-1} f \end{aligned}$$

where

$$\begin{aligned}&f=\sigma _k\left[ \left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) \delta _{ij}+\alpha (\alpha -1)\frac{x_ix_j}{1-\sqrt{1-|x'|^2}}\right] . \end{aligned}$$

It is clear that the eigenvalues of the matrix \((\delta _{ij}+b\frac{x_ix_j}{|x|^2})\) are 1 with multiplicity \(n-1\) and \(1+b\) with multiplicity 1. Thus

$$\begin{aligned} f= & {} \left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) ^k\\&\quad \times \left[ \left( {\begin{array}{c}n-1\\ k\end{array}}\right) +\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \left( 1+\frac{\alpha (\alpha -1)|x'|^2}{(1-\sqrt{1-|x'|^2})\left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) }\right) \right] \\= & {} \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) ^k\\&\quad +\,\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \frac{\alpha (\alpha -1)|x'|^2}{1-\sqrt{1-|x'|^2}}\left( (\alpha -1) \sqrt{1-|x'|^2}+1\right) ^{k-1}. \end{aligned}$$

Since

$$\begin{aligned} \sqrt{1-|x'|^2}=1-\frac{1}{2}|x'|^2-\frac{1}{8}|x'|^4+o(|x'|^4), \end{aligned}$$

we have

$$\begin{aligned} f= & {} \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( \alpha ^k -\frac{1}{2} k\alpha ^{k-1}(\alpha -1)|x'|^2\right) \\&+\,2\alpha (\alpha -1)\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \left( 1-\frac{1}{4}|x'|^2\right) \\&\quad \left[ \alpha ^{k-1}-\frac{1}{2}(k-1)\alpha ^{k-2}(\alpha -1)|x'|^2\right] +o(|x'|^2)\\= & {} \left[ \left( {\begin{array}{c}n\\ k\end{array}}\right) +2(\alpha -1)\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \right] \alpha ^k\\&-\frac{1}{2}\alpha ^{k-1}(\alpha -1)\left[ k\left( {\begin{array}{c}n\\ k\end{array}}\right) +\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \big (\alpha +\,2(k-1)(\alpha -1)\big )\right] \\&\quad |x'|^2+o(|x'|^2). \end{aligned}$$

Since \(\alpha >1\), it is direct to see that

$$\begin{aligned} f(|x'|)>0\hbox { and }f''(|x'|)<0 \hbox { near }x'=0. \end{aligned}$$
(5.1)

Hence \(\nabla ^2 f^{-\frac{1}{p-1+k}}+f^{-\frac{1}{p-1+k}} I\ge 0\) is satisfied near the north pole. As in [9], using a lemma in [7], one may patch together a global convex solution to Eq. (1.2) with some positive function f such that the solution is equal to \((1-x_{n+1})^\alpha \) near the north pole. That is, \(u=0\) at the north pole and condition (1.3) is satisfied near the north pole.

Next, we will construct a solution to (1.2) for some positive smooth function f satisfying condition (1.3) everywhere, while u touches 0. This shows that a \(C^2\) convex hypersurface solving the k-th Christoffel–Minkowski problem for the p-sum with \(1< p< k+1\) does not always exist, even if f is a smooth positive function such that (1.3) holds. Hence, the evenness assumption on f cannot be dropped in Theorem 1.1.

Proposition 5.1

There exists some \(0<{{\bar{p}}}<k\), such that for \(0<p-1\le {{\bar{p}}}\), there is some positive function \(f\in C^\infty ({\mathbb S}^n)\) satisfying (1.3) and a solution u to (1.2) such that \((\nabla ^2 u+u g_{\mathbb {S}^n})\ge 0\) and \(u=0\) at some point. Moreover, u is not \(C^3\).

Proof

Choose a local orthonormal frame \(\{e_i\}_{i=1}^n\) on \({\mathbb S}^n\). For the coordinate functions \(x_l, l=1, 2, \ldots , n+1\), we know \(\nabla ^2_{ij} x_{l}+x_l \delta _{ij}=0\). Since \(|\nabla x_l|^2+x_l^2= |\nabla x_j|^2+x_j^2\) for any \(j\ne l\) and \(\sum _{l=1}^{n+1}(|\nabla x_l|^2+x_l^2)= \sum _{i=1}^n |e_i|^2+|x|^2=n+1\), we get \(|\nabla x_l|^2+x_l^2=1\) for any \(l=1, 2, \ldots , n+1\).

Let \(u(x)=(1-x_{n+1})^\alpha .\) By direct computations,

$$\begin{aligned} \nabla _j u= & {} -\alpha (1-x_{n+1})^{\alpha -1}\nabla _j x_{n+1},\\ \nabla ^2_{ij} u= & {} \alpha (\alpha -1)(1-x_{n+1})^{\alpha -2}\nabla _i x_{n+1}\nabla _j x_{n+1}\\&\quad -\alpha (1-x_{n+1})^{\alpha -1}\nabla ^2_{ij} x_{n+1}\\= & {} \alpha (\alpha -1)(1-x_{n+1})^{\alpha -2}\nabla _i x_{n+1}\nabla _j x_{n+1}\\&\quad +\alpha (1-x_{n+1})^{\alpha -1}x_{n+1}\delta _{ij}. \end{aligned}$$

Thus

$$\begin{aligned} \nabla ^2_{ij} u+u\delta _{ij}= & {} \alpha (\alpha -1)(1-x_{n+1})^{\alpha -2}\nabla _i x_{n+1}\nabla _j x_{n+1}\\&\quad +(1-x_{n+1})^{\alpha -1}(1+(\alpha -1)x_{n+1})\delta _{ij}\\= & {} (1-x_{n+1})^{\alpha -1}(1+(\alpha -1)x_{n+1})\\&\quad \left[ \delta _{ij}+\frac{\alpha (\alpha -1)\nabla _i x_{n+1}\nabla _j x_{n+1}}{(1-x_{n+1})(1+(\alpha -1)x_{n+1})}\right] . \end{aligned}$$

Therefore

$$\begin{aligned} \sigma _k(\nabla ^2_{ij} u+u\delta _{ij})= & {} (1-x_{n+1})^{k(\alpha -1)}(1+(\alpha -1)x_{n+1})^k\\&\times \left[ \left( {\begin{array}{c}n-1\\ k\end{array}}\right) +\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \left( 1+\frac{\alpha (\alpha -1)|\nabla x_{n+1}|^2}{(1-x_{n+1})(1+(\alpha -1)x_{n+1})}\right) \right] \\= & {} u^\frac{k(\alpha -1)}{\alpha }(1+(\alpha -1)x_{n+1})^{k-1}\\&\times \left[ \left( {\begin{array}{c}n\\ k\end{array}}\right) (1+(\alpha -1)x_{n+1})+\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \alpha (\alpha -1)(1+x_{n+1})\right] \\= & {} \frac{(n-1)!}{k!(n-k)!}u^\frac{k(\alpha -1)}{\alpha }(1+(\alpha -1)x_{n+1})^{k-1}\\&\times \left[ n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1)x_{n+1}\right] . \end{aligned}$$

In the second equality we used the fact \(|\nabla x_{n+1}|^2=1-x_{n+1}^2\).

Let \(f(x)=\frac{(n-1)!}{k!(n-k)!}(1+(\alpha -1)x_{n+1})^{k-1}\left[ n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1)x_{n+1}\right] \) and \(\alpha =\frac{k}{k-p+1}\). Then \(u(x)=(1-x_{n+1})^\alpha \) is a solution to

$$\begin{aligned} \sigma _k(\nabla ^2_{ij} u+u\delta _{ij})=u^{p-1} f \hbox { on }{\mathbb S}^n. \end{aligned}$$
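Indeed, the choice \(\alpha =\frac{k}{k-p+1}\) gives

$$\begin{aligned} \frac{k(\alpha -1)}{\alpha }=k-\frac{k}{\alpha }=k-(k-p+1)=p-1, \end{aligned}$$

so the factor \(u^{\frac{k(\alpha -1)}{\alpha }}\) in the computation above is exactly \(u^{p-1}\).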

Now let us analyze the function f on \({\mathbb S}^n\). First, f is smooth. Second, it is direct to check that when \(\alpha <2\), i.e. \({p-1}<\frac{k}{2}\), we have \(f>0\).

We claim that when \(0<\alpha -1\) lies in a certain range, i.e., \({p-1}\le {{\bar{p}}}\), f satisfies the convexity condition \((\nabla ^2 f^{-\frac{1}{k+{p-1}}}+ f^{-\frac{1}{k+{p-1}}} I)>0\).

Let \({{\tilde{g}}}={{\tilde{f}}}^{-\frac{1}{k+{p-1}}}\), where

$$\begin{aligned} \tilde{f}=\frac{k!(n-k)!}{(n-1)!}f= (1+(\alpha -1)x_{n+1})^{k-1}\left[ n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1)x_{n+1}\right] . \end{aligned}$$

We need to show \(\left( \frac{\nabla ^2_{ij}{{\tilde{g}}}}{\tilde{g}}+\delta _{ij}\right) >0\). To simplify the notation, we denote \(y=x_{n+1}\). Direct computations give

$$\begin{aligned} \frac{\nabla _i {{\tilde{g}}}}{{{\tilde{g}}}}= -\frac{\alpha -1}{k+{p-1}}\left[ \frac{k-1}{1+(\alpha -1)y}+\frac{n+k\alpha }{n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y}\right] \nabla _i y. \end{aligned}$$
$$\begin{aligned} \frac{\nabla ^2_{ij} {{\tilde{g}}}}{{{\tilde{g}}}}= & {} \frac{\nabla _i {{\tilde{g}}} \nabla _j {{\tilde{g}}}}{{{\tilde{g}}}^2}-\frac{\alpha -1}{k+{p-1}}\left[ \frac{k-1}{1+(\alpha -1)y}+\frac{n+k\alpha }{n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y}\right] \nabla ^2_{ij} y\\&+\,\frac{\alpha -1}{k+{p-1}}\left[ \frac{(k-1)(\alpha -1)}{(1+(\alpha -1)y)^2}+\frac{(n+k\alpha )^2(\alpha -1)}{[n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y]^2}\right] \nabla _{i} y\nabla _j y. \end{aligned}$$

Using \(\nabla ^2_{ij} y=-y\delta _{ij}\), we have

$$\begin{aligned} \frac{\nabla ^2_{ij} {{\tilde{g}}}}{{{\tilde{g}}}}+\delta _{ij}= & {} \left\{ \frac{\alpha -1}{k+{p-1}}\left[ \frac{k-1}{1+(\alpha -1)y}+\frac{n+k\alpha }{n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y}\right] y+1\right\} \delta _{ij}\\&\qquad +\,\Bigg \{\frac{\alpha -1}{k+{p-1}}\left[ \frac{(k-1)(\alpha -1)}{(1+(\alpha -1)y)^2}+\,\frac{(n+k\alpha )^2(\alpha -1)}{[n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y]^2}\right] \\&\qquad +\,\left( \frac{\alpha -1}{k+{p-1}}\right) ^2\left[ \frac{k-1}{1+(\alpha -1)y} +\frac{n+k\alpha }{n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y}\right] ^2\Bigg \}\nabla _{i} y\nabla _j y. \end{aligned}$$

Notice that the coefficient of \(\nabla _{i} y\nabla _j y\) on the right-hand side of the above equation is always positive. To ensure that \(\left( \frac{\nabla ^2_{ij} \tilde{g}}{{{\tilde{g}}}}+\delta _{ij}\right) \) is positive definite, we only need the coefficient of \(\delta _{ij}\) on the right-hand side to be positive, i.e.,

$$\begin{aligned} \frac{\alpha -1}{k+{p-1}}\left[ \frac{k-1}{1+(\alpha -1)y}+\frac{n+k\alpha }{n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y}\right] y+1>0. \end{aligned}$$
(5.2)

Note that \(k+{p-1}=k+\frac{k(\alpha -1)}{\alpha }=k\frac{2\alpha -1}{\alpha }\) and the denominators in (5.2) are positive when \(\alpha <2\) and \(y\in [-1,1]\). Hence inequality (5.2) is equivalent to

$$\begin{aligned} Q(y)= & {} \alpha (\alpha -1)\big \{(k-1)[n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y]+(n+k\alpha )[1+(\alpha -1)y] \big \}y\\&+\,k(2\alpha -1)[1+(\alpha -1)y][n+k\alpha (\alpha -1)+(n+k\alpha )(\alpha -1) y]>0. \end{aligned}$$

By regrouping,

$$\begin{aligned} Q(y)= & {} k(3\alpha -1)(\alpha -1)^2(n+k\alpha )y^2\\&+\,k(\alpha -1)\{\alpha [n+(k-1)\alpha (\alpha -1)+\alpha ]+(2\alpha -1)(2n+k\alpha ^2)\}y\\&+\,k(2\alpha -1)[n+k\alpha (\alpha -1)]. \end{aligned}$$

By computation, we see that when \(0<\alpha -1\) is close to 0, i.e., \({{\bar{p}}}\) is sufficiently small,

$$\begin{aligned} Q(-1)>0\hbox { and }Q'(-1)>0\hbox { and }Q''(y)>0 \hbox { for }y\in [-1, 1]. \end{aligned}$$

Since \(Q''(y)>0\) and \(Q'(-1)>0\), \(Q'\) is positive on \([-1,1]\); hence Q is increasing there and \(Q(y)\ge Q(-1)>0\). Therefore, when \({p-1}\le {{\bar{p}}}\), Q(y) is positive for \(y\in [-1, 1]\).

In conclusion, for \(0<p-1\) small, we have constructed a globally defined function u which solves \(\sigma _k(\nabla ^2_{ij} u+u\delta _{ij})=u^{p-1} f\) for a smooth, positive function f satisfying \((\nabla ^2 f^{-\frac{1}{k+{p-1}}}+ f^{-\frac{1}{k+{p-1}}} I)>0\). However, u has a zero. \(\square \)

In this example, \((\nabla ^2 u+u g_{\mathbb {S}^n})\) is not of full rank at some point. This implies that for such f, the Gauss map fails to be regular and the convex body with support function u is not \(C^2\). However, in the next section we will show that the solution to the PDE (1.2) for \(\frac{k+1}{2}\le p<k+1\) always has bounded second derivatives, and is \(C^{2}\) when \(p>\frac{k+1}{2}\), provided f is \(C^2\).

6 \(C^2\) estimate for \(p\ge \frac{k+1}{2}\)

To prove Theorem 1.2, we consider the following perturbed equation

$$\begin{aligned} \sigma _k(\nabla ^2 u+(u+\epsilon ) g_{{\mathbb S}^n})=u^{p-1} f, \end{aligned}$$
(6.1)

for \(\epsilon >0\).

First of all, we prove the following existence result for an auxiliary equation.

Proposition 6.1

For any \(v \in C^4({\mathbb S}^n)\) with \(v>0\) and any positive \(f\in C^4({\mathbb S}^n)\), there exists a unique solution \(u\in C^{5,\alpha }({\mathbb S}^n)\) (\(0<\alpha <1\)) with \((\nabla ^2 u+vg_{{\mathbb S}^n})\in \Gamma _k\), which we denote by \(T_f(v)\), to

$$\begin{aligned} \sigma _k(\nabla ^2 u+v g_{{\mathbb S}^n})=u^{p-1}f. \end{aligned}$$
(6.2)

Moreover, there exists some constant C, depending on \(n, k, p, \alpha ,\Vert v\Vert _{C^4}, \Vert f\Vert _{C^4}, \min v, \min f\), such that

$$\begin{aligned} \Vert u\Vert _{C^{5,\alpha }}\le C. \end{aligned}$$

Proof

Step 1 A priori estimate for (6.2).

Let \(u(x_0)=\min u\). Then

$$\begin{aligned} \left( {\begin{array}{c}n\\ k\end{array}}\right) v^k(x_0) \le \sigma _k(\nabla ^2 u+v g_{{\mathbb S}^n})(x_0)= u(x_0)^{p-1}f(x_0). \end{aligned}$$

It follows that \(u\ge u(x_0)\ge c>0.\) Similarly, we have \(u\le C\).

Denote \(w_{ij}=u_{ij}+v\delta _{ij}.\) Note that \(w_{iiss}=w_{ssii}+2w_{ii}-2w_{ss}-v_{ii}+v_{ss}\) for any i, s. For the \(C^2\) estimate, we can apply the same argument as in the proof of Proposition 4.2 to \(\mathrm {tr}(w)= \Delta u+nv\). Once we have the \(C^2\) estimate and the positive lower bound of u, (6.2) is uniformly elliptic. By the Evans–Krylov and Schauder theories, we obtain higher order estimates.

Step 2 Existence and uniqueness for (6.2).

To prove the uniqueness, let u and \({\tilde{u}}\) be two solutions. Then the difference \(h= u-{\tilde{u}}\) satisfies \(a_{ij}(x)h_{ij}+c(x)h=0,\) where \((a_{ij}(x))\) is positive definite and \(c(x)<0\). Thus \(h\equiv 0\) by the strong maximum principle.
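Here one may take, writing \(u_\theta =\theta u+(1-\theta ){\tilde{u}}\),

$$\begin{aligned} a_{ij}(x)=\int _0^1 \sigma _k^{ij}(\nabla ^2 u_\theta +v g_{\mathbb {S}^n})\,d\theta ,\qquad c(x)=-(p-1)f(x)\int _0^1 u_\theta ^{p-2}(x)\,d\theta . \end{aligned}$$

Since \(\Gamma _k\) is convex, \(\nabla ^2 u_\theta +v g_{\mathbb {S}^n}\in \Gamma _k\) and hence \((a_{ij})\) is positive definite, while \(c<0\) because \(p>1\), \(f>0\) and \(u, {\tilde{u}}>0\).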

We use the continuity method to prove the existence. We set \(f_0:=\frac{1}{v^{p-1}}\sigma _k(\nabla ^2 v+vg_{{\mathbb S}^n})\) and \(f_t=(1-t)f_0+tf\). Consider (6.2) with \(f=f_t\). It is easy to see that \(u_0\equiv v\) is the unique solution to (6.2) for \(f=f_0\). Next, the linearized operator \(L_{u_t}\) is self-adjoint and its kernel is trivial. Thus the openness follows from the standard implicit function theorem. The closedness follows from the a priori estimates in Step 1. Therefore, we obtain the existence for (6.2) via the continuity method. \(\square \)

Next we show the existence for the perturbed Eq. (6.1).

Proposition 6.2

Let \(\epsilon >0\) and \(\frac{k+1}{2}\le p<k+1\). There exists a solution \(u\in C^4({\mathbb S}^n)\) with \((\nabla ^2 u+(u+\epsilon )g_{{\mathbb S}^n})\in \Gamma _k\) to (6.1). Moreover, there exist some positive constants \(c_\epsilon \) and \(C_\epsilon \), depending on \(n, k, p, \Vert f\Vert _{C^4}, \min f\) and \(\epsilon \), such that

$$\begin{aligned} u\ge c_\epsilon \hbox { and } \Vert u\Vert _{C^{5,\alpha }}\le C_\epsilon . \end{aligned}$$

Proof

Step 1 A priori estimate for (6.1).

From the equation, \(u>0\) automatically. Let \(u(x_0)=\min u\), then

$$\begin{aligned} \left( {\begin{array}{c}n\\ k\end{array}}\right) \epsilon ^k\le \sigma _k(\nabla ^2 u+(u+\epsilon ) g_{{\mathbb S}^n})(x_0)= u(x_0)^{p-1}f(x_0). \end{aligned}$$

A positive lower bound \(u\ge c_\epsilon \) follows. One may follow the same arguments as in the previous sections to prove the \(C^1\) and \(C^2\) estimates, depending on \(c_\epsilon \). We remark that for these arguments one only needs to assume \((\nabla ^2 u+(u+\epsilon )g_{{\mathbb S}^n})\in \Gamma _k\), see Remark 4.1.

Step 2 Existence for (6.1).

We use the degree theory to prove the existence. Denote \(f_t=(1-t)\left( {\begin{array}{c}n\\ k\end{array}}\right) (1+\epsilon )^k+tf\) for \(t\in [0,1]\). For any \(\omega \in C^4({\mathbb S}^n)\) and \(f_t\) we consider

$$\begin{aligned} \sigma _k(\nabla ^2 u+(e^\omega +\epsilon ) g_{{\mathbb S}^n})=u^{p-1}f_t(x). \end{aligned}$$
(6.3)

From Proposition 6.1, there exists a unique positive solution \(T_{f_t}(e^\omega +\epsilon )\) to (6.3). Define an operator

$$\begin{aligned} {\tilde{T}}_t: C^4\rightarrow & {} C^{5,\alpha }\\ \omega\mapsto & {} \log T_{f_t}(e^\omega +\epsilon ). \end{aligned}$$

It follows from the a priori estimate in Proposition 6.1 that \({\tilde{T}}_t\) is compact.

It is easy to see that \(\omega \) is a fixed point of \({\tilde{T}}_t\), i.e., \(\omega ={{\tilde{T}}}_t(\omega )\), if and only if \(u=e^\omega \) is a solution to (6.1) with \((\nabla ^2 u+(u+\epsilon ) g_{{\mathbb S}^n})\in \Gamma _k\). Therefore, by using the a priori estimates in Step 1, we see that any fixed point of \({{\tilde{T}}}_t\) is not on the boundary of

$$\begin{aligned} S_K=\{\omega \in C^4: \Vert \omega \Vert _{C^4}\le K\} \end{aligned}$$

when K is sufficiently large, depending on \(\epsilon \).

By the degree theory, \(\deg (I-{{\tilde{T}}}_t, S_K, 0)\) is well defined and independent of t.

Claim For \(t=0\), \(u_0\equiv 1\) is the unique solution to (6.1) with \(f=f_0\) and the linearized operator \(L_{u_0}\) at \(u_0\equiv 1\) is injective.

To show this claim, we need the a priori estimate from Propositions 6.3 and 6.4 below, where we assume \(p\ge \frac{k+1}{2}\).

First, there are no other solutions of (6.1) near \(u_0\equiv 1\). The linearized operator of Eq. (6.1) at \(u_0\equiv 1\) is given by

$$\begin{aligned} L_{u_0}\rho= & {} \sigma _k^{ij}((1+\epsilon )g_{{\mathbb S}^n})(\nabla ^2_{ij}\rho +\rho g_{ij})-(p-1)f_0\rho \\= & {} (1+\epsilon )^{k-1}\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) (\Delta \rho +n\rho )-(p-1)\left( {\begin{array}{c}n\\ k\end{array}}\right) (1+\epsilon )^k\rho . \end{aligned}$$

Since the first eigenvalue of \(\Delta \) on \({\mathbb S}^n\) is n, we see that the kernel of \(L_{u_0}\) is trivial, namely, \(L_{u_0}\) is injective. Thus the assertion follows by the implicit function theorem.

Second, for \(\epsilon >0\) small, there exist no solutions other than \(u_0\equiv 1\). Suppose there are \(\epsilon _l\rightarrow 0\) and non-constant solutions \(u_l\) for each \(\epsilon _l\). By the a priori estimates independent of \(\epsilon \) from Propositions 6.3 and 6.4, there is a subsequence, still denoted by \(\{u_{l}\}\), with \(u_l \rightarrow {{\tilde{u}}}\) in \(C^{1,\alpha }\), where \({{\tilde{u}}}\in C^{1,1}({\mathbb S}^n)\) is a solution of the un-perturbed Eq. (1.2) with \(f=f_0\). It follows from the previous step that \(u_l\) is uniformly away from \(u_0\equiv 1\), so \({{\tilde{u}}}\) is not the constant 1, which contradicts the uniqueness for (1.2).

Third, for any \(\epsilon >0\) such that \(u>0\) solves (6.1) with \(f=f_0\), the uniqueness holds. This follows immediately from the previous two steps. We finish the proof of the claim.

We turn back to the proof of the existence. Since \(L_{u_0}\) is injective, the derivative \({\tilde{T}}_0'\) in \(C^4\) is injective. The degree can be computed as \(\deg (I-{{\tilde{T}}}_0, S_K, 0)=(-1)^\beta \), where \(\beta \) is the number of eigenvalues of \({\tilde{T}}_0'\) greater than one. In any case, \(\deg (I-{{\tilde{T}}}_t, S_K, 0)=\deg (I-{{\tilde{T}}}_0, S_K, 0)=(-1)^\beta \) is not equal to zero. Therefore we have the existence for (6.1) for any \(t\in [0,1]\), in particular for \(t=1\). The assertion follows. \(\square \)

We now show the a priori estimates independent of \(\epsilon \). The arguments in the proof of the \(C^1\) estimate in Section 3 yield the \(C^1\) estimate for solutions to (6.1).

Proposition 6.3

Let \(\epsilon \ge 0\). Let u be a solution to (6.1) with \((\nabla ^2 u+(u+\epsilon )g_{{\mathbb S}^n})\in \Gamma _k\). Then there exists some positive constant C, depending on \(n, k, p, \min f\) and \(\Vert f\Vert _{C^1}\), but independent of \(\epsilon \), such that

$$\begin{aligned} \Vert u\Vert _{C^1}\le C. \end{aligned}$$

Next, we show that, in the case \(\frac{k+1}{2}\le p<k+1\), Eq. (6.1) admits a \(C^2\) estimate independent of \(\epsilon \).

Proposition 6.4

Let \(\epsilon \ge 0\). Assume \(\frac{k+1}{2}\le p<k+1\). Let u be a solution to (6.1) with \((\nabla ^2 u+(u+\epsilon )g_{{\mathbb S}^n})\in \Gamma _k\). Then there exists a nonnegative constant \(\alpha =\alpha (p,k,n)\), depending only on p, k, n, with \(\alpha (p,k,n)> 0\) if \(p>\frac{k+1}{2}\), and there is some positive constant C depending on \(n, k, p, \min f\) and \(\Vert f\Vert _{C^2}\), but independent of \(\epsilon \), such that

$$\begin{aligned} \left\| \frac{\nabla ^2 u}{u^{\alpha }}\right\| _{C^0}\le C. \end{aligned}$$

Proof

For \(k=1\), the standard theory of linear elliptic PDEs gives the \(C^2\) estimate. Hence we consider \(k\ge 2\). In the following proof we denote \(W_u^\epsilon =\nabla ^2 u+(u+\epsilon )I\). It is sufficient to prove an upper bound for \(\frac{\sigma _1(W_u^\epsilon )}{u^{\alpha }}\), since \(W_u^\epsilon \in \Gamma _2\).

Let \(y_0\in {\mathbb S}^n\) be a maximum point of \(\frac{|\nabla u|^2}{u^{1+\alpha }}\). Then

$$\begin{aligned} \nabla |\nabla u|^2(y_0)=(1+\alpha )|\nabla u|^2\frac{\nabla u}{u}(y_0). \end{aligned}$$

It follows that

$$\begin{aligned} \frac{(\nabla ^2 u+(u+\epsilon )I)}{u^{\alpha }}\cdot \nabla u(y_0)=\left( \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}+\frac{(u+\epsilon )}{u^{\alpha }}\right) \nabla u(y_0). \end{aligned}$$

Thus \(\left( \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}+\frac{(u+\epsilon )}{u^{\alpha }}\right) (y_0)\) is an eigenvalue of \(\frac{(\nabla ^2 u+(u+\epsilon )I)}{u^{\alpha }}(y_0)\). Since \(W_u^\epsilon \in \Gamma _2\), we have

$$\begin{aligned} \max \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}= & {} \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}(y_0) \nonumber \\\le & {} \left( \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}+\frac{(u+\epsilon )}{u^{\alpha }}\right) (y_0)\nonumber \\\le & {} \frac{\sigma _1 (W_u^\epsilon )}{u^{\alpha }}(y_0)\le \max \frac{\sigma _1 (W_u^\epsilon )}{u^{\alpha }}. \end{aligned}$$
(6.4)

Let \(x_0\) be a maximum point of \(\frac{\sigma _1 (W_u^\epsilon )}{u^{\alpha }}\) and by a choice of local frame and a rotation of coordinates we assume \(g_{ij}=\delta _{ij}\) and \(W_u^\epsilon \) is diagonal at \(x_0\). By the maximal condition, at \(x_0\),

$$\begin{aligned} \nabla \sigma _1=\alpha \sigma _1\frac{\nabla u}{u},\quad \frac{\nabla ^2\sigma _1}{\sigma _1}-\frac{\nabla ^2 u^{\alpha }}{u^{\alpha }}\le 0. \end{aligned}$$

In the following we compute at \(x_0\). Assuming \(0\le \alpha \le 1\), and using Ricci’s identity,

$$\begin{aligned} 0\ge & {} \sigma _k^{ii}[\frac{(\sigma _1)_{ii}}{\sigma _1}-\frac{\alpha u_{ii}}{u}-\alpha (\alpha -1)\frac{u_i^2}{u^2}]\nonumber \\\ge & {} \sigma _k^{ii}\frac{(W^\epsilon _{iiss}+W^\epsilon _{ss} -nW^\epsilon _{ii})}{\sigma _1}-\alpha k\frac{\sigma _k}{u}+(n+1-k)\alpha \sigma _{k-1}\nonumber \\ {}= & {} \frac{\Delta \sigma _k-\sigma _k^{ij,lm}W^\epsilon _{ijs}W^\epsilon _{lms}-nk\sigma _k}{\sigma _1}-\alpha k\frac{\sigma _k}{u}+(n+1-k)(1+\alpha ) \sigma _{k-1}. \end{aligned}$$
(6.5)

From the Eq. (6.1), we have

$$\begin{aligned} \Delta \sigma _k= & {} (p-1) u^{p-2}\Delta u f+(p-1)(p-2)u^{p-3}|\nabla u|^2 f\nonumber \\&+\,2(p-1)u^{p-2}\nabla u\nabla f+u^{p-1} \Delta f\nonumber \\= & {} (p-1)u^{p-2}\sigma _1 f+(p-1)(p-2)u^{p-3}|\nabla u|^2f\nonumber \\ {}&+\,2(p-1)u^{p-2}\nabla u\nabla f+u^{p-1} \Delta f-n(p-1)u^{p-2}(u+\epsilon ) f. \end{aligned}$$
(6.6)

Since \(\nabla \sigma _1=\alpha \sigma _1\frac{\nabla u}{u}\), \(\sigma _k=u^{p-1}f\) and \(\nabla \sigma _k=(p-1)u^{p-2}f\nabla u+u^{p-1}\nabla f\), we deduce from (2.4) in Lemma 2.2 that

$$\begin{aligned} -\sigma _k^{ij,lm}W^\epsilon _{ijs}W^\epsilon _{lms} \ge -\beta u^{p-3}|\nabla u|^2 f+c_1 u^{p-2}\nabla u\nabla f+c_2u^{p-1} \frac{|\nabla f|^2}{f}, \end{aligned}$$
(6.7)

where \(c_1, c_2\) are constants under control and

$$\begin{aligned} \beta =(p-1-\alpha )\frac{(k-2)(p-1)+k\alpha }{k-1}. \end{aligned}$$
(6.8)

It follows from (6.6) and (6.7) that

$$\begin{aligned}&\Delta \sigma _k-\sigma _k^{ij,lm}W^\epsilon _{ijs}W^\epsilon _{lms}\nonumber \\&\ge (p-1) u^{p-2}\sigma _1 f+\left[ (p-1)(p-2)-\beta \right] u^{p-3}|\nabla u|^2 f\nonumber \\&\quad +\,c_1u^{p-2}\nabla u\nabla f-C u^{p-1} \end{aligned}$$
(6.9)

where C depends on \(k, p, n, \Vert f\Vert _{C^2}\) and \(\min f\).

By (6.5) and (6.9),

$$\begin{aligned} 0\ge & {} (p-1-k\alpha ) u^{p-2} f+(n+1-k)(1+\alpha ) \sigma _{k-1}\nonumber \\&+\, \frac{\left[ (p-1)(p-2)-\beta \right] u^{p-3}|\nabla u|^2 f+(c_1+2(p-1))u^{p-2}\nabla u\nabla f-C u^{p-1}}{\sigma _1} \end{aligned}$$
(6.10)

Note that from (6.4) we have

$$\begin{aligned} \frac{\sigma _1(W_u^\epsilon )}{u^{\alpha }}(x_0)=\max \frac{\sigma _1(W_u^\epsilon )}{u^{\alpha }} \ge \frac{(1+\alpha )|\nabla u|^2}{2u^{1+\alpha }}(x_0). \end{aligned}$$
(6.11)

As u is bounded from above, we deduce from (6.10) and (6.11) that

$$\begin{aligned} 0\ge & {} \min \left\{ (p-1-k\alpha )+\frac{2((p-1)(p-2)-\beta )}{1+\alpha }, p-1-k\alpha \right\} u^{p-2} f\nonumber \\&-\,C \frac{u^{p-\frac{3}{2}}}{\sqrt{\sigma _1}}-C\frac{u^{p-1}}{\sigma _1}+(n+1-k)(1+\alpha ) \sigma _{k-1}. \end{aligned}$$
(6.12)

In view of (6.8), if \(p\ge \frac{k+1}{2}\), we may choose \(\alpha \ge 0\) such that \(p-1-k\alpha \ge 0\) and

$$\begin{aligned}&(p-1-k\alpha )+\frac{2((p-1)(p-2)-\beta )}{1+\alpha }\\&\quad =\frac{1}{1+\alpha }\left( \frac{2}{k-1}(p-1)^2-(p-1)+\frac{k-5}{k-1}\alpha (p-1)-k\alpha (1+\alpha )+\frac{2k}{k-1}\alpha ^2\right) \ge 0. \end{aligned}$$
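For instance, at \(\alpha =0\) the expression in the parentheses reduces to

$$\begin{aligned} \frac{2}{k-1}(p-1)^2-(p-1)=\frac{(p-1)(2p-k-1)}{k-1}, \end{aligned}$$

which is nonnegative precisely when \(p\ge \frac{k+1}{2}\).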

Moreover, if \(p> \frac{k+1}{2}\), \(\alpha \) can be picked positive. By the Newton–Maclaurin inequality

$$\begin{aligned} \sigma _{k-1}\ge C_{n,k}\sigma _1^{\frac{1}{k-1}}\sigma _k^{\frac{k-2}{k-1}}, \end{aligned}$$

it follows from (6.12) that

$$\begin{aligned} 0\ge & {} (n+1-k)(1+\alpha )\sigma _1 \sigma _{k-1}-C u^{p-\frac{3}{2}}\sqrt{\sigma _1}-Cu^{p-1}\nonumber \\\ge & {} C\sigma _1^{1+\frac{1}{k-1}}(u^{p-1}f)^{\frac{k-2}{k-1}}-Cu^{p-\frac{3}{2}}\sqrt{\sigma _1}-Cu^{p-1}. \end{aligned}$$
(6.13)

Since \(p\ge \frac{k+1}{2}\), we can choose \(\alpha \ge 0\) such that \((p-1)\frac{k-2}{k-1}\le p-\frac{3}{2}-(\frac{1}{k-1}+\frac{1}{2})\alpha .\) Moreover, if \(p> \frac{k+1}{2}\), \(\alpha \) can be picked positive. By virtue of the uniform upper bound of u, we obtain \(\frac{\sigma _1}{u^\alpha }\le C\). The proof is completed. \(\square \)

Now we are ready to prove Theorem 1.2.

Proof of Theorem 1.2

For \(\epsilon >0\), let \(u_\epsilon \) be the solution of (6.1) with \((\nabla ^2 u_\epsilon + (u_\epsilon +\epsilon )g_{{\mathbb S}^n})\in \Gamma _k\). From the a priori \(C^2\) estimate independent of \(\epsilon \), there is a subsequence \(u_{\epsilon _i}\rightarrow u\) in \(C^{1,\alpha }\) for any \(\alpha <1\). The bound gives \(u\in C^{1,1}({\mathbb S}^n)\) and \(\sigma _k(\nabla ^2 u+ug_{{\mathbb S}^n})=u^{p-1}f\) with \((\nabla ^2 u+ u g_{{\mathbb S}^n})\in {{\bar{\Gamma }}}_k\).

We note that the solution u is \(C^2\) (i.e., \(\nabla ^2 u\) is continuous) if \(p>\frac{k+1}{2}\). This follows from Proposition 6.4, which gives \(|\nabla ^2u(x)|\le Cu^{\alpha }(x)\) for all \(x\in \mathbb S^n\): indeed, \(u\in C^{\infty }\) away from the null set \(\{u=0\}\), and \(\nabla ^2u\) is continuous at every point of \(\{u=0\}\) when \(\alpha >0\). \(\square \)

We discuss a special case of equation (1.2) when \(k=1\). This is the equation corresponding to the \(L^p\)-Christoffel problem. In this case, the equation is semilinear:

$$\begin{aligned} \Delta u(x)+nu(x)=u^{p-1}(x)f(x), \quad x\in \mathbb S^n.\end{aligned}$$
(6.14)

From the \(C^1\) estimate established for admissible solutions of Eq. (6.1) in the previous section and standard semilinear elliptic theory, we immediately have

Theorem 6.1

For any positive function \(f\in C^2({\mathbb S}^n)\) there exists a nonnegative solution u to (6.14) with

$$\begin{aligned} \Vert u\Vert _{C^{2,\alpha }({\mathbb S}^n)}\le C, \end{aligned}$$

for some \(0<\alpha <1\) and C depending on \(n, k, p, \alpha , \Vert f\Vert _{C^2({\mathbb S}^n)}\) and \(\min _{{\mathbb S}^n} f\).

If condition (1.3) is imposed, one may obtain a spherically convex solution u. Though \(u\in C^{2,\alpha }\), the corresponding hypersurface with u as its support function may not be \(C^{1,1}\), as \(W_u\) may degenerate on the null set of u. Equation (6.14) has a variational structure; it is of interest to develop a corresponding potential theory as in the classical Christoffel problem [1, 5].

To end this paper, we would like to raise the following two questions.

  1. Using a compactness argument as in [11], together with the a priori estimates in Proposition 4.2 and the Constant Rank Theorem 2.1, one can prove that if \(\Vert f\Vert _{C^2}+\Vert \frac{1}{f}\Vert _{C^0}\le M\) and (1.3) holds, then there exists a uniform positive constant C, depending only on n and M, such that

    $$\begin{aligned} W_u\ge C g_{\mathbb {S}^n}. \end{aligned}$$

    Is there a direct effective estimate of \(W_u\) from below under the same convexity conditions, without the use of the constant rank theorem?

  2. Under the evenness assumption, a positive lower bound of u in Proposition 4.1 has been derived via an ODE argument and a bound on \(\nabla u\) which depends on \(\nabla f\). In the case of the \(L^p\)-Minkowski problem (i.e., \(k=n\)), one may obtain a lower bound on the volume of the associated convex body \(\Omega _u\) if f is positive. Is it possible to derive such an a priori positive lower bound of \(Vol(\Omega _u)\) for solutions of Eq. (1.2) in general? This would give a positive lower bound of u.