1 Introduction

Let \({\mathbb {R}}^{n+1}\) be the \((n+1)\)-dimensional affine space equipped with the standard flat connection D. For an immersion \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) of an n-dimensional smooth manifold \(M^n\), if the position vector x(p) is transversal to \(x_*(T_pM^n)\) at each point \(p\in M^n\), we say that \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) defines a centroaffine hypersurface with centroaffine normalization. A centroaffine hypersurface is associated with the centroaffine metric h, a difference tensor K of type (1, 2), and the Tchebychev vector field defined by \(T:=\tfrac{1}{n}\mathrm{trace}_hK\). As is well known, (hK) are centroaffine invariants which determine x up to centroaffine transformations of \({\mathbb {R}}^{n+1}\). In this paper, we study locally strongly convex centroaffine hypersurfaces, i.e., those whose centroaffine metric h is definite.

The centroaffine normalization of every centroaffine hypersurface induces the identity as the Weingarten operator; hence, from the point of view of relative differential geometry, any centroaffine hypersurface is a relative hypersphere (see [20], Sects. 6.3 and 7.2). Thus, in centroaffine differential geometry the usual induced Weingarten operator contains no further geometric information. In 1994, Wang [24] made a significant contribution by introducing the operator \({\mathcal {T}}:={\hat{\nabla }} T\) as the centroaffine shape operator of the centroaffine hypersurface \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\), where \({\hat{\nabla }}\) denotes the Levi-Civita connection with respect to the centroaffine metric. Then, following [17], a centroaffine hypersurface is called a Tchebychev hypersurface if the operator \({\mathcal {T}}\) is proportional to the identity isomorphism \(\mathrm{id}:TM^n\rightarrow TM^n\), i.e., \({\mathcal {T}}=H\cdot \mathrm{id}\). Here, the function \(H:=(\mathrm{div}\,T)/n\) is in general called the centroaffine mean curvature of \(M^n\).

Being exactly the centroaffine totally umbilical hypersurfaces, Tchebychev hypersurfaces naturally generalize the notion of affine hyperspheres in equiaffine differential geometry. Indeed, it is known that Tchebychev hypersurfaces and affine hyperspheres satisfy structure equations of exactly similar form (cf. e.g. [4, 7, 24]). Because of this similarity, Tchebychev hypersurfaces have been studied extensively; for references, we refer to [1, 5, 6, 9, 11, 13, 15, 17, 18, 21]. Related to this similarity, we highlight the recent result of Cheng-Hu-Vrancken [5], which proved that the ellipsoids are the only centroaffine Tchebychev hyperovaloids. Thus, after several preceding partial results [6, 15, 17], it finally solved the longstanding problem of generalizing the well-known theorem of Blaschke and Deicke on affine hyperspheres (cf. Theorem 3.35 in [14]) from equiaffine to centroaffine differential geometry.

Another interesting problem is the classification of non-degenerate affine hyperspheres with constant sectional curvature. It was solved by Li-Penn [12] and Vrancken-Li-Simon [23] in the locally strongly convex case, and finally by Vrancken [22] for non-degenerate affine hyperspheres with nonzero Pick invariant. Motivated by the above-mentioned similarity, we have the following natural and interesting problem:

Problem Classify all non-degenerate centroaffine Tchebychev hypersurfaces in \({\mathbb {R}}^{n+1}\) with constant sectional curvature.

Concerning this problem, there are only some partial results. As an early result, the flat centroaffine Tchebychev surfaces were classified by Liu and Wang (see Theorem 4.2 of [17]). For arbitrary dimensions, Li and Wang [16] presented a classification of the canonical centroaffine hypersurfaces with \(N(h)\le 1\), where N(h) denotes the dimension of the maximal negative definite subspace of the centroaffine metric h induced by x. Here, a centroaffine hypersurface in \({\mathbb {R}}^{n+1}\) is said to be canonical if it has flat centroaffine metric and parallel cubic form. Then, Cheng-Hu-Moruz [3] (Corollary 1.1 therein) classified all the locally strongly convex (i.e., \(N(h)=0\) or \(N(h)=n\)) canonical centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\). Recently, Lalléchère et al. [13] proved that flat hyperbolic centroaffine Tchebychev hypersurfaces in \({\mathbb {R}}^4\) must be canonical. Then, applying the result of [16], one can obtain a classification of all flat hyperbolic centroaffine Tchebychev hypersurfaces in \({\mathbb {R}}^4\).

In this paper, we focus on the above problem for locally strongly convex centroaffine Tchebychev hypersurfaces. First, motivated by Liu and Wang’s classification theorem for all centroaffine surfaces with vanishing centroaffine shape operator (see Theorem 4.1 in [17]), we prove the following theorem for higher dimensions:

Theorem 1.1

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\ (n\ge 3)\) be a locally strongly convex centroaffine hypersurface with constant sectional curvature and vanishing centroaffine shape operator. Then, \(x(M^n)\) is locally centroaffinely equivalent to one of the following hypersurfaces:

  (i) a hyperquadric centered at the origin of \({\mathbb {R}}^{n+1}\), which is a proper affine hypersphere;

  (ii) \(x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n+1}}_{n+1}=1\), where either \(\alpha _i>0\) for \(1\le i\le n+1\); or \(\alpha _j>0\) for \(2\le j\le n+1\) and \(\alpha _1+\cdots +\alpha _{n+1}<0\);

  (iii) \(x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n-1}}_{n-1}(x^2_n+x^2_{n+1})^{ \alpha _n}\exp (\alpha _{n+1}\arctan \tfrac{x_n}{x_{n+1}})=1\), where \(\alpha _i<0\) for \(1\le i\le n-1\) and \(\alpha _1+\cdots +\alpha _{n-1}+2\alpha _n>0\);

  (iv) \(x_{n+1}=\tfrac{1}{2x_1}(x^2_2+\cdots +x^2_{v-1})-x_1(-\ln x_1+\alpha _v\ln x_v +\cdots +\alpha _n\ln x_n)\), where \(2\le v\le n+1\), \(\alpha _i>0\) for \(v\le i\le n\), and \(\alpha _v+\cdots +\alpha _n<1\).

Remark 1.1

Theorem 1.1 partially generalizes Theorem 1.3 of Li and Wang [16]. Moreover, by Corollary 1.1 of Cheng-Hu-Moruz [3], the examples (ii), (iii) and (iv) exhaust all locally strongly convex canonical centroaffine hypersurfaces of \({\mathbb {R}}^{n+1}\). The detailed calculations for these canonical hypersurfaces will be given in Sect. 3 below.

Next, by definition, a hyperbolic centroaffine hypersurface is locally strongly convex with \(N(h)=0\). Then, motivated by the facts that Liu and Wang [17] classified all flat centroaffine Tchebychev surfaces in \({\mathbb {R}}^3\) and that Lalléchère et al. [13] classified all flat hyperbolic centroaffine Tchebychev hypersurfaces in \({\mathbb {R}}^4\), as the second main result of this paper we prove the following classification theorem for all flat hyperbolic centroaffine Tchebychev hypersurfaces in \({\mathbb {R}}^5\).

Theorem 1.2

Let \(x:M^4\rightarrow {\mathbb {R}}^5\) be a flat hyperbolic centroaffine Tchebychev hypersurface. Then, \(x(M^4)\) is centroaffinely equivalent to the hypersurface defined by:

$$\begin{aligned} x^{\alpha _1}_1x^{\alpha _2}_2x^{\alpha _3}_3x^{\alpha _4}_4x^{\alpha _5}_5=1, \end{aligned}$$

where \(\alpha _i>0\) for \(1\le i\le 5\).

Remark 1.2

We point out that the statement of Theorem 1 in Lalléchère et al. [13] is not correct. Indeed, by Examples 1.1–1.2 and Theorem 1.3 in [16], all the hypersurfaces described in Theorem 1 of [13] except \(x^{\alpha _1}_1x^{\alpha _2}_2x^{\alpha _3}_3x^{\alpha _4}_4=1\) with \(\alpha _i>0\) for \(1\le i\le 4\) have the property \(N(h)=1\).

This paper is organized as follows. In Sect. 2, we review the elementary facts on centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\) and recall general properties of Tchebychev hypersurfaces. In Sect. 3, for a better understanding of our results, we give detailed calculations for the canonical centroaffine hypersurfaces described in Theorem 1.1. In Sect. 4, we present a proof of Theorem 1.1. In Sect. 5, we construct a typical orthonormal frame on flat hyperbolic centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\). Then, by applying the Codazzi equations and the Tchebychev condition with respect to this typical orthonormal frame, we derive information on the connection coefficients and the difference tensor in Sects. 6 and 7, respectively. Finally, in Sect. 8, we complete the proof of Theorem 1.2.

2 Preliminaries

In this section, we briefly review some basic notions and facts about centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\). For more details, we refer to the monographs [14, 19, 20] and the references [8, 10, 24].

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\ (n\ge 2)\) be a locally strongly convex immersed centroaffine hypersurface as stated in the Introduction. Then, for any vector fields X and Y tangent to \(M^n\), we have the centroaffine formula of Gauss:

$$\begin{aligned} D_Xx_{*}(Y)=x_{*}(\nabla _XY)+h(X,Y)(-\varepsilon x), \end{aligned}$$
(2.1)

where \(\varepsilon =\pm 1\) is chosen such that h is positive definite. Associated with (2.1), \(-\varepsilon x\) is called the centroaffine normal, and \(\nabla \) and h are called the induced (centroaffine) connection and the centroaffine metric induced by \(-\varepsilon x\), respectively. The centroaffine hypersurface is said to be of elliptic type (resp. hyperbolic type) if \(\varepsilon =1\) (resp. \(\varepsilon =-1\)). For the geometric interpretation of the type of a hypersurface, see Sect. 2 of [10]. However, we note that Li and Wang [16] fixed x to be the centroaffine normal, so that the centroaffine metric is negative definite for elliptic type centroaffine hypersurfaces.
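
As an elementary illustration of (2.1) and of the notion of type, one can check with a short symbolic computation that the unit sphere in \({\mathbb {R}}^3\) is of elliptic type and that its centroaffine metric coincides with the induced Euclidean metric. The following sympy sketch uses the standard latitude–longitude parametrization (an illustrative choice, not the only one):

```python
import sympy as sp

u, v = sp.symbols('u v')
# Unit sphere in R^3; the position vector x is transversal to the
# tangent plane wherever this parametrization is regular.
x = sp.Matrix([sp.cos(u) * sp.cos(v), sp.cos(u) * sp.sin(v), sp.sin(u)])
xu, xv = x.diff(u), x.diff(v)

# Decompose x_{ij} = a x_u + b x_v + c_{ij} x (Gauss formula with x itself
# as transversal field); then h_{ij} = -eps * c_{ij} as in (2.1).
frame = sp.Matrix.hstack(xu, xv, x)
c = sp.zeros(2, 2)
for i, a in enumerate((u, v)):
    for j, b in enumerate((u, v)):
        coeffs = frame.solve(x.diff(a).diff(b))
        c[i, j] = sp.simplify(coeffs[2])

# c = diag(-1, -cos(u)^2) is negative definite, so eps = +1 (elliptic
# type) and h = -c coincides with the first fundamental form.
h = sp.simplify(-c)
g = sp.Matrix([[xu.dot(xu), xu.dot(xv)], [xv.dot(xu), xv.dot(xv)]])
assert sp.simplify(h - g) == sp.zeros(2, 2)
```
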

Denote by \({\hat{\nabla }}\) the Levi-Civita connection of the centroaffine metric h. Then the tensor K, defined by \(K(X,Y):=K_XY:=\nabla _X Y-\hat{\nabla }_XY\), is called the difference tensor of the centroaffine hypersurface; K is symmetric since both connections \(\nabla \) and \({\hat{\nabla }}\) are torsion free. We also have a totally symmetric (0, 3)-type tensor \(C:=\nabla h\), called the cubic form of the centroaffine hypersurface. Moreover, K and C are related by

$$\begin{aligned} C(X,Y,Z)=-2h(K(X,Y),Z),\ \ \forall \,X,Y,Z\in TM^n. \end{aligned}$$
(2.2)

Then, associated to a centroaffine hypersurface \(x: M^n\rightarrow {\mathbb {R}}^{n+1}\), we can define the Tchebychev form \(T^\flat \) and the Tchebychev vector field T in implicit form by

$$\begin{aligned} T^\flat (X)=\tfrac{1}{n}\mathrm{trace}_h(K_X),\ \ h(T,X)=T^\flat (X),\ \ \forall X\in TM^n. \end{aligned}$$
(2.3)

It should be pointed out that if T vanishes, or equivalently \(\mathrm{trace}_h(K_X)=0\) for any tangent vector X, then \(x: M^n\rightarrow {\mathbb {R}}^{n+1}\) reduces to a so-called proper affine hypersphere centered at the origin of \({\mathbb {R}}^{n+1}\) (cf. p. 279 of [14] or Sects. 1.15.2–1.15.3 therein).

For a centroaffine hypersurface \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\), the covariant derivative \({\hat{\nabla }} T\) carries deep geometric meaning. Indeed, Wang [24] showed that

$$\begin{aligned} h({\hat{\nabla }}_XT,Y)=h(X,{\hat{\nabla }}_YT), \end{aligned}$$
(2.4)

which implies that the Tchebychev vector field T is a closed vector field, in the sense that the Tchebychev form \(T^\flat \) is a closed form. Furthermore, Wang [24] considered the endomorphism \({\mathcal {T}}:TM^n\rightarrow TM^n\), defined by

$$\begin{aligned} {\mathcal {T}}(X):={\hat{\nabla }}_XT,\ \ \forall X\in TM^n. \end{aligned}$$
(2.5)

In [17], \({\mathcal {T}}\) is called the Tchebychev operator of \(x:M^n \rightarrow {\mathbb {R}}^{n+1}\). Thus, (2.4) implies that the Tchebychev operator \({\mathcal {T}}\) is self-adjoint with respect to the centroaffine metric h. It is worth mentioning that the geometric properties derived by Wang [24] allowed him to define \({\mathcal {T}}\) as the centroaffine shape operator of \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\); accordingly, the well-defined function \(H:=(\mathrm{trace}_h {\mathcal {T}})/n=(\mathrm{div}\,T)/n\) is called the centroaffine mean curvature of \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\).

Denote by \({\hat{R}}\) the Riemannian curvature tensor of the centroaffine metric h. Then the equations of Gauss and Codazzi are given by, respectively,

$$\begin{aligned}&\displaystyle {\hat{R}}(X,Y)Z=\varepsilon (h(Y,Z)X-h(X,Z)Y)-[K_X,K_Y]Z, \end{aligned}$$
(2.6)
$$\begin{aligned}&\displaystyle ({\hat{\nabla }}_ZK)(X,Y)=({\hat{\nabla }}_XK)(Z,Y). \end{aligned}$$
(2.7)

Next, we choose an h-orthonormal tangential frame field \(\{e_1,\ldots ,e_n\}\) with dual frame field \(\{\omega ^1,\ldots ,\omega ^n\}\) and Levi-Civita connection forms \(\{\omega ^j_i\}\). We shall use the index convention \(1\le i,j,k,l,m,p,q\le n\). We denote by \(K_{ij}^k=h(K_{e_i}e_j,e_k)\) and \(T^i=\tfrac{1}{n}\sum _jK^i_{jj}\) the components of K and T, and by \(K_{ij,l}^k\) and \(T^j_{,i}\) the components of the covariant derivatives \({\hat{\nabla }} K\) and \({\hat{\nabla }} T\) with respect to \(\{e_i\}\), respectively, defined by

$$\begin{aligned} \sum _lK_{ij,l}^k\omega ^l= & {} dK_{ij}^k+\sum _lK_{lj}^k\omega ^i_l+\sum _lK_{il}^k\omega ^j_l +\sum _lK_{ij}^l\omega ^k_l, \\ \sum _iT_{,i}^j\omega ^i= & {} dT^j+\sum _iT^i\omega ^j_i. \end{aligned}$$

Denote by \(R_{ijkl}\) the components of the Riemannian curvature tensor of the centroaffine metric h. Then, (2.6) and (2.7) can be written as:

$$\begin{aligned} R_{ijkl}= & {} \varepsilon (\delta _{ik}\delta _{jl} -\delta _{il}\delta _{jk}) +\sum _m(K_{il}^mK_{jk}^m-K_{ik}^mK_{jl}^m), \end{aligned}$$
(2.8)
$$\begin{aligned} K_{ij,l}^k= & {} K_{il,j}^k. \end{aligned}$$
(2.9)

Taking the contraction of the Gauss equation (2.8) twice, we derive that

$$\begin{aligned} R_{ij}= & {} \varepsilon (n-1)\delta _{ij}+\sum _{m,k}K_{ik}^mK_{jk}^m-n\sum _mT^mK_{ij}^m, \end{aligned}$$
(2.10)
$$\begin{aligned} n(n-1)\kappa= & {} R=n(n-1)(J+\varepsilon )-n^2\Vert T\Vert ^2, \end{aligned}$$
(2.11)

where \(R_{ij}\) and R denote the components of the Ricci tensor and the scalar curvature of the centroaffine metric h, respectively, and that

$$\begin{aligned} \Vert T\Vert ^2:=\sum _{i}(T^i)^2,\ \ J:=\tfrac{1}{n(n-1)}\sum _{i,j,k}(K_{ij}^k)^2. \end{aligned}$$

As usual, J and \(\kappa \) are called the centroaffine Pick invariant and the normalized scalar curvature of the centroaffine hypersurface, respectively.
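
For the reader's convenience, we note how (2.11) follows from (2.10): contracting once more and using \(\sum _iK^m_{ii}=nT^m\) together with the definition of J, we find

$$\begin{aligned} R=\sum _iR_{ii}=\varepsilon n(n-1)+\sum _{i,k,m}(K_{ik}^m)^2-n\sum _m T^m\cdot nT^m =n(n-1)(J+\varepsilon )-n^2\Vert T\Vert ^2. \end{aligned}$$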

Similarly, the second covariant derivative \(K_{ij,lp}^{q}\) of K is defined by

$$\begin{aligned} \sum _{p}K_{ij,lp}^{q}\omega ^p= & {} dK_{ij,l}^{q}+\sum _{p}K_{pj,l}^{q}\omega _p^i\nonumber \\&+\sum _{p}K_{ip,l}^{q}\omega _p^j+\sum _{p}K_{ij,p}^{q}\omega _p^l +\sum _{p}K_{ij,l}^{p}\omega _{p}^{q}. \end{aligned}$$
(2.12)

Then, we have the Ricci identity

$$\begin{aligned} K^q_{il,jp}=K^{q}_{il,pj} +\sum _{m}K^{q}_{ml}R_{mijp}+\sum _{m}K^{q}_{im}R_{mljp} +\sum _{m}K^{m}_{il}R_{mqjp}. \end{aligned}$$
(2.13)

As stated in the first section, a centroaffine hypersurface is called Tchebychev if its centroaffine shape operator satisfies \({\mathcal {T}}=H\cdot \mathrm{id}\). As an important subclass of centroaffine hypersurfaces, Tchebychev hypersurfaces have remarkable properties. For instance, using the assumption \({\mathcal {T}}=H\cdot \mathrm{id}\), the equality \(h({\hat{R}}(X,T)T,T)=0\) and the relation

$$\begin{aligned} ({\hat{\nabla }}_X{\mathcal {T}})(Y)-({\hat{\nabla }}_Y{\mathcal {T}})(X)={\hat{R}}(X,Y)T, \end{aligned}$$
(2.14)

we derive the following formulas:

$$\begin{aligned} {\hat{\nabla }}\Vert T\Vert ^2=2HT,\ \ \Vert T\Vert ^2{\hat{\nabla }}{H}=T(H)T, \end{aligned}$$
(2.15)

where \({\hat{\nabla }}\) denotes the gradient operator with respect to the centroaffine metric h. In particular, from (2.14) and the second formula in (2.15), we further obtain that

$$\begin{aligned} \Vert T\Vert ^2{\hat{R}}(X,Y)T=-T(H)(h(T,Y)X-h(T,X)Y). \end{aligned}$$
(2.16)

3 Computations on Canonical Centroaffine Hypersurfaces

In [16], Li and Wang made some calculations on the canonical centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\). In this section, to obtain precise knowledge of the locally strongly convex canonical centroaffine hypersurfaces, we compute their centroaffine invariants in more detail. For convenience of calculation, we begin by fixing x to be the centroaffine normal, and at the end we determine whether each hypersurface is of hyperbolic or elliptic type.

Example 3.1

Given \(\alpha =(\alpha _1,\ldots ,\alpha _{n+1})\in {\mathbb {R}}^{n+1}\) that satisfies either (1) \(\alpha _i>0\) for \(1\le i\le n+1\); or (2) \(\alpha _j>0\) for \(2\le j\le n+1\) and \(\alpha _1+\alpha _2+\cdots +\alpha _{n+1}<0\), we define

$$\begin{aligned}&\begin{aligned} M^{(1)}_\alpha =\left\{ x\in {\mathbb {R}}^{n+1}\,|\, x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n+1}}_{n+1}=1\right\} . \end{aligned} \end{aligned}$$
(3.1)

Claim 3.1

\(M^{(1)}_\alpha \) is a locally strongly convex canonical centroaffine hypersurface in \({\mathbb {R}}^{n+1}\). Moreover, \(M^{(1)}_\alpha \) is of hyperbolic type in case (1), and of elliptic type in case (2).

Proof of Claim 3.1

Put \(\beta _j=-\tfrac{\alpha _j}{\alpha _1}\), \(2\le j\le n+1\). Then, \(M^{(1)}_\alpha \) can be rewritten as

$$\begin{aligned} x^{\beta _2}_2x^{\beta _3}_3\cdots x^{\beta _{n+1}}_{n+1}=x_1, \end{aligned}$$
(3.2)

where, in case (1) we have \(\beta _j<0\) for \(2\le j\le n+1\); whereas in case (2) we have \(\beta _j>0\) for \(2\le j\le n+1\) and \(\beta _2+\cdots +\beta _{n+1}<1\).

Now, taking local coordinates \((u_2,\ldots ,u_{n+1})\) of \(M^{(1)}_\alpha \) such that

$$\begin{aligned} \begin{aligned} x=(x_1,x_2,\ldots ,x_{n+1}):=(e^{\beta _2u_2+\beta _3u_3+\cdots +\beta _{n+1}u_{n+1}},e^{u_2}, \ldots ,e^{u_{n+1}}), \end{aligned} \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} x_{u_j}&=(\beta _je^{\beta _2u_2+\ldots +\beta _{n+1}u_{n+1}},0,\ldots ,0,e^{u_j},0,\ldots ,0),\ \ 2\le j\le n+1,\\ x_{u_ju_k}&=(\beta _j\beta _ke^{\beta _2u_2+\ldots +\beta _{n+1}u_{n+1}},0,\ldots ,0,\delta _{jk}e^{u_j},0,\ldots ,0),\ \ 2\le j,k\le n+1. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} \left[ x_{u_2},\ldots ,x_{u_{n+1}},x_{u_ju_k}\right]&=(-1)^{n+2}(\beta _j\beta _k-\delta _{jk}\beta _j)e^{(1+\beta _2)u_2+\cdots +(1+\beta _{n+1})u_{n+1}},\\ \left[ x_{u_2},\ldots ,x_{u_{n+1}},x\right]&=(-1)^{n+2}\left( 1-\sum ^{n+1}_{i=2}\beta _i\right) e^{(1+\beta _2)u_2+\cdots +(1+\beta _{n+1})u_{n+1}}, \end{aligned} \end{aligned}$$

where, here and in the sequel, \([\cdot ]\) denotes the standard determinant in \({\mathbb {R}}^{n+1}\).

By using \(x_{u_iu_j}=\sum \nolimits _{k=2}^{n+1}\Gamma ^k_{ij}x_{u_k}+h_{ij}x\), we have

$$\begin{aligned} h_{ij}=\frac{\left[ x_{u_2},x_{u_3},\ldots ,x_{u_{n+1}},x_{u_iu_j}\right] }{\left[ x_{u_2},x_{u_3},\ldots ,x_{u_{n+1}},x\right] } =\frac{\beta _i\beta _j-\delta _{ij}\beta _j}{1-\sum ^{n+1}_{k=2}\beta _k},\ \ 2\le i,j\le n+1.\qquad \end{aligned}$$
(3.3)

Since all the components \(h_{ij}\) in (3.3) are constant, the centroaffine metric h is flat. Moreover, from the identity

$$\begin{aligned} \begin{aligned} \left( 1-\sum ^{n+1}_{k=2}\beta _k\right) \sum _{i,j=2}^{n+1}h_{ij}z_iz_j&=\left( \sum _{i=2}^{n+1}\beta _iz_i\right) ^2-\sum _{i=2}^{n+1}\beta _iz_i^2\\&=-\left( 1\!-\!\sum _{k=2}^{n+1}\beta _k\right) \sum _{i=2}^{n+1}\beta _iz_i^2\!-\!\sum _{2\le i<j\le n+1}\beta _i\beta _j(z_i-z_j)^2, \end{aligned} \end{aligned}$$

we see that the matrix \((h_{ij})\) is positive (resp. negative) definite, or equivalently, the hypersurface \(M^{(1)}_\alpha \) is hyperbolic (resp. elliptic) in case (1) (resp. case (2)).
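
The quadratic-form identity displayed above can be confirmed by a short symbolic computation; the following sympy sketch checks it for \(n=3\) (i.e., indices \(2\le i\le 4\); the identity for general n is verified in the same way):

```python
import sympy as sp

# Symbolic check of the quadratic-form identity above for n = 3;
# b[i] and z[i] play the roles of beta_i and z_i (indices 2..4).
b = sp.symbols('b2 b3 b4')
z = sp.symbols('z2 z3 z4')
S = sum(b)

lhs = sum(bi * zi for bi, zi in zip(b, z))**2 \
      - sum(bi * zi**2 for bi, zi in zip(b, z))
rhs = -(1 - S) * sum(bi * zi**2 for bi, zi in zip(b, z)) \
      - sum(b[i] * b[j] * (z[i] - z[j])**2
            for i in range(3) for j in range(i + 1, 3))

assert sp.expand(lhs - rhs) == 0
```
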

Next, by definition and (3.3), the difference tensor of \(M^{(1)}_\alpha \) is given by

$$\begin{aligned} K^k_{ij}=\Gamma ^k_{ij}=\frac{\left[ x_{u_2},\ldots ,x_{u_iu_j},\ldots ,x_{u_{n+1}},x\right] }{\left[ x_{u_2},\ldots ,x_{u_k},\ldots ,x_{u_{n+1}},x\right] },\ \ 2\le i,j,k\le n+1, \end{aligned}$$
(3.4)

where in the numerator \(x_{u_iu_j}\) occupies the same position as \(x_{u_k}\) does in the denominator. Then, straightforward calculations give that

$$\begin{aligned} K^k_{ij}=\left\{ \begin{aligned}&\frac{\beta _i\beta _j}{\sum ^{n+1}_{l=2}\beta _l-1},\qquad \quad i\ne j;\\&1+\frac{\beta _i(\beta _i-1)}{\sum ^{n+1}_{l=2}\beta _l-1},\quad \ i=j=k;\\&\frac{\beta _i(\beta _i-1)}{\sum ^{n+1}_{l=2}\beta _l-1},\qquad \quad i=j\ne k. \end{aligned}\right. \end{aligned}$$
(3.5)

Since the components in (3.3) and (3.5) are all constant, while the Christoffel symbols of the flat metric h vanish in the coordinates \((u_2,\ldots ,u_{n+1})\), we get \({\hat{\nabla }} K=0\). Thus, as claimed, \(M^{(1)}_\alpha \) is a locally strongly convex canonical centroaffine hypersurface in \({\mathbb {R}}^{n+1}\) in both cases (1) and (2). \(\square \)
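
The closed formulas (3.3) and (3.5) can also be double-checked symbolically. The following sympy sketch does so in the surface case \(n=2\) (so \(x_2^{\beta _2}x_3^{\beta _3}=x_1\) in \({\mathbb {R}}^3\)), computing \(h_{ij}\) and \(\Gamma ^k_{ij}\) directly from the determinant quotients:

```python
import sympy as sp

# Verify (3.3) and (3.5) for n = 2: x = (e^{b2 u2 + b3 u3}, e^{u2}, e^{u3}).
u2, u3, b2, b3 = sp.symbols('u2 u3 b2 b3')
x = sp.Matrix([sp.exp(b2*u2 + b3*u3), sp.exp(u2), sp.exp(u3)])
du = (u2, u3)
xu = [x.diff(t) for t in du]
den = sp.Matrix.hstack(xu[0], xu[1], x).det()

b = [b2, b3]
S = b2 + b3
for i in range(2):
    for j in range(2):
        xij = x.diff(du[i]).diff(du[j])
        # h_{ij} from the determinant quotient (3.3)
        h_ij = sp.simplify(sp.Matrix.hstack(xu[0], xu[1], xij).det() / den)
        d_ij = 1 if i == j else 0
        assert sp.simplify(h_ij - (b[i]*b[j] - d_ij*b[j])/(1 - S)) == 0
        for k in range(2):
            # Gamma^k_{ij} by Cramer's rule as in (3.4)
            cols = [xu[0], xu[1], x]
            cols[k] = xij
            g = sp.simplify(sp.Matrix.hstack(*cols).det() / den)
            if i != j:
                expected = b[i]*b[j]/(S - 1)
            elif i == k:
                expected = 1 + b[i]*(b[i] - 1)/(S - 1)
            else:
                expected = b[i]*(b[i] - 1)/(S - 1)
            assert sp.simplify(g - expected) == 0
```
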

Example 3.2

Given \(\alpha =(\alpha _1,\ldots ,\alpha _{n+1})\in {\mathbb {R}}^{n+1}\) with \(\alpha _i<0\) (\(1\le i\le n-1\)) and \(\alpha _1+\cdots +\alpha _{n-1}+2\alpha _n>0\), we define

$$\begin{aligned}&\begin{aligned} M^{(2)}_{\alpha }\!=\!\left\{ x\in {\mathbb {R}}^{n+1}\,|\, x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n-1}}_{n-1}(x^2_n\!+\!x^2_{n+1})^{ \alpha _n}\exp (\alpha _{n+1}\arctan \tfrac{x_n}{x_{n+1}})=1\right\} . \end{aligned}\nonumber \\ \end{aligned}$$
(3.6)

Claim 3.2

\(M^{(2)}_\alpha \) is a locally strongly convex canonical centroaffine hypersurface in \({\mathbb {R}}^{n+1}\) of elliptic type.

Proof of Claim 3.2

Put \(\beta _i=-\tfrac{\alpha _i}{\alpha _1}\) for \(2\le i\le n+1\). Then, \(M^{(2)}_\alpha \) can be rewritten as

$$\begin{aligned} \begin{aligned} x^{\beta _2}_2\cdots x^{\beta _{n-1}}_{n-1}(x^2_n+x^2_{n+1})^{\beta _n} \exp ({\beta _{n+1}\arctan \tfrac{x_n}{x_{n+1}}})=x_1, \end{aligned} \end{aligned}$$

where \(\beta _i<0\) for \(2\le i\le n-1\) and \(\beta _2+\cdots +\beta _{n-1}+2\beta _n>1\).

Taking local coordinates \((u_2,\ldots ,u_{n+1})\) of \(M^{(2)}_\alpha \) such that

$$\begin{aligned} \left\{ \begin{aligned} x_1&=e^{\beta _2u_2+\cdots +\beta _{n-1}u_{n-1}+2\beta _nu_n+\beta _{n+1}u_{n+1}},\\ x_2&=e^{u_2},\ \ x_3=e^{u_3},\ \ \ldots ,\ \ x_{n-1}=e^{u_{n-1}},\\ x_n&=e^{u_n}\sin u_{n+1},\ \ x_{n+1}=e^{u_n}\cos u_{n+1}, \end{aligned}\right. \end{aligned}$$

we have, letting \(\langle \cdot ,\cdot \rangle \) be the usual Euclidean inner product in \({\mathbb {R}}^{n+1}\),

$$\begin{aligned} \left\{ \begin{aligned}&x_{u_j}=(\beta _je^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,e^{u_j},0,\ldots ,0,0,0),\ 2\le j\le n-1,\\&x_{u_n}=(2\beta _ne^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,e^{u_n}\sin u_{n+1},e^{u_n} \cos u_{n+1}),\\&x_{u_{n+1}}=(\beta _{n+1}e^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,e^{u_n}\cos u_{n+1}, -e^{u_n}\sin u_{n+1}), \end{aligned}\right. \end{aligned}$$

where \({\bar{\beta }}=(\beta _2,\ldots ,\beta _{n-1},2\beta _n,\beta _{n+1})\), \(u=(u_2, \ldots ,u_{n+1})\). Moreover, for \(2\le i,j\le n-1\),

$$\begin{aligned} \left\{ \begin{aligned}&x_{u_iu_j}=(\beta _i\beta _je^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,\delta _{ij}e^{u_i},0, \ldots ,0,0,0),\\&x_{u_iu_n}=(2\beta _i\beta _ne^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,0,0),\ \ x_{u_iu_{n+1}}=(\beta _i\beta _{n+1}e^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,0,0),\\&x_{u_nu_n}=(4\beta ^2_ne^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,e^{u_n}\sin u_{n+1}, e^{u_n}\cos u_{n+1}),\\&x_{u_nu_{n+1}}=(2\beta _n\beta _{n+1}e^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0,e^{u_n} \cos u_{n+1},-e^{u_n}\sin u_{n+1}),\\&x_{u_{n+1}u_{n+1}}=(\beta ^2_{n+1}e^{\langle {\bar{\beta }},u\rangle },0,\ldots ,0, -e^{u_n}\sin u_{n+1},-e^{u_n}\cos u_{n+1}). \end{aligned}\right. \end{aligned}$$

Thus, by \(x_{u_iu_j}=\sum \nolimits _{k=2}^{n+1}\Gamma ^k_{ij}x_{u_k}+h_{ij}x\), as in (3.3), direct calculations give that

$$\begin{aligned} \left\{ \begin{aligned}&h_{ij}=\frac{\beta _i\beta _j-\delta _{ij}\beta _j}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t},\ \ h_{in}=\frac{2\beta _i\beta _n}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t},\\&h_{i,n+1}=\frac{\beta _i\beta _{n+1}}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t},\ \ h_{nn}=\frac{4\beta ^2_n-2\beta _n}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t},\\&h_{n,n+1}=\frac{2\beta _n\beta _{n+1}-\beta _{n+1}}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t},\ \ h_{n+1,n+1}=\frac{2\beta _n+\beta ^2_{n+1}}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t}, \end{aligned}\right. \end{aligned}$$
(3.7)

where \(2\le i,j\le n-1\). Since all these components are constant, the centroaffine metric h is flat.

Moreover, putting \((h_{ij})_{n\times n}=\tfrac{1}{1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t}(A_{ij})_{n\times n}\), we can calculate the leading principal minors \(\{D_i\}_{1\le i\le n}\) of the matrix \((A_{ij})_{n\times n}\) to obtain:

$$\begin{aligned} \begin{aligned} D_1&=\beta _2^2-\beta _2>0,\ \ D_2=(1-\beta _2-\beta _3)\beta _2\beta _3>0,\ \ \ldots ,\\ D_{n-2}&=(1-\beta _2-\cdots -\beta _{n-1})\prod _{i=2}^{n-1}(-\beta _i)>0,\\ D_{n-1}&=2\beta _n(\beta _2+\cdots +\beta _{n-1}+2\beta _n-1)\prod _{i=2}^{n-1}(-\beta _i)>0,\\ D_n&=(4\beta _n^2+\beta _{n+1}^2)(\beta _2+\cdots +\beta _{n-1}+2\beta _n-1)\prod _{i=2}^{n-1}(-\beta _i)>0. \end{aligned} \end{aligned}$$
(3.8)

Hence, noting that \(1-2\beta _n-\sum ^{n-1}_{t=2}\beta _t<0\), the matrix \((h_{ij})_{n\times n}\) is negative definite, or equivalently, \(M^{(2)}_{\alpha }\) is of elliptic type.
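
As a concrete numerical illustration of (3.7) and (3.8), take \(n=4\) with the sample values \(\beta _2=\beta _3=-1\), \(\beta _4=2\), \(\beta _5=1\) (any values satisfying the stated constraints would do); the leading principal minors of \((A_{ij})\) are then positive while the common denominator is negative:

```python
import sympy as sp

# Sample check of (3.7)-(3.8) for n = 4 with beta2 = beta3 = -1,
# beta_n = 2, beta_{n+1} = 1, so beta2 + beta3 + 2*beta_n = 2 > 1.
b2, b3, bn, bn1 = -1, -1, 2, 1
d = 1 - 2*bn - (b2 + b3)          # the common denominator in (3.7)

bs = [b2, b3]
A = sp.zeros(4, 4)
for i in range(2):                 # rows/cols 0,1 correspond to i = 2,3
    for j in range(2):
        A[i, j] = bs[i]*bs[j] - (bs[j] if i == j else 0)
    A[i, 2] = A[2, i] = 2*bs[i]*bn
    A[i, 3] = A[3, i] = bs[i]*bn1
A[2, 2] = 4*bn**2 - 2*bn
A[2, 3] = A[3, 2] = 2*bn*bn1 - bn1
A[3, 3] = 2*bn + bn1**2

# Leading principal minors of A are all positive, as in (3.8) ...
minors = [A[:k, :k].det() for k in range(1, 5)]
assert all(m > 0 for m in minors)
# ... while the denominator is negative, so the leading minors of
# h = A/d alternate in sign: h is negative definite.
assert d < 0
hm = [(A/d)[:k, :k].det() for k in range(1, 5)]
assert all((-1)**k * hm[k - 1] > 0 for k in range(1, 5))
```
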

Finally, direct calculations with the use of (3.4) and (3.7) show that all coefficients \(K_{ij}^k\) of the difference tensor are equal to \(\Gamma _{ij}^k\) which are all constant. Then, it follows that \({\hat{\nabla }} K=0\) and, as claimed, \(M^{(2)}_\alpha \) is a locally strongly convex canonical centroaffine hypersurface in \({\mathbb {R}}^{n+1}\). \(\square \)

Example 3.3

Given \(2\le v\le n+1\) and \(\alpha (v)=(\alpha _v,\ldots ,\alpha _n)\in {\mathbb {R}}^{n-v+1}\), with \(\alpha _k>0\) for \(v\le k\le n\) and \(\alpha _v+\cdots +\alpha _n<1\), we define the graph in \({\mathbb {R}}^{n+1}\) by

$$\begin{aligned} M^{(3)}_{\alpha (v)}:\ \ \left\{ \begin{aligned} x_{n+1}&=\tfrac{1}{2x_1}\sum ^{v-1}_{i=2}x^2_i+x_1\left( \ln x_1-\sum ^n_{i=v}\alpha _i\ln x_i\right) ,\ \ 3\le v\le n;\\ x_{n+1}&=x_1\left( \ln x_1-\sum \limits ^n_{i=2}\alpha _i\ln x_i\right) ,\ \ v=2;\\ x_{n+1}&=\tfrac{1}{2x_1}\sum \limits ^n_{i=2}x^2_i+x_1\ln x_1,\ \ v=n+1. \end{aligned} \right. \end{aligned}$$
(3.9)

Claim 3.3

\(M^{(3)}_{\alpha (v)}\) with \(2\le v\le n+1\) are locally strongly convex canonical centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\) of elliptic type.

Proof of Claim 3.3

Since the cases \(v=2\) and \(v=n+1\) are similar and simpler, the proofs for \(M^{(3)}_{\alpha (2)}\) and \(M^{(3)}_{\alpha (n+1)}\) are omitted. Below, we consider the cases \(3\le v\le n\). With the parameterization \((u_1,\ldots ,u_n)\), we can express \(M^{(3)}_{\alpha (v)}\) by

$$\begin{aligned} x=\left( e^{u_1},e^{u_1}u_2,\ldots ,e^{u_1}u_{v-1},e^{u_v},\ldots ,e^{u_n},e^{u_1} \left( u_1+\tfrac{1}{2}\sum ^{v-1}_{t=2}u^2_t-\sum ^n_{r=v}\alpha _ru_r\right) \right) . \end{aligned}$$

Then, as in the preceding examples, by direct calculation of \(x_{u_i}\) for \(1\le i\le n\) and \(x_{u_iu_j}\) for \(1\le i,j\le n\), together with the equation \(x_{u_iu_j}=\mathop {\sum }\nolimits _{k=1}^n\Gamma ^k_{ij}x_{u_k}+h_{ij}x\), we get

$$\begin{aligned} \left\{ \begin{aligned}&h_{11}=\frac{1}{\sum ^n_{s=v}\alpha _s-1},\ \ h_{1i}=0,\ \ h_{1k}=\frac{-\alpha _k}{\sum ^n_{s=v}\alpha _s-1},\\&h_{ij}=\frac{\delta _{ij}}{\sum ^n_{s=v}\alpha _s-1},\ \ h_{ik}=0,\ \ h_{kl}=\frac{\delta _{kl}\alpha _k}{\sum ^n_{s=v}\alpha _s-1}, \end{aligned}\right. \end{aligned}$$
(3.10)

where \(2\le i,j\le v-1\) and \(v\le k,l\le n\). Since all the components in (3.10) are constant, the centroaffine metric h is flat. Moreover, from the identity

$$\begin{aligned} \left( \sum ^n_{k=v}\alpha _k-1\right) \sum _{i,j=1}^nh_{ij}z_iz_j= & {} \left( 1-\sum ^n_{k=v}\alpha _k\right) z_1^2\nonumber \\&\quad +\sum _{i=2}^{v-1}z_i^2+\sum _{k=v}^n\alpha _k(z_1-z_k)^2, \end{aligned}$$
(3.11)

we see that the matrix \((h_{ij})_{n\times n}\) is negative definite, or equivalently, the hypersurface \(M^{(3)}_{\alpha (v)}\) is of elliptic type.
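
The identity (3.11) admits a quick symbolic check; the following sympy sketch treats the case \(n=4\), \(v=3\) (so \(i=2\) is the only quadratic index and \(k=3,4\) carry \(\alpha _3,\alpha _4\)):

```python
import sympy as sp

# Check (3.10)-(3.11) for n = 4, v = 3; a3, a4 stand for alpha_3, alpha_4.
a3, a4 = sp.symbols('a3 a4')
z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')
S = a3 + a4

# the matrix (h_ij) from (3.10), multiplied by the denominator S - 1
A = sp.Matrix([
    [1,   0, -a3, -a4],
    [0,   1,   0,   0],
    [-a3, 0,  a3,   0],
    [-a4, 0,   0,  a4],
])
z = sp.Matrix([z1, z2, z3, z4])

lhs = (z.T * A * z)[0]
rhs = (1 - S)*z1**2 + z2**2 + a3*(z1 - z3)**2 + a4*(z1 - z4)**2
assert sp.expand(lhs - rhs) == 0
```
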

Moreover, direct calculations show that all \(K_{ij}^k=\Gamma _{ij}^k\) are constant, which satisfy:

$$\begin{aligned}&\left\{ \begin{aligned}&K^1_{11}=\frac{\sum ^n_{s=v}\alpha _s-2}{\sum ^n_{s=v}\alpha _s-1},\ \ K^j_{11}=0,\ \ 2\le j\le v-1,\\&K^k_{11}=-\frac{1}{\sum ^n_{s=v}\alpha _s-1},\ \ v\le k\le n,\\&K^m_{1j}=\delta _{jm},\ \ 2\le j\le v-1,\ \ 1\le m\le n, \end{aligned}\right. \end{aligned}$$
(3.12)
$$\begin{aligned}&\left\{ \begin{aligned}&K^1_{1k}=K^m_{1k}=\frac{\alpha _k}{\sum ^n_{s=v}\alpha _s-1},\ \ K^j_{1k}=0,\ \ 2\le j\le v-1,\ v\le k,m\le n,\\&K^1_{ij}=K^m_{ij}=-\frac{\delta _{ij}}{\sum ^n_{s=v}\alpha _s-1},\ \ K^r_{ij}=0,\ 2\le i,j,r\le v-1,\ v\le m\le n,\\&K^m_{ik}=0,\ \ 2\le i\le v-1,\ \ v\le k\le n,\ \ 1\le m\le n,\\&K^1_{kl}=K^m_{kl}=-\frac{\alpha _k\delta _{kl}}{\sum ^n_{s=v}\alpha _s-1},\ v\le k,l,m\le n,\ m\ne k,l,\\&K^j_{kl}=0,\ \ K^k_{kl}=\delta _{kl}-\frac{\alpha _k\delta _{kl}}{\sum ^n_{s=v}\alpha _s-1},\ 2\le j\le v-1,\ v\le k,l\le n. \end{aligned}\right. \end{aligned}$$
(3.13)

It follows from (3.10), (3.12) and (3.13) that \({\hat{\nabla }} K=0\). Hence, as claimed, \(M^{(3)}_{\alpha (v)}\) is a locally strongly convex canonical centroaffine hypersurface in \({\mathbb {R}}^{n+1}\). \(\square \)

Before ending this section, we give further remarks concerning the preceding examples.

Remark 3.1

Given an integer \(2\le v\le n\) and real numbers \(\{{\tilde{\alpha }}_i\}_{1\le i\le v-1}\) with \({\tilde{\alpha }}_1<0\), we have the centroaffine transformation of \({\mathbb {R}}^{n+1}\)

$$\begin{aligned} \left\{ \begin{aligned}&x_1=y_1,\ \ x_i=\frac{y_i}{\sqrt{-{\tilde{\alpha }}_1}},\ 2\le i\le v-1;\ \ x_j=y_j,\ v\le j\le n;\\&x_{n+1}=-\frac{1}{{\tilde{\alpha }}_1}\left( y_{n+1}+\sum ^{v-1}_{i=2}{\tilde{\alpha }}_iy_i\right) , \end{aligned}\right. \end{aligned}$$
(3.14)

under which the locally strongly convex canonical centroaffine hypersurface \(M^{(3)}_{\alpha (v)}\), namely

$$\begin{aligned} x_{n+1}=\tfrac{1}{2x_1}\sum ^{v-1}_{i=2}x^2_i-x_1\left( -\ln x_1+\sum ^n_{k=v}\alpha _k\ln x_k\right) , \end{aligned}$$
(3.15)

is transformed into Example 1.2 of Li–Wang [16]:

$$\begin{aligned} y_{n+1}=\tfrac{1}{2y_1}\sum ^{v-1}_{i=2}y^2_i-\sum ^{v-1}_{i=2}{\tilde{\alpha }}_iy_i -y_1\left( {\tilde{\alpha }}_1\ln y_1+\sum ^n_{j=v}{\tilde{\alpha }}_j\ln y_j\right) , \end{aligned}$$
(3.16)

where \({\tilde{\alpha }}_j=-{\tilde{\alpha }}_1\alpha _j,\ v\le j\le n\).

Moreover, corresponding to \(\alpha _k>0\) for \(v\le k\le n\) and \(\alpha _v+\cdots +\alpha _n<1\) in (3.15), it holds that \({\tilde{\alpha }}_k>0\) for \(v\le k\le n\) and \({\tilde{\alpha }}_1+{\tilde{\alpha }}_v+\cdots +{\tilde{\alpha }}_n<0\) in (3.16).
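
That the transformation (3.14) carries (3.15) into (3.16) can be verified symbolically; the following sympy sketch treats the case \(n=4\), \(v=3\), where the symbols ta1, ta2 stand for \({\tilde{\alpha }}_1,{\tilde{\alpha }}_2\):

```python
import sympy as sp

# Remark 3.1 for n = 4, v = 3: substituting (3.14) into (3.15)
# reproduces (3.16), up to the nonzero factor -1/ta1.
y1, y2, y3, y4, y5 = sp.symbols('y1:6', positive=True)
a3, a4 = sp.symbols('a3 a4', positive=True)
ta1 = sp.Symbol('ta1', negative=True)     # tilde-alpha_1 < 0
ta2 = sp.Symbol('ta2')                    # tilde-alpha_2
ta3, ta4 = -ta1*a3, -ta1*a4               # tilde-alpha_j = -ta1*alpha_j

# (3.14): express x in terms of y
x1, x2, x3, x4 = y1, y2/sp.sqrt(-ta1), y3, y4
x5 = -(y5 + ta2*y2)/ta1

# (3.15) written as F = 0 and (3.16) written as G = 0
F = x5 - (x2**2/(2*x1)
          - x1*(-sp.log(x1) + a3*sp.log(x3) + a4*sp.log(x4)))
G = y5 - (y2**2/(2*y1) - ta2*y2
          - y1*(ta1*sp.log(y1) + ta3*sp.log(y3) + ta4*sp.log(y4)))

# G = -ta1 * F, so the two equations define the same hypersurface
assert sp.expand(F + G/ta1) == 0
```
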

In [16], more information on the canonical centroaffine hypersurface (3.16) is given. This includes the conclusion that the canonical centroaffine hypersurfaces (3.16) carry a Lorentzian centroaffine metric if \({\tilde{\alpha }}_j>0\) for \(v\le j\le n\) and \({\tilde{\alpha }}_1+{\tilde{\alpha }}_v+\cdots +{\tilde{\alpha }}_n>0\).

Next, we recall Corollary 1.1 of Cheng-Hu-Moruz [3] which shows that the centroaffine hypersurfaces described by the preceding examples are the only locally strongly convex canonical centroaffine hypersurfaces in \({\mathbb {R}}^{n+1}\).

Theorem 3.1

([3, 16]). Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\ (n\ge 2)\) be a locally strongly convex canonical centroaffine hypersurface. Then it is locally centroaffinely equivalent to one of the following hypersurfaces:

  (i) \(x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n+1}}_{n+1}=1\), where either \(\alpha _i>0\) for \(1\le i\le n+1\); or \(\alpha _j>0\) for \(2\le j\le n+1\) and \(\alpha _1+\cdots +\alpha _{n+1}<0\);

  (ii) \(x^{\alpha _1}_1x^{\alpha _2}_2\cdots x^{\alpha _{n-1}}_{n-1}(x^2_n+x^2_{n+1})^{ \alpha _n}\exp (\alpha _{n+1}\arctan \tfrac{x_n}{x_{n+1}})=1\), where \(\alpha _i<0\) for \(1\le i\le n-1\) and \(\alpha _1+\cdots +\alpha _{n-1}+2\alpha _n>0\);

  (iii) \(x_{n+1}=\tfrac{1}{2x_1}(x^2_2+\cdots +x^2_{v-1})-x_1(-\ln x_1+\alpha _v\ln x_v +\cdots +\alpha _n\ln x_n)\), where \(2\le v\le n+1\), \(\alpha _i>0\) for \(v\le i\le n\) and \(\alpha _v+\cdots +\alpha _n<1\).

Remark 3.2

Theorem 3.1 complements Theorem 1.1 of [16] by giving all examples with \(N(h)=n\). Since a complete proof of Theorem 3.1 is omitted in [3], we give an outline here: By Theorem 6.1 in [3], all locally strongly convex canonical centroaffine surfaces in \({\mathbb {R}}^3\) are given by (ii)–(v) therein. Then, from the conclusions of Theorem 1.1 and Propositions 3.1 and 3.2 of [3], we see that all higher-dimensional locally strongly convex canonical centroaffine hypersurfaces, except \(x_{n+1}=\tfrac{1}{2x_1}(x^2_2+\cdots +x_n^2)+x_1\ln x_1\), can be produced by the Calabi products of either two lower-dimensional locally strongly convex canonical centroaffine hypersurfaces, or one lower-dimensional locally strongly convex canonical centroaffine hypersurface with a point, defined by (1.3) and (1.4) of [3], respectively, depending on a parameter \(\lambda \). Then, by using Theorem 1.1 of [3] again, and noting that, as building blocks of the Calabi products, the standard immersions of the symmetric spaces \(\mathbf{SL}(m,{\mathbb {R}})/\mathbf{SO}(m)\), \(\mathbf{SL}(m,{\mathbb {C}})/\mathbf{SU}(m)\), \(\mathbf{SU}^*\big (2m\big )/\mathbf{Sp}(m)\) and \(\mathbf{E}_{6(-26)}/\mathbf{F}_4\) do not have flat centroaffine metric (cf. [2]), we finally arrive at the conclusion of Theorem 3.1.

4 Proof of Theorem 1.1

First of all, from (2.15) and (2.16), we immediately get the following lemma.

Lemma 4.1

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) be a locally strongly convex centroaffine Tchebychev hypersurface with constant sectional curvature c. Then it holds that

$$\begin{aligned} \Vert {\hat{\nabla }} H\Vert ^2=c^2\Vert T\Vert ^2, \end{aligned}$$
(4.1)

where \({\hat{\nabla }}\) and \(\Vert \cdot \Vert \) denote the gradient operator and the tensorial norm with respect to the centroaffine metric h, respectively. In particular, if the centroaffine metric is flat, i.e., \(c=0\), the centroaffine mean curvature H is a constant.

Next, before completing the proof of Theorem 1.1, with the notation and index conventions of Sect. 2, we calculate the Laplacian \(\Delta J\) of the centroaffine Pick invariant J.

Lemma 4.2

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) be a locally strongly convex centroaffine hypersurface. Then, with respect to a local h-orthonormal frame \(\{e_1,\ldots ,e_n\}\) of \(M^n\), it holds that

$$\begin{aligned} \begin{aligned} \tfrac{1}{2}n(n-1)\Delta J&=\sum _{i,j,k,l}(K^k_{ij,l})^2+\sum _{i,j,k,l}(R_{ijkl})^2+\sum _{i,j}(R_{ij})^2 -\varepsilon (n+1)R\\&\quad +n\sum _{i,j,k}K^k_{ij}R_{ij}T^k+n\sum _{i,j,k}K^k_{ij}T^k_{,ij}. \end{aligned} \end{aligned}$$
(4.2)

Proof

By definition, we have

$$\begin{aligned} \begin{aligned} \tfrac{1}{2}n(n-1)\Delta J=\tfrac{1}{2}\Delta \left( \sum _{i,j,k}(K^k_{ij})^2\right) =\sum _{i,j,k,l}(K^k_{ij,l})^2+\sum _{i,j,k,l}K^k_{ij}K^k_{ij,ll}. \end{aligned} \end{aligned}$$
(4.3)

By using (2.9) and (2.13), we have the calculations

$$\begin{aligned} \begin{aligned} \sum _{i,j,k,l}K^k_{ij}K^k_{ij,ll}&=\sum _{i,j,k,l}K^k_{ij}K^k_{il,lj} +\sum _{i,j,k,l,m}K^k_{ij}K^k_{ml}R_{mijl}\\&\quad +\sum _{i,j,k,l,m}K^k_{ij}K^k_{im}R_{mljl} +\sum _{i,j,k,l,m}K^k_{ij}K^m_{il}R_{mkjl}\\&=n\sum _{i,j,k}K^k_{ij}T^k_{,ij} +\sum _{i,j,k,l,m}K^k_{ij}K^k_{ml}R_{mijl}\\&\quad +\sum _{i,j,k,m}K^k_{ij}K^k_{im}R_{mj} +\sum _{i,j,k,l,m}K^k_{ij}K^m_{il}R_{mkjl}. \end{aligned} \end{aligned}$$
(4.4)

Now, from the fact

$$\begin{aligned} \begin{aligned} \sum _{i,j,k,l,m}K^k_{ij}K^m_{il}R_{mkjl}&=-\sum _{i,j,k,l,m}K^k_{ij}K^m_{il}R_{kmjl}\\&=-\sum _{i,j,k,l,m}K^k_{mj}K^k_{il}R_{mijl}, \end{aligned} \end{aligned}$$
(4.5)

and (2.8), we obtain

$$\begin{aligned} \begin{aligned}&\sum _{i,j,k,l,m}K^k_{ij}K^k_{ml}R_{mijl} +\sum _{i,j,k,l,m}K^k_{ij}K^m_{il}R_{mkjl}\\&\quad =\sum _{i,j,k,l,m}(K^k_{ij}K^k_{ml}-K^k_{il}K^k_{jm})R_{mijl} =\sum _{i,j,l,m}(R_{imjl})^2-2\varepsilon R. \end{aligned} \end{aligned}$$
(4.6)
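The index identity (4.5) uses only the total symmetry of \(K_{ijk}=h(K_{e_i}e_j,e_k)\) and the antisymmetry of \(R\) in its first two indices. As an independent sanity check (ours, not part of the proof), it can be verified numerically on random tensors carrying exactly these symmetries:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Totally symmetric K_{ijk}: symmetrize a random tensor over all permutations.
A = rng.standard_normal((n, n, n))
K = sum(np.transpose(A, p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0

# R_{mkjl} antisymmetric in its first two indices (the only property (4.5) needs).
B = rng.standard_normal((n, n, n, n))
R = B - np.transpose(B, (1, 0, 2, 3))

# Left side:  sum_{i,j,k,l,m} K^k_{ij} K^m_{il} R_{mkjl}
lhs = np.einsum('ijk,ilm,mkjl->', K, K, R)
# Right side: -sum_{i,j,k,l,m} K^k_{mj} K^k_{il} R_{mijl}
rhs = -np.einsum('mjk,ilk,mijl->', K, K, R)

print(abs(lhs - rhs) < 1e-10)  # the two sides agree
```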

Moreover, by using (2.10), we get

$$\begin{aligned} \sum _{i,j,k,m}K^k_{ij}K^k_{im}R_{mj} =\sum _{i,j}(R_{ij})^2+n\sum _{i,j,k}K^k_{ij}R_{ij}T^k-\varepsilon (n-1)R. \end{aligned}$$
(4.7)

From the above calculations, we conclude that

$$\begin{aligned} \begin{aligned} \sum _{i,j,k,l}K^k_{ij}K^k_{ij,ll}&=\sum _{i,j,l,m}(R_{imjl})^2+\sum _{i,j}(R_{ij})^2-\varepsilon (n+1)R\\&\quad +n\sum _{i,j,k}K^k_{ij}R_{ij}T^k+n\sum _{i,j,k}K^k_{ij}T^k_{,ij}. \end{aligned} \end{aligned}$$
(4.8)

The assertion of the lemma then follows from (4.3) and (4.8). \(\square \)

Completion of the Proof of Theorem 1.1.

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) be a locally strongly convex centroaffine hypersurface with constant sectional curvature c and vanishing centroaffine shape operator \({\mathcal {T}}={\hat{\nabla }} T=0\). Thus, \(H=0\) and \(\Vert T\Vert \) is a constant.
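Both claims follow at once from \({\mathcal {T}}={\hat{\nabla }}T=0\); for completeness:

```latex
$$\begin{aligned} H=\tfrac{1}{n}\,\mathrm{div}\,T=\tfrac{1}{n}\,\mathrm{trace}\,({\hat{\nabla }}T)=0,\qquad X\big(\Vert T\Vert ^2\big)=2h({\hat{\nabla }}_XT,T)=0\ \ \text {for all }X\in TM^n. \end{aligned}$$
```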

If \(T=0\), then the position vector x is proportional to the equiaffine normal; thus \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) is a proper affine hypersphere centered at the origin \(O\in {\mathbb {R}}^{n+1}\). By the Main theorem of Vrancken-Li-Simon [23], we conclude that \(x(M^n)\) is centroaffinely equivalent to either a hyperquadric, which is a proper affine hypersphere centered at the origin, or the hypersurface \(x_1x_2\cdots x_{n+1}=1\) in \({\mathbb {R}}^{n+1}\).

If \(T\ne 0\), it is seen from Lemma 4.1 and the fact \(H=0\) that \(c=0\), i.e., \((M^n,h)\) is flat. It follows from (2.11) that J is a constant. Moreover, since \((M^n,h)\) is flat, all curvature terms in (4.2) vanish, and \({\mathcal {T}}={\hat{\nabla }} T=0\) implies \(T^k_{,ij}=0\); hence, applying Lemma 4.2, we obtain

$$\begin{aligned} \begin{aligned} 0=\tfrac{1}{2}n(n-1)\Delta J=\sum _{i,j,k,l}(K^k_{ij,l})^2+n\sum _{i,j,k}K^k_{ij}T^k_{,ij} =\sum _{i,j,k,l}(K^k_{ij,l})^2, \end{aligned} \end{aligned}$$

and therefore, \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) is of parallel difference tensor.

Hence, \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) is a locally strongly convex canonical centroaffine hypersurface. Now, combining with Theorem 3.1, we have completed the proof of Theorem 1.1. \(\square \)

5 Construction of a Typical h-orthonormal Frame

From now on, we assume that \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) is a flat hyperbolic centroaffine hypersurface. Then, from (2.11), for any point \(p\in M^n\), we have \(K(p)\ne 0\).

In this section, we will construct a typical h-orthonormal frame field on \(M^n\) that should be of independent interest. First of all, we fix an arbitrary \(p\in M^n\) and construct a canonical basis of \(T_pM^n\).

Lemma 5.1

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\ (n\ge 2)\) be a flat hyperbolic centroaffine hypersurface. Then, at any \(p\in M^n\), there exists an h-orthonormal basis \(\{e_i\}^n_{i=1}\) of \(T_pM^n\) such that the difference tensor K takes the following form at p:

$$\begin{aligned} \left\{ \begin{aligned}&K_{e_1}e_1=\lambda _1e_1,\ \ K_{e_1}e_k=\mu _1e_k,\\&K_{e_k}e_k=\mu _1e_1+\cdots +\mu _{k-1}e_{k-1}+\lambda _ke_k,\\&K_{e_i}e_j=\mu _ie_j,\ \ 2\le i<j\le n,\ \ 2\le k\le n, \end{aligned}\right. \end{aligned}$$
(5.1)

where the coefficients satisfy the following relations:

$$\begin{aligned} \left\{ \begin{aligned}&\mu _1=\tfrac{1}{2}\left( \lambda _1-\sqrt{\lambda ^2_1+4}\,\right) ,\ \ \lambda _k>0,\ \ \mu _k\ne 0,\ \ 1\le k\le n-1,\\&\mu _\ell =\tfrac{1}{2}\left( \lambda _\ell -\sqrt{\lambda ^2_\ell +4(1+\mu ^2_1+\cdots +\mu ^2_{\ell -1})}\,\right) ,\ \ 2\le \ell \le n-1. \end{aligned}\right. \end{aligned}$$
(5.2)

Proof

The argument is by now standard; readers unfamiliar with it are referred to [4, 23].

Denote \(U^1_pM^n=\{u\in T_pM^n\,|\,h(u,u)=1\}\), which is a compact set, and define a continuous function \(f^1(u)=h(K_uu,u)\) on \(U^1_pM^n\). Since \(K(p)\ne 0\), the function \(f^1\) attains its absolute maximum at some \(e_1\in U^1_pM^n\) with \(\lambda _1:=f^1(e_1)>0\). This implies that \(K_{e_1}e_1=\lambda _1e_1\). Then, considering the self-adjoint map \(K_{e_1}:T_pM^n\rightarrow T_pM^n\), we obtain an h-orthonormal basis \(\{e_i\}^n_{i=1}\) of \(T_pM^n\), consisting of eigenvectors of \(K_{e_1}\) with associated eigenvalues \(\lambda _1\) and \(\lambda _{1j}\ (2\le j\le n)\) satisfying

$$\begin{aligned} K_{e_1}e_1=\lambda _1e_1,\ \ K_{e_1}e_j=\lambda _{1j}e_j,\ \ \lambda _1\ge 2\lambda _{1j},\ \ 2\le j\le n. \end{aligned}$$

Then, taking in (2.6) \(X=e_1,Y=Z=e_j\) for \(j\ge 2\), as \(\varepsilon =-1\), we can get

$$\begin{aligned} \lambda ^2_{1j}-\lambda _{1j}\lambda _1-1=0,\ \ 2\le j\le n. \end{aligned}$$

Given that \(\lambda _1\ge 2\lambda _{1j}\), we have \(\lambda _{12}=\cdots =\lambda _{1n}=\tfrac{1}{2}\big (\lambda _1-\sqrt{\lambda ^2_1+4}\,\big )=:\mu _1\not =0\).
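As a quick symbolic check (ours, not part of the original argument), one may verify with sympy that \(\mu _1\) as defined is indeed a root of \(t^2-\lambda _1t-1=0\), while the other root is excluded by \(\lambda _1\ge 2\lambda _{1j}\):

```python
import sympy as sp

lam = sp.symbols('lambda_1', positive=True)
mu = (lam - sp.sqrt(lam**2 + 4)) / 2       # the claimed common value mu_1
other = (lam + sp.sqrt(lam**2 + 4)) / 2    # the other root of t^2 - lam*t - 1

# mu_1 solves the quadratic lambda_{1j}^2 - lambda_1*lambda_{1j} - 1 = 0 ...
print(sp.expand(mu**2 - lam*mu - 1))       # 0

# ... the product of the two roots is -1, so mu_1 is the (nonzero) negative one,
# and the other root violates lambda_1 >= 2*lambda_{1j}:
print(sp.expand(mu * other + 1))           # 0
print((other - lam/2).is_positive)         # True
```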

Next, denote \(T^2_pM^n=\text {Span}\{e_2,\ldots ,e_n\}\) and \(U^2_pM^n=\{u\in T^2_pM^n\,|\,h(u,u)=1\}\). Then, the function \(f^2(u)=h(K_uu,u)\) defined on \(U^2_pM^n\) attains its absolute maximum at some element \(e_2\in U^2_pM^n\), so that \(\lambda _2:=f^2(e_2)\ge 0\). This implies that \(K_{e_2}e_2=\mu _1e_1+\lambda _2e_2\).

Consider now the self-adjoint operator \({\mathfrak {B}}_2:T^2_pM^n\rightarrow T^2_pM^n\) defined by

$$\begin{aligned} {\mathfrak {B}}_2X=K_{e_2}X-h(K_{e_2}X,e_1)e_1,\ \ X\in T^2_pM^n. \end{aligned}$$

We can choose an h-orthonormal basis of \(\{e_1,e_2\}^\perp \), still denoted \(\{e_i\}^n_{i=3}\), consisting of eigenvectors of \({\mathfrak {B}}_2\) with associated eigenvalues \(\{\lambda _{2i}\}_{i=3}^n\) satisfying

$$\begin{aligned} K_{e_2}e_2=\mu _1e_1+\lambda _2e_2,\ \ K_{e_2}e_j=\lambda _{2j}e_j,\ \ \lambda _2\ge 2\lambda _{2j},\ \ 3\le j\le n. \end{aligned}$$

Then, taking in (2.6) \(X=e_2,Y=Z=e_j\) for \(j\ge 3\), we get

$$\begin{aligned} \lambda ^2_{2j}-\lambda _{2j}\lambda _{2}-(1+\mu ^2_1)=0,\ \ 3\le j\le n, \end{aligned}$$

which implies \(\lambda _2>0\) (indeed, if \(\lambda _2=0\), then the odd function \(f^2\) would attain the maximum 0 and hence vanish identically, so that \(\lambda _{2j}=h(K_{e_2}e_j,e_j)=0\), contradicting the above equation since \(1+\mu ^2_1>0\)). Therefore, as \(\lambda _2\ge 2\lambda _{2j}\), the above equation implies that \(\lambda _{23}=\cdots =\lambda _{2n}=\tfrac{1}{2} \big (\lambda _2-\sqrt{\lambda ^2_2+4(1+\mu ^2_1)}\,\big )=:\mu _2\not =0\).

Repeating the above procedure inductively, we then construct an h-orthonormal basis \(\{e_i\}^n_{i=1}\) of \(T_pM^n\) for which K takes the form (5.1) with coefficients satisfying (5.2). \(\square \)

Next, we are going to extend the above h-orthonormal basis \(\{e_i\}^n_{i=1}\) of \(T_pM^n\) to a local h-orthonormal frame field \(\{E_i\}^n_{i=1}\) on a neighbourhood U of \(p\in M^n\).

Lemma 5.2

Let \(x:M^n\rightarrow {\mathbb {R}}^{n+1}\) be a flat hyperbolic centroaffine hypersurface. For \(p\in M^n\), there exists a local h-orthonormal frame field \(\{E_i\}^n_{i=1}\) on a neighbourhood U of \(p\in M^n\) such that the difference tensor K takes the following form:

$$\begin{aligned} \left\{ \begin{aligned}&K_{E_1}E_1=f_1E_1,\ \ K_{E_1}E_k=g_1E_k,\\&K_{E_k}E_k=g_1E_1+\cdots +g_{k-1}E_{k-1}+f_kE_k,\\&K_{E_i}E_j=g_iE_j,\ \ 2\le i<j\le n,\ \ 2\le k\le n, \end{aligned} \right. \end{aligned}$$
(5.3)

where, the functions \(\{f_i,g_k\}\) satisfy the relations:

$$\begin{aligned} \left\{ \begin{aligned}&g_1=\tfrac{1}{2}\left( f_1-\sqrt{f^2_1+4}\,\right) ,\ \ f_k>0,\ \ g_k\ne 0,\ \ 1\le k\le n-1,\\&g_\ell =\tfrac{1}{2}\left( f_\ell -\sqrt{f^2_\ell +4(1+g^2_1+\cdots +g^2_{\ell -1})}\,\right) ,\ \ 2\le \ell \le n-1. \end{aligned}\right. \end{aligned}$$
(5.4)

Proof

We divide the proof into three steps.

Step 1. There exists a unit vector field \(E_1\) on some neighborhood \(U'\) of p such that

$$\begin{aligned} K_{E_1}E_1=f_1E_1,\ \ K_{E_1}Y=g_1Y,\ \ \forall \,Y\in \{E_1\}^\bot , \end{aligned}$$
(5.5)

where \(f_1>0,\ g_1=\tfrac{1}{2}\big (f_1-\sqrt{f^2_1+4}\,\big )\).

For the h-orthonormal basis \(\{e_i\}_{i=1}^n\) of \(T_pM^n\) as described in Lemma 5.1, we choose an arbitrary local differentiable h-orthonormal frame field \(\{{\tilde{E}}_1,\ldots ,{\tilde{E}}_n\}\) on a neighborhood \({\tilde{U}}\) of p such that \({\tilde{E}}_i(p)=e_i\) for \(1\le i\le n\). Then, we define a mapping

$$\begin{aligned} \varphi :{\mathbb {R}}^n\times {\tilde{U}}\rightarrow {\mathbb {R}}^n\ \ \text {by}\ \ \varphi (a_1,a_2,\ldots ,a_n,q)=(b_1,b_2,\ldots ,b_n), \end{aligned}$$

where

$$\begin{aligned} b_k:=\sum ^n_{i,j=1}a_ia_jh(K_{{\tilde{E}}_i}{\tilde{E}}_j,{\tilde{E}}_k)-\lambda _1a_k,\ \ 1\le k\le n, \end{aligned}$$
(5.6)

are regarded as functions on \({\mathbb {R}}^n\times {\tilde{U}}:\ b_k=b_k(a_1,a_2,\ldots ,a_n,q)\).

It is easily checked from (5.1) that \(b_k(1,0,\ldots ,0,p)=0\) for \(1\le k\le n\), and

$$\begin{aligned} \frac{\partial b_k}{\partial a_j}\Big |_{(1,0,\ldots ,0,p)}=\left\{ \begin{aligned}&\lambda _1>0,\qquad \qquad \ k=j=1,\\&2\mu _1-\lambda _1\not =0,\ \ \ \ \ 2\le k=j\le n,\\&0,\qquad \qquad \qquad \ \ \ k\ne j. \end{aligned} \right. \end{aligned}$$
(5.7)

Thus, \(\big (\tfrac{\partial b_k}{\partial a_j}\big )\) at the point \((1,0,\ldots ,0,p)\in {\mathbb {R}}^n\times {\tilde{U}}\) is invertible. By the implicit function theorem there exist differentiable functions \(\{a_i(q)\}_{1\le i\le n}\), defined on a neighborhood \({\tilde{U}}'\subset {\tilde{U}}\) of p such that \(a_1(p)=1,\ a_j(p)=0\) for \(2\le j\le n\), and

$$\begin{aligned} b_k(a_1(q),a_2(q),\ldots ,a_n(q),q)\equiv 0,\ \ 1\le k\le n,\ \ \forall \,q\in {\tilde{U}}'. \end{aligned}$$
(5.8)

Put \(V=\mathop {\sum }\nolimits ^n_{i=1}a_i{\tilde{E}}_i\). Then \(V(p)=e_1\), and (5.6) and (5.8) imply that \(K_VV=\lambda _1V\).

Since \(\Vert V(p)\Vert =\sqrt{h(V(p),V(p))}=1\), there exists a neighborhood \(U'\subset {\tilde{U}}'\) of p such that \(V\ne 0\) on \(U'\). Hence, the unit vector field \(E_1:=V/\Vert V\Vert \) on \(U'\) with \(E_1(p)=e_1\) satisfies

$$\begin{aligned} K_{E_1}E_1=f_1E_1,\ \ f_1=\tfrac{\lambda _1}{\Vert V\Vert }. \end{aligned}$$
(5.9)

(5.9) implies that the distribution \(\{E_1\}^\bot \) orthogonal to \(\text {Span}\{E_1\}\) is \(K_{E_1}\)-invariant on \(U'\). For any \(q\in U'\), let \(\{X_j\}_{2\le j\le n}\subset \{E_1(q)\}^\bot \) be the orthonormal eigenvectors of \(K_{E_1(q)}\) with associated eigenvalues \(\{\alpha _j\}_{2\le j\le n}\): \(K_{E_1(q)}X_j=\alpha _jX_j\) for \(2\le j\le n\). Then, by using \({\hat{R}}(X_j,E_1(q))E_1(q)=0\), the Gauss equation (2.6) and \(\varepsilon =-1\), we derive

$$\begin{aligned} \alpha ^2_j-f_1(q)\alpha _j-1=0,\ \ 2\le j\le n. \end{aligned}$$
(5.10)

It follows that \(K_{E_1(q)}\) has at most three distinct eigenvalues as below:

$$\begin{aligned} \tfrac{1}{2}\left( f_1(q)-\sqrt{f^2_1(q)+4}\,\right) ,\ \ f_1(q),\ \ \tfrac{1}{2}\left( f_1(q)+\sqrt{f^2_1(q)+4}\,\right) . \end{aligned}$$

On the other hand, by Lemma 5.1 and the continuity of the eigenvalue functions of \(K_{E_1}\), we see that the eigenvalue \(\tfrac{1}{2}\big (f_1(q)+\sqrt{f^2_1(q)+4}\,\big )\) cannot be attained. This completes the proof of Step 1.
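Numerically, the implicit-function-theorem step above amounts to solving the quadratic system \(b_k=0\) of (5.6) for the coefficients \(a_i\) by Newton iteration, starting from the known solution \((1,0,\ldots ,0)\) at p. The following toy sketch illustrates this with random data (the tensor below is an assumption for illustration, not the difference tensor of any actual hypersurface):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam1 = 4, 2.0

# Toy components C[i,j,k] ~ h(K_{E_i}E_j, E_k): totally symmetric, a small
# perturbation of a tensor for which a = (1,0,...,0) solves b = 0 exactly.
C = 0.05 * rng.standard_normal((n, n, n))
C = sum(np.transpose(C, p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
C[0, 0, 0] += lam1

def b(a):
    # b_k = sum_{i,j} a_i a_j C[i,j,k] - lam1 * a_k, as in (5.6)
    return np.einsum('i,j,ijk->k', a, a, C) - lam1 * a

def Jb(a):
    # Jacobian (d b_k / d a_m); by total symmetry of C it is
    # 2 * sum_j a_j C[m,j,k] - lam1 * delta_{km}, invertible near e_1 as in (5.7)
    return 2.0 * np.einsum('j,mjk->km', a, C) - lam1 * np.eye(n)

a = np.array([1.0] + [0.0] * (n - 1))
for _ in range(20):                       # Newton iteration
    a -= np.linalg.solve(Jb(a), b(a))

print(np.linalg.norm(b(a)) < 1e-10)       # a root of b = 0 near e_1 is found
```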

Step 2. There exists a unit vector field \(E_2\in \{E_1\}^\bot \) on some neighborhood \({\tilde{U}}\subset U'\) of p such that

$$\begin{aligned} K_{E_2}E_2=g_1E_1+f_2E_2,\ \ K_{E_2}Y=g_2Y,\ \ \forall \,Y\in \{E_1,E_2\}^\bot , \end{aligned}$$
(5.11)

where \(g_2=\tfrac{1}{2}\big (f_2-\sqrt{f^2_2+4(1+g^2_1)}\,\big )\).

Similar to Step 1, we begin by choosing arbitrary h-orthonormal vector fields \(\{{\tilde{E}}'_2,\ldots ,{\tilde{E}}'_n\}\) of \(\{E_1\}^\bot \) on some neighborhood of p, still denoted by \(U'\), such that \({\tilde{E}}'_j(p)=e_j\) for \(2\le j\le n\). Then, we define a mapping

$$\begin{aligned} \tilde{\varphi }:{\mathbb {R}}^{n-1}\times {U}'\rightarrow {\mathbb {R}}^{n-1}\ \ \text {by}\ \ \tilde{\varphi }({\tilde{a}}_2,{\tilde{a}}_3,\ldots ,{\tilde{a}}_n,{\tilde{q}})=({\tilde{b}}_2, {\tilde{b}}_3,\ldots ,{\tilde{b}}_n), \end{aligned}$$

where

$$\begin{aligned} {\tilde{b}}_k=\sum ^n_{i,j=2}{\tilde{a}}_i{\tilde{a}}_jh(K_{{\tilde{E}}'_i}{\tilde{E}}'_j, {\tilde{E}}'_k)-\lambda _2{\tilde{a}}_k,\ \ 2\le k\le n, \end{aligned}$$
(5.12)

are regarded as functions on \({\mathbb {R}}^{n-1}\times {U}':{\tilde{b}}_k ={\tilde{b}}_k({\tilde{a}}_2,{\tilde{a}}_3,\ldots ,{\tilde{a}}_n,{\tilde{q}})\).

As \({\tilde{E}}'_j(p)=e_j\), from (5.1), we see that \({\tilde{b}}_k (1,0,\ldots ,0,p)=0\) for \(2\le k\le n\) and

$$\begin{aligned} \frac{\partial {\tilde{b}}_k}{\partial {\tilde{a}}_j}\Big |_{(1,0,\ldots ,0,p)}=\left\{ \begin{aligned}&\lambda _2>0,\qquad \qquad k=j=2,\\&2\mu _2-\lambda _2\not =0,\quad 3\le k=j\le n,\\&0,\qquad \qquad \qquad \ k\ne j. \end{aligned} \right. \end{aligned}$$
(5.13)

Thus, \(\big (\tfrac{\partial {\tilde{b}}_k}{\partial {\tilde{a}}_j}\big )\) at the point \((1,0,\ldots ,0,p)\in {\mathbb {R}}^{n-1}\times {U}'\) is invertible. By the implicit function theorem, there exist differentiable functions \(\{{\tilde{a}}_j({\tilde{q}})\}_{2\le j\le n}\) defined on a neighborhood \({\tilde{U}}'\subset {U}'\) of p such that \({\tilde{a}}_2(p)=1\), \({\tilde{a}}_r(p)=0\) for \(3\le r\le n\) and

$$\begin{aligned} {\tilde{b}}_k({\tilde{a}}_2({\tilde{q}}),\ldots ,{\tilde{a}}_n({\tilde{q}}), {\tilde{q}})\equiv 0,\ \ 2\le k\le n,\ \ \forall \,q\in {\tilde{U}}'. \end{aligned}$$
(5.14)

Put \({\tilde{V}}=\mathop {\sum }\nolimits ^n_{j=2}{\tilde{a}}_j{\tilde{E}}'_j\). Then \({\tilde{V}}(p)=e_2\). From (5.5), (5.12) and (5.14) we obtain

$$\begin{aligned} K_{{\tilde{V}}}{\tilde{V}}=\lambda _2{\tilde{V}}+g_1h({\tilde{V}},{\tilde{V}})E_1. \end{aligned}$$

As \(\Vert {\tilde{V}}(p)\Vert =1\), there exists a neighborhood \({\tilde{U}} \subset {\tilde{U}}'\) of p such that \({\tilde{V}}\ne 0\) on \({\tilde{U}}\). Then \(E_2:={\tilde{V}}/\Vert {\tilde{V}}\Vert \) defines a unit vector field over \({\tilde{U}}\) with \(E_2(p)=e_2\) which satisfies

$$\begin{aligned} K_{E_2}E_2=g_1E_1+f_2E_2,\ \ f_2=\tfrac{\lambda _2}{\Vert {\tilde{V}}\Vert }. \end{aligned}$$
(5.15)

At any point \({q}\in {\tilde{U}}\), we then consider \({\mathfrak {B}}_2(q):\{E_1(q)\}^\perp \rightarrow \{E_1(q)\}^\bot \), defined by

$$\begin{aligned} {\mathfrak {B}}_2(q)X=K_{E_2(q)}X-h(K_{E_2(q)}X,E_1(q))E_1(q),\ \ X\in \{E_1(q)\}^\bot , \end{aligned}$$

which is a self-adjoint transformation. Besides \(E_2\), let \({\tilde{X}}_r\) be the other unit eigenvectors of \({\mathfrak {B}}_2(q)\) corresponding to eigenvalues \(\tilde{\alpha }_r\ (3\le r\le n)\). Then, it holds at q:

$$\begin{aligned} K_{E_2}E_2=g_1E_1+f_2E_2,\ \ K_{E_2}{\tilde{X}}_r=\tilde{\alpha }_r{\tilde{X}}_r,\ \ 3\le r\le n. \end{aligned}$$

Now, using \({\hat{R}}({\tilde{X}}_r,E_2({q}))E_2({q})=0\) and the Gauss equation (2.6) with \(\varepsilon =-1\), we get

$$\begin{aligned} \tilde{\alpha }^2_r-f_2\tilde{\alpha }_r-(1+g^2_1)=0,\ \ 3\le r\le n. \end{aligned}$$
(5.16)

This shows that \({\mathfrak {B}}_2(q)\) has at most three distinct eigenvalues

$$\begin{aligned} \tfrac{1}{2}\left( f_2-\sqrt{f^2_2+4(1+g^2_1)}\,\right) ,\ \ f_2,\ \ \tfrac{1}{2}\left( f_2+\sqrt{f^2_2+4(1+g^2_1)}\,\right) . \end{aligned}$$

On the other hand, from (5.1) and (5.2) we have \(\lambda _2>2\mu _2\) at p, and by continuity of the eigenvalue functions of \({\mathfrak {B}}_2(q)\) for \(q\in {\tilde{U}}\), we see that \(\tfrac{1}{2}\big (f_2+\sqrt{f^2_2+4(1+g^2_1)}\,\big )\) cannot be attained. Thus, for any \(Y\in \{E_1,E_2\}^\bot \) over \({\tilde{U}}\), it holds

$$\begin{aligned} K_{E_2}Y=g_2Y,\ \ g_2=\tfrac{1}{2}\left( f_2-\sqrt{f^2_2+4(1+g^2_1)}\,\right) . \end{aligned}$$
(5.17)

This completes the proof of Step 2.

Step 3. Completion of the proof of Lemma 5.2.

After the construction of \(E_1\) and \(E_2\) satisfying (5.5) and (5.11) in the preceding two Steps, we can inductively prove the existence of \(\{E_i\}_{3\le i\le n}\) such that (5.3) and (5.4) hold.

Indeed, for any \(2\le j\le n-2\), having constructed h-orthonormal vector fields \(\{E_1,\ldots ,E_j\}\) over a neighborhood \({\bar{U}}\) of p which satisfy the corresponding relations in (5.3) and (5.4), we choose arbitrary h-orthonormal vector fields \(\{{\bar{E}}_{j+1},\ldots , {\bar{E}}_n\}\) of \(\{E_1,\ldots ,E_j\}^\bot \) on \({\bar{U}}\) such that \({\bar{E}}_i(p)=e_i\) for \(j+1\le i\le n\). Then, we define a mapping

$$\begin{aligned} \bar{\varphi }_j:{\mathbb {R}}^{n-j}\times {\bar{U}}\rightarrow {\mathbb {R}}^{n-j}\ \ \text {by}\ \ \bar{\varphi }_j({\bar{a}}_{j+1},{\bar{a}}_{j+2},\ldots ,{\bar{a}}_n,{\bar{q}}) =({\bar{b}}_{j+1},{\bar{b}}_{j+2},\ldots ,{\bar{b}}_n), \end{aligned}$$

where

$$\begin{aligned} {\bar{b}}_k=\sum ^n_{r,s=j+1}{\bar{a}}_r{\bar{a}}_sh(K_{{\bar{E}}_r}{\bar{E}}_s, {\bar{E}}_k)-\lambda _{j+1}{\bar{a}}_k,\ \ j+1\le k\le n, \end{aligned}$$
(5.18)

are regarded as functions on \({\mathbb {R}}^{n-j}\times {\bar{U}}:{\bar{b}}_k ={\bar{b}}_k({\bar{a}}_{j+1},{\bar{a}}_{j+2},\ldots ,{\bar{a}}_n,{\bar{q}})\).

As \({\bar{E}}_i(p)=e_i\), from (5.1), we see that \({\bar{b}}_k(1,0,\ldots ,0,p)=0\) for \(j+1\le k\le n\) and

$$\begin{aligned} \frac{\partial {\bar{b}}_k}{\partial {\bar{a}}_l}\Big |_{(1,0,\ldots ,0,p)}=\left\{ \begin{aligned}&\lambda _{j+1}>0,\qquad \qquad \ k=l=j+1,\\&2\mu _{j+1}-\lambda _{j+1}\not =0,\ \ k=l\ge j+2,\\&0,\qquad \qquad \qquad \quad \ \ k\ne l. \end{aligned} \right. \end{aligned}$$
(5.19)

By the implicit function theorem, there exist differentiable functions \(\{{\bar{a}}_i({\bar{q}})\}_{j+1\le i\le n}\), defined on a neighborhood \({\bar{U}}'\subset {\bar{U}}\) of p such that \({\bar{a}}_t(p)=0\) for \(j+2\le t\le n\), \({\bar{a}}_{j+1}(p)=1\) and

$$\begin{aligned} {\bar{b}}_k({\bar{a}}_{j+1}(q),\ldots ,{\bar{a}}_n(q),q)\equiv 0,\ \ j+1\le k\le n,\ \ \forall \,q\in {\bar{U}}'. \end{aligned}$$
(5.20)

Put \({\bar{V}}=\mathop {\sum }\nolimits ^n_{k=j+1}{\bar{a}}_k{\bar{E}}_k\). Then \({\bar{V}}(p)=e_{j+1}\). By (5.18) and (5.20), we can derive that

$$\begin{aligned} K_{{\bar{V}}}{\bar{V}}=\lambda _{j+1}{\bar{V}}+g_1h({\bar{V}},{\bar{V}})E_1+\cdots +g_jh({\bar{V}},{\bar{V}})E_j. \end{aligned}$$

Since \(\Vert {\bar{V}}(p)\Vert =1\), there exists a neighborhood \(U\subset {\bar{U}}'\) of p such that \({\bar{V}}\ne 0\) on U. Then \(E_{j+1}:={\bar{V}}/\Vert {\bar{V}}\Vert \) defines a unit vector field over U with \(E_{j+1}(p)=e_{j+1}\) which satisfies

$$\begin{aligned} K_{E_{j+1}}E_{j+1}=g_1E_1+\cdots +g_jE_j+f_{j+1}E_{j+1},\ \ f_{j+1}=\tfrac{\lambda _{j+1}}{\Vert {\bar{V}}\Vert }. \end{aligned}$$
(5.21)

At any point \(q\in U\), we then consider

$$\begin{aligned} {\mathfrak {B}}_{j+1}(q):\{E_1(q),\ldots ,E_j(q)\}^\perp \rightarrow \{E_1(q),\ldots ,E_j(q)\}^\bot , \end{aligned}$$

defined by

$$\begin{aligned} \begin{aligned} {\mathfrak {B}}_{j+1}(q)X=K_{E_{j+1}}X-h(K_{E_{j+1}}X,E_1)E_1-\cdots -h(K_{E_{j+1}}X,E_j)E_j, \end{aligned} \end{aligned}$$

which is a self-adjoint transformation. Besides \(E_{j+1}\), let \({\bar{X}}_t\) be the other unit eigenvectors of \({\mathfrak {B}}_{j+1}(q)\) corresponding to eigenvalues \(\bar{\alpha }_t\ (j+2\le t\le n)\). Then, it holds at q:

$$\begin{aligned} K_{E_{j+1}}E_{j+1}=g_1E_1+\cdots +g_jE_j+f_{j+1}E_{j+1},\ \ K_{E_{j+1}}{\bar{X}}_t=\bar{\alpha }_t{\bar{X}}_t,\ \ j+2\le t\le n. \end{aligned}$$

Now, using \({\hat{R}}({\bar{X}}_t,E_{j+1}(q))E_{j+1}(q)=0\) and (2.6) with \(\varepsilon =-1\), we get

$$\begin{aligned} \bar{\alpha }^2_t-f_{j+1}\bar{\alpha }_t-(1+g^2_1+\cdots +g^2_j)=0,\ \ j+2\le t\le n. \end{aligned}$$
(5.22)

From (5.22), we see that \({\mathfrak {B}}_{j+1}(q)\) has at most three distinct eigenvalues

$$\begin{aligned}&\tfrac{1}{2}\left( f_{j+1}-\sqrt{f^2_{j+1}+4(1+g^2_1+\cdots +g^2_j)}\,\right) ,\ \ f_{j+1},\ \\&\tfrac{1}{2}\left( f_{j+1}+\sqrt{f^2_{j+1}+4(1+g^2_1+\cdots +g^2_j)}\,\right) . \end{aligned}$$

On the other hand, from (5.1) and (5.2) we have \(\lambda _{j+1}>2\mu _{j+1}\) at p, and by continuity of the eigenvalue functions of \({\mathfrak {B}}_{j+1}(q)\) for \(q\in U\), we see that \(\tfrac{1}{2}\big (f_{j+1}+\sqrt{f^2_{j+1}+4(1+g^2_1+\cdots +g^2_j)}\,\big )\) cannot be attained. Thus, for any \(Y\in \{E_1,\ldots ,E_{j+1}\}^\bot \) over U, it holds

$$\begin{aligned} K_{E_{j+1}}Y=g_{j+1}Y,\ \ g_{j+1}=\tfrac{1}{2}\left( f_{j+1}-\sqrt{f^2_{j+1}+4(1+g^2_1+\cdots +g^2_j)}\,\right) .\nonumber \\ \end{aligned}$$
(5.23)

In summary, by induction we extend the vectors \(\{e_1,e_2,\ldots ,e_{n-1}\}\) to h-orthonormal vector fields \(\{E_1,E_2,\ldots ,E_{n-1}\}\) which satisfy the corresponding relations in (5.3) and (5.4). Finally, we choose a unit vector field \(E_n\in \{E_1,E_2,\ldots ,E_{n-1}\}^\bot \) so that \(\{E_i\}^n_{i=1}\) forms a local h-orthonormal frame on a neighbourhood \(U\subset M^n\) of p. Obviously, with respect to \(\{E_i\}^n_{i=1}\), the difference tensor K takes the form in (5.3) and (5.4), where \(f_i>0\ (1\le i\le n-1)\) follows from continuity and the fact that \(f_i(p)=\lambda _i>0\).

This completes the proof of Lemma 5.2. \(\square \)

6 Applying the Codazzi Equations

From this section on, we consider the case \(n=4\) and assume that \(x:M^4\rightarrow {\mathbb {R}}^5\) is a flat hyperbolic centroaffine hypersurface. With respect to the typical h-orthonormal frame field \(\{E_1,E_2,E_3,E_4\}\) described in Lemma 5.2, we shall derive, by applying the Codazzi equation (2.7), the necessary information on the derivatives \(\{E_i(g_j)\}\) and the connection coefficients \(\{\Gamma ^k_{ij}\}\) of \({\hat{\nabla }}\). The computations below are complicated but straightforward.

Put \({\hat{\nabla }}_{E_i}E_j=\mathop {\sum }\nolimits _{k=1}^4\Gamma ^k_{ij}E_k\), where \(\Gamma ^k_{ij}+\Gamma ^j_{ik}=0\) for \(1\le i,j,k\le 4\).
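The skew-symmetry \(\Gamma ^k_{ij}+\Gamma ^j_{ik}=0\) is simply metric compatibility of \({\hat{\nabla }}\) in the h-orthonormal frame:

```latex
$$\begin{aligned} 0=E_i\,h(E_j,E_k)=h({\hat{\nabla }}_{E_i}E_j,E_k)+h(E_j,{\hat{\nabla }}_{E_i}E_k)=\Gamma ^k_{ij}+\Gamma ^j_{ik}. \end{aligned}$$
```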

Then, taking in (2.7) \(X=E_i\), \(Y=E_j\) and \(Z=E_k\) for \(1\le i,j,k\le 4\), we get

$$\begin{aligned} \begin{aligned}&{\hat{\nabla }}_{E_i}K(E_j,E_k)-K({\hat{\nabla }}_{E_i}E_j,E_k)-K(E_j,{\hat{\nabla }}_{E_i}E_k)\\&\quad -{\hat{\nabla }}_{E_j}K(E_i,E_k)+K({\hat{\nabla }}_{E_j}E_i,E_k)+K(E_i,{\hat{\nabla }}_{E_j}E_k)=0. \end{aligned} \end{aligned}$$
(6.1)

First of all, for \((i,j,k)=(1,2,1)\), we obtain from (6.1) and (5.3) that

$$\begin{aligned} \begin{aligned}&(1+g^2_1)^2g_1^{-2}(g_1\Gamma ^2_{11}+E_2(g_1))E_1\\&\quad -\left[ g_1^{-1}(g^2_1+1)\Gamma ^2_{21}+g_2^{-1}(g^2_1+1)\Gamma ^2_{11}-g_2\Gamma ^2_{11}+E_1(g_1)\right] E_2\\&\quad +\left[ g_2\Gamma ^3_{11}-g_1^{-1}(1+g^2_1)\Gamma ^3_{21}\right] E_3 +\left[ g_2\Gamma ^4_{11}-g_1^{-1}(1+g^2_1)\Gamma ^4_{21}\right] E_4=0. \end{aligned} \end{aligned}$$

Hence, we get

$$\begin{aligned} \left\{ \begin{aligned}&\Gamma ^3_{11}=(g_1g_2)^{-1}(1+g^2_1)\Gamma ^3_{21},\ \ \Gamma ^4_{11}=(g_1g_2)^{-1}(1+g^2_1)\Gamma ^4_{21},\\&E_1(g_1)=-g_1^{-1}(g^2_1+1)\Gamma ^2_{21}-g_2^{-1}({g^2_1}+1) \Gamma ^2_{11}+g_2\Gamma ^2_{11},\ \ E_2(g_1)=-g_1\Gamma ^2_{11}. \end{aligned}\right. \nonumber \\ \end{aligned}$$
(6.2)

Similarly, letting \((i,j,k)=(1,2,2)\) in (6.1), and taking account of (6.2), we obtain

$$\begin{aligned} \begin{aligned} \Gamma ^3_{21}=\Gamma ^3_{12},\ \ \Gamma ^4_{21}=\Gamma ^4_{12},\ \ E_1(g_2)=-2g_1\Gamma ^2_{11}-g_2\Gamma ^2_{21}, \end{aligned} \end{aligned}$$
(6.3)

whereas letting \((i,j,k)=(1,3,1)\) in (6.1), and taking account of (6.2), we obtain

$$\begin{aligned} \left\{ \begin{aligned} \Gamma ^3_{31}&=(g_2g_3)^{-1}\left[ g_3(g_1\Gamma ^2_{11}+g_2\Gamma ^2_{21})-(1+g^2_1+g^2_2-g^2_3)\Gamma ^3_{21}\right] ,\\ \Gamma ^2_{31}&=\Gamma ^3_{21},\ \ g_2\Gamma ^4_{31}=g_3\Gamma ^4_{21},\ \ E_3(g_1)=-g_2^{-1}(1+g^2_1)\Gamma ^3_{12}. \end{aligned}\right. \end{aligned}$$
(6.4)

Letting \((i,j,k)=(1,3,4)\) in (6.1), and taking account of (6.2)–(6.4), we obtain

$$\begin{aligned} \begin{aligned} E_1(g_3)=-g_2^{-1}\left[ g_1g_3\Gamma ^2_{11}+g_2g_3\Gamma ^2_{21}+(1+g^2_1+g^2_2+g^2_3) \Gamma ^3_{21}\right] . \end{aligned} \end{aligned}$$
(6.5)

Letting \((i,j,k)=(1,4,1)\) in (6.1), and taking account of (6.2), we obtain

$$\begin{aligned} \left\{ \begin{aligned} \Gamma ^4_{41}&=\Gamma ^2_{21}+g_2^{-1}(g_1\Gamma ^2_{11}+g_3\Gamma ^3_{21}) +g_3^{-1}f_4\Gamma ^4_{13},\\ \Gamma ^3_{41}&=\Gamma ^4_{13},\ \ g_3\Gamma ^4_{21}=g_2\Gamma ^4_{13},\ \ E_4(g_1)=-g_1\Gamma ^4_{11}. \end{aligned}\right. \end{aligned}$$
(6.6)

Letting \((i,j,k)=(1,4,4)\) in (6.1), taking account of (6.2), (6.3) and (6.6), we get

$$\begin{aligned} \begin{aligned} E_1(f_4)&=-\frac{f_4(g_1\Gamma ^2_{11}+g_2\Gamma ^2_{21}+g_3\Gamma ^3_{21})}{g_2}\\&\quad -\frac{(f^2_4+4(1+g^2_1+g^2_2+g^2_3))\Gamma ^4_{13}}{g_3}. \end{aligned} \end{aligned}$$
(6.7)

Letting \((i,j,k)=(2,3,2)\) in (6.1), taking account of (6.3) and (6.4), we get

$$\begin{aligned} \left\{ \begin{aligned}&\Gamma ^4_{22}=(g_2g_3)^{-1}\left[ (1+g^2_1+g^2_2)\Gamma ^4_{32}-g_1g_2\Gamma ^4_{13}\right] ,\ \ E_3(g_2)=-g_2\Gamma ^3_{22}-2g_1\Gamma ^3_{12},\\&E_2(g_2)=-(g_2g_3)^{-1}\left[ g_1(1+g^2_1+g^2_2-g^2_3)\Gamma ^3_{21} +g_2(1+g^2_2-g^2_3)\Gamma ^3_{22}\right] \\&\quad -(g_2g_3)^{-1}\left[ g_3(1+g^2_2)\Gamma ^3_{32}+g^2_1(g_2\Gamma ^3_{22} +g_3(-\Gamma ^2_{11}+\Gamma ^3_{32}))\right] . \end{aligned}\right. \end{aligned}$$
(6.8)

Letting \((i,j,k)=(2,3,3)\) in (6.1), taking account of (6.2) and (6.8), we get

$$\begin{aligned} \begin{aligned} \Gamma ^4_{32}=\Gamma ^4_{23},\ \ E_2(g_3)=-2g_1\Gamma ^3_{21}-2g_2\Gamma ^3_{22}-g_3\Gamma ^3_{32}. \end{aligned} \end{aligned}$$
(6.9)

Letting \((i,j,k)=(2,4,2)\) in (6.1), taking account of (6.6), we get

$$\begin{aligned} \left\{ \begin{aligned}&\Gamma ^4_{42}=g_3^{-1}(g_1\Gamma ^3_{21}+g_2\Gamma ^3_{22}+g_3\Gamma ^3_{32} +f_4\Gamma ^4_{23}),\ \ \Gamma ^3_{42}=\Gamma ^4_{23},\\&E_4(g_2)=-g_3^{-1}\left[ (1+g^2_1+g^2_2-g^2_3)\Gamma ^3_{42} +g_1g_2\Gamma ^4_{13}+g^2_3\Gamma ^4_{23}\right] . \end{aligned}\right. \end{aligned}$$
(6.10)

Letting \((i,j,k)=(2,4,4)\) in (6.1), taking account of (6.10), we get

$$\begin{aligned} E_2(f_4)= & {} -g_3^{-1}\left[ f_4(g_1\Gamma ^3_{21}+g_2\Gamma ^3_{22}+g_3\Gamma ^3_{32})\right. \nonumber \\&\left. +(f^2_4+4(1+g^2_1+g^2_2+g^2_3))\Gamma ^4_{23}\right] . \end{aligned}$$
(6.11)

Letting \((i,j,k)=(3,4,3)\) in (6.1), taking account of (6.6) and (6.10), we get

$$\begin{aligned} \left\{ \begin{aligned} E_3(g_3)&=\frac{g_1(1+g^2_1+2g^2_2)\Gamma ^3_{21}+g^3_2\Gamma ^3_{22} +f_4(g_1g_2\Gamma ^4_{13}+g^2_2\Gamma ^4_{23}+g_2g_3\Gamma ^4_{33})}{g_2g_3}\\&\quad -g_3^{-1}(1+g^2_1+g^2_2+g^2_3)\Gamma ^4_{43},\\ E_4(g_3)&=-2g_1\Gamma ^4_{13}-2g_2\Gamma ^4_{23}-g_3\Gamma ^4_{33}. \end{aligned}\right. \end{aligned}$$
(6.12)

Finally, letting \((i,j,k)=(3,4,4)\) in (6.1), taking account of (6.6), (6.10) and (6.12), we get

$$\begin{aligned} \begin{aligned} E_3(f_4)=-4(g_1\Gamma ^4_{13}+g_2\Gamma ^4_{23}+g_3\Gamma ^4_{33})-f_4\Gamma ^4_{43}. \end{aligned} \end{aligned}$$
(6.13)

7 Determining the Connection Coefficients

For the purpose of proving Theorem 1.2, we now assume that \(x:M^4\rightarrow {\mathbb {R}}^5\) is a flat hyperbolic centroaffine Tchebychev hypersurface.

In this section, applying the Tchebychev condition we shall solve Eqs. (6.2)–(6.13) to express \(\{\Gamma _{ij}^k\}\) and \(E_4(f_4)\) in terms of the coefficients \(\{f_i,g_j\}\) of the difference tensor in (5.3). As in the last section, the computations below are complicated but straightforward.

First of all, by \(T=\tfrac{1}{4}\mathop {\sum }\nolimits ^4_{i=1}K(E_i,E_i)\) we express the Tchebychev condition as

$$\begin{aligned} \sum ^4_{i=1}{\hat{\nabla }}_{E_j}K(E_i,E_i)-4H E_j=0,\ \ 1\le j\le 4. \end{aligned}$$
(7.1)

Then, taking \(j=1\) in (7.1), by direct calculation using (5.3), we derive that

$$\begin{aligned} \begin{aligned} 0&=\left[ E_1(f_1+3g_1)+(f_2+2g_2)\Gamma ^1_{12}+(f_3+g_3)\Gamma ^1_{13}+f_4\Gamma ^1_{14}-4H\right] E_1\\&\quad +\left[ E_1(f_2+2g_2)+(f_1+3g_1)\Gamma ^2_{11}+(f_3+g_3)\Gamma ^2_{13}+f_4\Gamma ^2_{14}\right] E_2\\&\quad +\left[ E_1(f_3+g_3)+(f_1+3g_1)\Gamma ^3_{11}+(f_2+2g_2)\Gamma ^3_{12}+f_4\Gamma ^3_{14}\right] E_3\\&\quad +\left[ E_1(f_4)+(f_1+3g_1)\Gamma ^4_{11}+(f_2+2g_2)\Gamma ^4_{12}+(f_3+g_3)\Gamma ^4_{13}\right] E_4. \end{aligned} \end{aligned}$$
(7.2)

From (5.4) we have

$$\begin{aligned} \left\{ \begin{aligned}&f_1+3g_1=g_1^{-1}(4g^2_1-1),\ \ f_2+2g_2=g_2^{-1}\left[ 3g^2_2-(1+g^2_1)\right] ,\\&f_3+g_3=g_3^{-1}\left[ 2g^2_3-(1+g^2_1+g^2_2)\right] . \end{aligned}\right. \end{aligned}$$
(7.3)
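The identities (7.3) are direct consequences of the quadratic relations in (5.4), by which \(g_k^2-f_kg_k-(1+g_1^2+\cdots +g_{k-1}^2)=0\), i.e., each \(f_k\) is a rational function of the \(g\)'s. A short sympy verification (in our ad hoc notation):

```python
import sympy as sp

g1, g2, g3 = sp.symbols('g1 g2 g3', nonzero=True)

# From (5.4): g_k^2 - f_k*g_k - (1 + g_1^2 + ... + g_{k-1}^2) = 0, hence
f1 = (g1**2 - 1) / g1
f2 = (g2**2 - (1 + g1**2)) / g2
f3 = (g3**2 - (1 + g1**2 + g2**2)) / g3

# Each identity of (7.3) reduces to 0:
print(sp.simplify(f1 + 3*g1 - (4*g1**2 - 1)/g1))                  # 0
print(sp.simplify(f2 + 2*g2 - (3*g2**2 - (1 + g1**2))/g2))        # 0
print(sp.simplify(f3 + g3 - (2*g3**2 - (1 + g1**2 + g2**2))/g3))  # 0
```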

From (6.2), (6.3) and (6.6), we get

$$\begin{aligned} \begin{aligned} \Gamma ^3_{11}&=(g_1g_2)^{-1}(1+g^2_1)\Gamma ^3_{12},\ \ \Gamma ^4_{11}=(g_1g_3)^{-1}(1+g^2_1)\Gamma ^4_{13},\ \\ \Gamma ^2_{14}&=-g_3^{-1}g_2\Gamma ^4_{13}. \end{aligned} \end{aligned}$$
(7.4)

Then, from (7.2), taking account of (7.3) and (7.4), the relations \(\Gamma _{ij}^k=-\Gamma _{ik}^j\), and the expressions of \(E_1(g_1)\), \(E_1(g_2)\), \(E_1(g_3)\) and \(E_1(f_4)\) in Sect. 6, we can derive a system of four equations in the unknowns \(\Gamma ^2_{11}\), \(\Gamma ^2_{21}\), \(\Gamma ^3_{12}\) and \(\Gamma ^4_{13}\). Solving this system, we obtain

$$\begin{aligned} \begin{aligned} \Gamma ^2_{11}&=4H g^4_1g_2\left[ \left( -g^2_1(1+g^2_1+g^2_2)^2+(-1-g^2_1+g^2_2)g^2_3\right) (\lambda \beta )^{-1}\right. \\&\left. \quad +\left( 2+4g^4_1+g^2_1(6+f^2_4+4g^2_2+4g^2_3)\right) \gamma ^{-1}\right] ,\\ \Gamma ^2_{21}&=-4H g^3_1g^2_2\left[ g^2_1(1+g^2_1+g^2_2)^2(1+6g^6_1+g^2_1(6+f^2_4+2g^2_2)\right. \\&\quad +g^4_1(11+2f^2_4+6g^2_2))+((1+g^2_1)^2(1+(6+f^2_4)g^2_1\\&\quad +2(7+f^2_4)g^4_1+14g^6_1)+2g^2_1(1+9g^2_1+(22+f^2_4)g^4_1\\&\quad +14g^6_1)g^2_2+2g^4_1(1+7g^2_1)g^4_2)g^2_3+g^2_1(4+g^2_1(f^2_4(1+2g^2_1)\\&\left. \quad +4(5+g^2_2+4g^2_1(2+g^2_1+g^2_2))))g^4_3+4g^4_1(1+2g^2_1)g^6_3\right] (\lambda \beta \gamma )^{-1},\\ \Gamma ^3_{12}&=4H g^5_1g_2g_3\left[ -(1+g^2_1+g^2_2)(1+2g^4_1+g^2_1(3+f^2_4+2g^2_2))\right. \\&\left. \quad +(2+g^2_1(f^2_4+2(2+g^2_1+g^2_2)))g^2_3+4g^2_1g^4_3\right] (\beta \gamma )^{-1},\\ \Gamma ^4_{13}&=4H f_4g^5_1g_3\gamma ^{-1}, \end{aligned} \end{aligned}$$
(7.5)

where

$$\begin{aligned} \begin{aligned} \lambda&=(1+g^2_1)(g^2_1+g^2_2),\\ \beta&=g^2_3+g^2_1((1+g^2_1+g^2_2)^2+(2+g^2_1+g^2_2)g^2_3),\\ \gamma&=1+g^2_1(f^2_4(1+g^4_1+g^2_1(2+g^2_2+g^2_3))\\&\quad +(3+2g^2_1+2g^2_2+2g^2_3)(2+2g^4_1+g^2_1(3+2g^2_2+2g^2_3))). \end{aligned} \end{aligned}$$
(7.6)
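Note that the inverses in (7.5) are well defined: \(\gamma \ge 1\) since its constant term is \(1\) and every other summand in (7.6) is nonnegative, while \(\lambda \) and \(\beta \) are positive as long as \(g_1,g_3\ne 0\), which holds in our setting. A numerical spot check of these positivity claims (an illustrative sketch, not part of the proof):

```python
import random

def lam(g1, g2, g3, f4):
    return (1 + g1**2) * (g1**2 + g2**2)

def beta(g1, g2, g3, f4):
    return g3**2 + g1**2 * ((1 + g1**2 + g2**2)**2
                            + (2 + g1**2 + g2**2) * g3**2)

def gamma(g1, g2, g3, f4):
    return 1 + g1**2 * (f4**2 * (1 + g1**4 + g1**2 * (2 + g2**2 + g3**2))
                        + (3 + 2*g1**2 + 2*g2**2 + 2*g3**2)
                        * (2 + 2*g1**4 + g1**2 * (3 + 2*g2**2 + 2*g3**2)))

# Sample g1, g2, g3 away from zero, as in the geometric setting
random.seed(0)
for _ in range(1000):
    args = [random.uniform(0.1, 5.0) for _ in range(4)]
    assert lam(*args) > 0 and beta(*args) > 0 and gamma(*args) >= 1
print("denominators positive on all samples")
```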

Similarly, we take \(j=2\) in (7.1). By straightforward calculations applying (5.3) and using (7.5), we can solve for the unknowns \(\Gamma ^3_{22}\), \(\Gamma ^3_{32}\) and \(\Gamma ^4_{23}\). Then, we obtain

$$\begin{aligned} \Gamma ^3_{22}= & {} \tfrac{4H g^2_2g_3}{(1+g^2_1)^2}\left[ g^2_2(2+2g^4_1+4g^4_2+g^2_1(4+6g^2_2)\right. \nonumber \\&\quad +g^2_2(6+f^2_4+4g^2_3))\varphi ^{-1}-g^2_2\psi ^{-1}-g^6_1(1+g^2_1+g^2_2)\beta ^{-1}\nonumber \\&\left. \quad +g^6_1(2+4g^4_1+g^2_1(6+f^2_4+4g^2_2+4g^2_3))\gamma ^{-1}\right] ,\nonumber \\ \Gamma ^3_{32}= & {} \tfrac{4H}{(1+g^2_1)^2}\left[ g^5_2\psi ^{-1}+g^6_1g_2(1+g^2_1+g^2_2)^2\beta ^{-1}\right. \nonumber \\&\quad -(g^3_2(1+g^2_1+g^2_2)((1+g^2_1)^2+(4+f^2_4+4g^2_1)g^2_2+4g^4_2)\nonumber \\&\quad +2g^5_2(1+g^2_1+2g^2_2)g^2_3)\varphi ^{-1}\nonumber \\&\quad +g^4_1g_2(-1+g^2_1(-f^2_4(1+g^4_1+g^2_1(2+g^2_2))\nonumber \\&\quad -(3+2g^2_1+2g^2_2)(2+2g^4_1+g^2_1(3+2g^2_2))\nonumber \\&\left. \quad -2(1+2g^4_1+g^2_1(3+2g^2_2))g^2_3))\gamma ^{-1}\right] ,\nonumber \\ \Gamma ^4_{23}= & {} 4H f_4g_2g_3\left[ g^{12}_1+g^4_2+g^{10}_1(3+6g^2_2)+g^2_1g^4_2(5+f^2_4+4g^2_2+4g^2_3)\right. \nonumber \\&\quad +g^4_1g^4_2(4(g^2_2+g^2_3)^2+8(1+g^2_2+g^2_3)+f^2_4(1+g^2_2+g^2_3))\nonumber \\&\quad +g^8_1(3+13g^4_2+g^2_2(12+f^2_4+4g^2_3))+g^6_1(1+12g^6_2+g^2_2(6+f^2_4+4g^2_3)\nonumber \\&\left. \quad +g^4_2(17+2f^2_4+12g^2_3))\right] (\gamma \varphi )^{-1}, \end{aligned}$$
(7.7)

where

$$\begin{aligned} \begin{aligned} \varphi&=(1+g^2_1+g^2_2)^2((1+g^2_1)^2+(4+f^2_4+4g^2_1)g^2_2+4g^4_2)\\&\quad +g^2_2(4(1+g^2_1)^2+(f^2_4+12(1+g^2_1))g^2_2+8g^4_2)g^2_3+4g^4_2g^4_3,\\ \psi&=(1+g^2_1+g^2_2)(g^2_2+g^2_3). \end{aligned} \end{aligned}$$
(7.8)

Finally, taking \(j=3\) and \(j=4\) in (7.1), respectively, by direct calculations using (7.5) and (7.7), we can derive

$$\begin{aligned} \begin{aligned} \Gamma ^4_{33}&= 4H f_4\left[ g^4_3\theta ^{-1} +g^6_2g^2_3(\eta \varphi )^{-1}+g^6_1g^2_3((1+g^2_1)\gamma )^{-1}\right] ,\\ \Gamma ^4_{43}&= 4H g_3\left[ -g^2_3(1+g^2_1+g^2_2+2g^2_3)\theta ^{-1}-g^4_2(1+g^4_1+2g^4_2\right. \\&\quad +g^2_1(2+3g^2_2)+g^2_2(3+2g^2_3))(\eta \varphi )^{-1}\\&\left. \quad -g^4_1(1+2g^4_1+g^2_1(3+2g^2_2+2g^2_3))((1+g^2_1)\gamma )^{-1}\right] ,\\ E_4(f_4)&= 4H\left[ 2-(g^2_2(1+g^2_1+g^2_2)^2(2(1+g^2_1)^2+(4+f^2_4+4g^2_1)g^2_2)\right. \\&\quad +g^4_2(4(1+g^2_1)^2+(4+f^2_4+4g^2_1)g^2_2)g^2_3)(\eta \varphi )^{-1}\\&\quad -((1+g^2_1+g^2_2)^3+(1+g^2_1+g^2_2)(f^2_4+6(1+g^2_1+g^2_2))g^2_3\\&\quad +(f^2_4+8(1+g^2_1+g^2_2))g^4_3)\theta ^{-1}-((2+(4+f^2_4)g^2_1)(g_1+g^3_1)^2\\&\left. \quad +g^4_1(4+(4+f^2_4)g^2_1)g^2_2+g^4_1(4+(4+f^2_4)g^2_1)g^2_3)((1+g^2_1)\gamma )^{-1}\right] , \end{aligned} \end{aligned}$$
(7.9)

where

$$\begin{aligned} \begin{aligned} \eta&= (1+g^2_1)(1+g^2_1+g^2_2),\\ \theta&= (1+g^2_1+g^2_2)(1+g^2_1+g^2_2+g^2_3)((1+g^2_1+g^2_2)^2\\&\quad +(f^2_4+4(1+g^2_1+g^2_2))g^2_3+4g^4_3). \end{aligned} \end{aligned}$$
(7.10)

8 Completion of the Proof of Theorem 1.2

Let \(x:M^4\rightarrow {\mathbb {R}}^5\) be a flat hyperbolic centroaffine Tchebychev hypersurface. In this section, with the materials we derived in preceding sections, we shall complete the proof of Theorem 1.2. We begin with the equation

$$\begin{aligned} E_3(E_4(g_1))-E_4(E_3(g_1))=({\hat{\nabla }}_{E_3}E_4)(g_1)-({\hat{\nabla }}_{E_4}E_3)(g_1). \end{aligned}$$
(8.1)

By using the results in Sects. 6 and 7, we can calculate (8.1) to obtain that

$$\begin{aligned}&H^2f_4g^9_1g_3\left[ (1+g^2_1+g^2_2)^3-(1+g^2_1+g^2_2)(f^2_4+3(1+g^2_1+g^2_2))g^2_3\right. \\&\left. \quad +f^2_4g^4_3+4g^6_3\right] =0. \end{aligned}$$

Since \(g_1\ne 0\) and \(g_3\ne 0\), we then obtain

$$\begin{aligned} H^2f_4\mu =0, \end{aligned}$$
(8.2)

where

$$\begin{aligned} \mu:= & {} (1+g^2_1+g^2_2)^3-(1+g^2_1+g^2_2)(f^2_4\nonumber \\&+3(1+g^2_1+g^2_2))g^2_3+f^2_4g^4_3+4g^6_3. \end{aligned}$$
(8.3)

From (8.2), we divide the discussion into the following three cases:

Case I: \(H=0\);

Case II: \(H\ne 0\) and \(f_4=0\);

Case III: \(H\ne 0\), \(f_4\ne 0\) and \(\mu =0\).

In Case I, we can apply Theorem 1.1. Combining this with the results in Sect. 3 and taking into account that \(x:M^4\rightarrow {\mathbb {R}}^5\) is hyperbolic, we conclude that x is centroaffinely equivalent to one of the hypersurfaces (cf. Remark 1.2):

$$\begin{aligned} x_1^{\alpha _1}x_2^{\alpha _2}x_3^{\alpha _3}x_4^{\alpha _4}x_5^{\alpha _5}=1, \ \ \alpha _i>0\ \ \mathrm{for}\ \ 1\le i\le 5. \end{aligned}$$

In Case II, substituting \(f_4=0\) into the expression of \(E_4(f_4)\) in (7.9), we obtain that

$$\begin{aligned} \begin{aligned} 0&=4H\left[ 2-(g^2_2(1+g^2_1+g^2_2)^2(2(1+g^2_1)^2+(4+4g^2_1)g^2_2)+g^4_2(4(1+g^2_1)^2\right. \\&\quad +(4+4g^2_1)g^2_2)g^2_3)(\eta \varphi )^{-1}-((1+g^2_1+g^2_2)^3+(1+g^2_1+g^2_2)(6(1+g^2_1+g^2_2))g^2_3\\&\quad +(8(1+g^2_1+g^2_2))g^4_3)\theta ^{-1}-((2+4g^2_1)(g_1+g^3_1)^2+g^4_1(4+4g^2_1)g^2_2\\&\left. \quad +g^4_1(4+4g^2_1)g^2_3)((1+g^2_1)\gamma )^{-1}\right] =4H\Omega \nu ^{-1}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} \Omega&=(1+g^2_1+g^2_2)^2(1+4g^8_1+2g^2_2+4g^4_2+2g^6_1(5+8g^2_2)\\&\quad +4g^2_1(1+g^2_2)(1+g^2_2+2g^4_2)+g^4_1(9+22g^2_2+20g^4_2))\\&\quad +2(1+g^2_1+g^2_2)((1+g^2_1)^2(1+3g^2_1+8g^4_1)+(1+g^2_1)(3+11g^2_1+36g^4_1)g^2_2\\&\quad +4(2+7g^2_1+12g^4_1)g^4_2+20g^2_1g^6_2)g^2_3+4((1+g^2_1)^2(1+3g^2_1+5g^4_1)\\&\quad +(1+g^2_1)(3+12g^2_1+28g^4_1)g^2_2+(5+27g^2_1+41g^4_1)g^4_2+18g^2_1g^6_2)g^4_3\\&\quad +8((g_1+g^3_1)^2+(1+5g^2_1+8g^4_1)g^2_2+7g^2_1g^4_2)g^6_3+16g^2_1g^2_2g^8_3>0,\\ \nu&=(1+g^2_1+g^2_2+g^2_3)(1+g^2_1+g^2_2+2g^2_3)(1+g^4_1+2g^4_2+g^2_1(2+3g^2_2)\\&\quad +g^2_2(3+2g^2_3))(1+2g^4_1+g^2_1(3+2g^2_2+2g^2_3))>0. \end{aligned} \end{aligned}$$
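The positivity of \(\Omega \) and \(\nu \) is visible term by term: every coefficient is nonnegative and each constant term is \(1\). As an independent sanity check, the two polynomials can be transcribed directly from the display above and spot-checked numerically (an illustrative sketch, not part of the proof):

```python
import random

def Omega(g1, g2, g3):
    # Abbreviate the squares; Omega and nu are polynomials in a, b, c.
    a, b, c = g1**2, g2**2, g3**2
    return ((1 + a + b)**2 * (1 + 4*a**4 + 2*b + 4*b**2 + 2*a**3*(5 + 8*b)
                              + 4*a*(1 + b)*(1 + b + 2*b**2)
                              + a**2*(9 + 22*b + 20*b**2))
            + 2*(1 + a + b) * ((1 + a)**2*(1 + 3*a + 8*a**2)
                               + (1 + a)*(3 + 11*a + 36*a**2)*b
                               + 4*(2 + 7*a + 12*a**2)*b**2 + 20*a*b**3) * c
            + 4*((1 + a)**2*(1 + 3*a + 5*a**2)
                 + (1 + a)*(3 + 12*a + 28*a**2)*b
                 + (5 + 27*a + 41*a**2)*b**2 + 18*a*b**3) * c**2
            + 8*(a*(1 + a)**2 + (1 + 5*a + 8*a**2)*b + 7*a*b**2) * c**3
            + 16*a*b*c**4)

def nu(g1, g2, g3):
    a, b, c = g1**2, g2**2, g3**2
    return ((1 + a + b + c) * (1 + a + b + 2*c)
            * (1 + a**2 + 2*b**2 + a*(2 + 3*b) + b*(3 + 2*c))
            * (1 + 2*a**2 + a*(3 + 2*b + 2*c)))

random.seed(1)
for _ in range(1000):
    g = [random.uniform(-5.0, 5.0) for _ in range(3)]
    assert Omega(*g) > 0 and nu(*g) > 0
print("Omega and nu positive on all samples")
```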

This contradicts \(H\ne 0\). Hence, Case II does not occur.

In Case III, we first see from \(g_3\ne 0\) and (5.4) that \(1+g^2_1+g^2_2-g^2_3>0\). Moreover, replacing \(E_4\) by \(-E_4\) if necessary, we can assume \(f_4<0\). Then, from (8.3), we get

$$\begin{aligned} f_4=-\sqrt{\tfrac{(1+g^2_1+g^2_2-2g^2_3)^2(1+g^2_1+g^2_2+g^2_3)}{g^2_3(1+g^2_1+g^2_2-g^2_3)}}, \end{aligned}$$
(8.4)

and that \(1+g^2_1+g^2_2-2g^2_3\ne 0\).
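That (8.4) indeed solves \(\mu =0\) reduces, with the shorthand \(s=1+g^2_1+g^2_2\) and \(t=g^2_3\), to the polynomial identity \(s^3-3s^2t+4t^3=(s-2t)^2(s+t)\). A quick symbolic check (an illustrative sketch, not part of the proof):

```python
import sympy as sp

g1, g2, g3 = sp.symbols('g1 g2 g3', positive=True)
s = 1 + g1**2 + g2**2   # shorthand for 1 + g1^2 + g2^2
t = g3**2

# f4^2 as dictated by (8.4)
f4sq = (s - 2*t)**2 * (s + t) / (t * (s - t))

# mu from (8.3), with f4^2 substituted; it must vanish identically
mu = s**3 - s*(f4sq + 3*s)*t + f4sq*t**2 + 4*t**3
assert sp.simplify(mu) == 0
print("(8.4) solves mu = 0")
```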

Since \(x:M^4\rightarrow {\mathbb {R}}^5\) is a Tchebychev hypersurface, from (2.15) we have

$$\begin{aligned} \begin{aligned} E_3(\Vert T\Vert ^2)=2H h(T,E_3). \end{aligned} \end{aligned}$$
(8.5)

On the other hand, by Lemma 5.2, it holds that

$$\begin{aligned} T&=\tfrac{1}{4}\left[ (f_1+3g_1)E_1+(f_2+2g_2)E_2+(f_3+g_3)E_3+f_4E_4\right] , \end{aligned}$$
(8.6)
$$\begin{aligned} \Vert T\Vert ^2&=\tfrac{1}{16}\left[ (f_1+3g_1)^2+(f_2+2g_2)^2+(f_3+g_3)^2+f^2_4\right] . \end{aligned}$$
(8.7)

Moreover, combining (7.5), (7.7) and (7.9) with the expressions of \(E_3(g_1)\), \(E_3(g_2)\), \(E_3(g_3)\) and \(E_3(f_4)\) in Sect. 6, we can rewrite the latter as

$$\begin{aligned} E_3(g_1)= & {} -4Hg^5_1(1+g^2_1)g_3\left[ (-1-g^2_1-g^2_2)(1+2g^4_1+g^2_1(3+f^2_4+2g^2_2))\right. \nonumber \\&\left. \quad +(2+g^2_1(f^2_4+2(2+g^2_1+g^2_2)))g^2_3+4g^2_1g^4_3\right] (\beta \gamma )^{-1},\nonumber \\ E_3(g_2)= & {} \tfrac{4Hg_2g_3}{(1+g^2_1)^2}\left[ g^4_2\psi ^{-1}-(g^4_2(2+2g^4_1+4g^4_2+g^2_1(4+6g^2_2)\right. \nonumber \\&\quad +g^2_2(6+f^2_4+4g^2_3)))\varphi ^{-1}+g^6_1(1+g^2_1+g^2_2)(2+2g^2_1+g^2_2)\beta ^{-1}\nonumber \\&\left. \quad -g^6_1(2+2g^2_1+g^2_2)(2+4g^4_1+g^2_1(6+f^2_4+4g^2_2+4g^2_3))\gamma ^{-1}\right] ,\qquad \end{aligned}$$
(8.8)

and

$$\begin{aligned} E_3(g_3)= & {} \tfrac{2H}{(1+g^2_1)^2}\left[ -2g^6_2\psi ^{-1}+g^2_2((1+g^2_1+g^2_2)^2(-1-g^2_1+2g^2_2) ((1+g^2_1)^2\right. \nonumber \\&\quad +(4+f^2_4+4g^2_1)g^2_2+4g^4_2)+g^2_2(-2(1+g^2_1)^3+(1+g^2_1)(f^2_4\nonumber \\&\quad -2(1+g^2_1))g^2_2+8(1+g^2_1)g^4_2+8g^6_2)g^2_3)((1+g^2_1+g^2_2)\varphi )^{-1}\nonumber \\&\quad +(1+g^2_1)^2(-(1+g^2_1+g^2_2)^3-(1+g^2_1+g^2_2)(f^2_4+3(1+g^2_1+g^2_2))g^2_3\nonumber \\&\quad +(f^2_4-2(1+g^2_1+g^2_2))g^4_3)\theta ^{-1}-(g^2_1(1+g^2_1+g^2_2)^2(-1-2g^2_1+2g^6_1\nonumber \\&\quad +g^4_1(1+2g^2_2))-(1+g^2_1)^2(1+g^4_1+g^2_1(2+g^2_2))g^2_3)\beta ^{-1}\nonumber \\&\quad +g^2_1((-1+2g^4_1+g^2_1(1+2g^2_2))(1+g^2_1(f^2_4(1+g^4_1+g^2_1(2+g^2_2))\nonumber \\&\quad +(3+2g^2_1+2g^2_2)(2+2g^4_1+g^2_1(3+2g^2_2))))+g^2_1(-2+g^2_1(-4+6g^2_1\nonumber \\&\left. \quad +f^2_4(1+g^2_1)+8g^2_1(g^2_1+g^2_2)(2+g^2_1+g^2_2)))g^2_3)\gamma ^{-1}\right] ,\nonumber \\ E_3(f_4)= & {} 4Hf_4g_3\left[ g^2_3(1+g^2_1+g^2_2-2g^2_3)\theta ^{-1}\right. \nonumber \\&\quad -(g^4_2(-1-g^4_1+g^2_2+g^2_1(-2+g^2_2)+2g^2_2(g^2_2+g^2_3)))(\varphi \eta )^{-1}\nonumber \\&\left. \quad +(g^4_1-2g^8_1-g^6_1(1+2g^2_2+2g^2_3))((1+g^2_1)\gamma )^{-1}\right] , \end{aligned}$$
(8.9)

where \(\{\beta ,\gamma ,\varphi ,\psi ,\eta ,\theta \}\) are defined by (7.6), (7.8) and (7.10), respectively.

Now, combining (7.3), (8.4), (8.6), (8.7), (8.8) and (8.9), we can carry out a straightforward calculation of (8.5) to obtain the following relation:

$$\begin{aligned} \begin{aligned} \tfrac{1}{2g_3}(1+g^2_1+g^2_2-2g^2_3)H=0. \end{aligned} \end{aligned}$$
(8.10)

But this is impossible, since \(H\ne 0\), \(g_3\ne 0\) and \(1+g^2_1+g^2_2-2g^2_3\ne 0\). Hence, Case III does not occur.

In conclusion, we have completed the proof of Theorem 1.2.