1 Introduction

In this paper we study Lagrangian submanifolds of complex space forms. The complex space forms are the simplest examples of Kähler manifolds, i.e. almost Hermitian manifolds for which the almost complex structure J is parallel with respect to the Levi-Civita connection \(\nabla \) of the Hermitian metric g. The standard models of complex space forms are the complex projective space \(\mathbb {CP}^n\), the complex Euclidean space \({\mathbb {C}}^n\) and the complex hyperbolic space \(\mathbb {CH}^n\), according to whether the holomorphic sectional curvature \({\tilde{c}}\) satisfies \(\tilde{c}>0\), \(\tilde{c}=0\) or \(\tilde{c}<0\).

There are two special classes of submanifolds of a Kähler manifold depending on the behavior of the complex structure J with respect to the submanifold.

A submanifold M of \({\tilde{M}}\) is called almost complex if and only if J maps tangent vectors to tangent vectors. On the other hand, M is called totally real if the almost complex structure J of \(\tilde{M}\) carries each tangent space of M into its corresponding normal space. The study of minimal totally real submanifolds originates with the work of Chen and Ogiue (see [6]). A special case occurs when the real dimension of the submanifold equals the complex dimension of the ambient space. In that case J interchanges the tangent and the normal spaces. Such submanifolds are called Lagrangian submanifolds. They can also be seen as submanifolds of the largest possible dimension on which the symplectic form vanishes identically.

For the study of minimal Lagrangian immersions in complex space forms one may find a short survey in [5], where some of the main results are mentioned (see for example [2, 3, 4, 6, 7, 9, 10, 11, 13, 14, 16, 19, 22, 23]).

The fundamental question in submanifold theory is then to determine to what extent the geometry of the submanifold determines the immersion of the submanifold in the ambient space. In that respect, it was shown by Ejiri [11] that an n-dimensional Lagrangian minimal submanifold of constant sectional curvature c immersed in an n-dimensional complex space form is either totally geodesic or flat \((c=0)\) (cf. also [17] and [9]). More precisely, in the latter case it must be congruent to a specific Lagrangian torus in the complex projective space (see the Main Theorem below). Note that the condition that the immersion is minimal is unavoidable. From [21] and [8] one can see that one cannot expect to obtain a general classification of all Lagrangian immersions of real space forms into complex space forms.

In this paper we consider the logical next step. We will assume that our manifold M is isometric to \(M_1^{n_1}(c_1)\times M_2^{n_2}(c_2)\), i.e. it is a product of two real space forms of constant sectional curvature \(c_1\) and \(c_2\), respectively. As the main result of the paper we extend Ejiri’s result by proving the following.

Main Theorem

Let \(\psi :M^n\rightarrow {\tilde{M}}^n(4\tilde{c})\) be a minimal Lagrangian immersion into a complex space form with induced metric \(\langle \cdot ,\cdot \rangle \). If \(M^n=M_1^{n_1}(c_1)\times M_2^{n_2}(c_2)\), where \(n=n_1+n_2\) and \(M_1^{n_1}(c_1)\) (resp. \(M_2^{n_2}(c_2)\)) is an \(n_1\)-dimensional (resp. \(n_2\)-dimensional) Riemannian manifold of constant sectional curvature \(c_1\) (resp. \(c_2\)), then \(c_1 c_2=0\). Moreover,

  1. (1)

    if \(c_1=c_2=0\), then \(M^n\) is equivalent to either the totally geodesic immersion in \({\mathbb {C}}^n\) or the Lagrangian flat torus in \(\mathbb {CP}^n(4\tilde{c})\).

  2. (2)

    if \(c_1 c_2=0\) and \(c_1^2+c_2^2\ne 0\), without loss of generality, we may assume that \(c_1=0\) and \(c_2\ne 0\). Then we have \(c_2=\frac{n_1+n_2+1}{n_2+1}\tilde{c}>0\), say \(\tilde{c}=1\), so the ambient space is \(\mathbb {CP}^n(4)\), and the immersion is congruent with

$$\begin{aligned} \tfrac{1}{\sqrt{n+1}}(e^{iu_1},\ldots ,e^{iu_{n_1}}, a e^{i u_{n_1+1}}y_1, \ldots ,a e^{i u_{n_1+1}}y_{n_2+1} ), \end{aligned}$$

where

  1. (i)

    \((y_1,y_2,\ldots ,y_{n_2+1})\) describes the standard sphere \({\mathbb {S}}^{n_2}\hookrightarrow {\mathbb {R}}^{n_2+1}\hookrightarrow {\mathbb {C}}^{n_2+1}\),

  2. (ii)

    \(a=\sqrt{n_2+1}\),

  3. (iii)

    \(u_1+\cdots +u_{n_1}+a^2 u_{n_1+1}=0\).
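The flat-torus example above can be checked symbolically. The following sketch (using sympy, for the sample dimensions \(n_1=2\), \(n_2=1\), and assuming the normalizing factor \(1/\sqrt{n+1}\), which places the lift on the unit sphere) verifies that the map is spherical and horizontal with respect to the Hopf fibration, hence projects to a Lagrangian immersion into \(\mathbb {CP}^n(4)\):

```python
import sympy as sp

n1, n2 = 2, 1                        # sample dimensions; n = n1 + n2
n = n1 + n2
a = sp.sqrt(n2 + 1)                  # a = sqrt(n2 + 1), as in the Main Theorem
u1, u2, th = sp.symbols('u1 u2 theta', real=True)
u3 = -(u1 + u2)/a**2                 # enforces u_1 + ... + u_{n1} + a^2 u_{n1+1} = 0

# (y1, y2) = (cos th, sin th) runs over the circle S^{n2} = S^1
psi = sp.Matrix([sp.exp(sp.I*u1),
                 sp.exp(sp.I*u2),
                 a*sp.exp(sp.I*u3)*sp.cos(th),
                 a*sp.exp(sp.I*u3)*sp.sin(th)])/sp.sqrt(n + 1)

# The lift lies in the unit sphere S^{2n+1}(1):
norm2 = sp.simplify(sum(z*sp.conjugate(z) for z in psi))
assert norm2 == 1

# Horizontality: sum_k (d psi_k / dv) * conj(psi_k) vanishes for each coordinate
# direction v, so psi is a horizontal (Legendrian) immersion.
for v in (u1, u2, th):
    assert sp.simplify(sum(sp.diff(z, v)*sp.conjugate(z) for z in psi)) == 0
```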

Remark 1.1

The technique we use in the proof of the Main Theorem is based on two steps. The first step is to take cyclic permutations of the covariant derivative of the Codazzi equation. The second step is to express the second fundamental form of the submanifold \(M^n\) with respect to a conveniently chosen frame. To do so, we proceed by induction (see [18]). One should notice that our main result then follows directly from the theorems in [15].

2 Preliminaries

In this section, we will recall the basic formulas for Lagrangian submanifolds in complex space forms. Let \(\tilde{M}^n(4\tilde{c})\) be a complex space form of complex dimension n and having constant holomorphic sectional curvature \(4\tilde{c}\). Let \(M^n\) be a minimal Lagrangian submanifold in \(\tilde{M}^n(4\tilde{c})\) given by the immersion \(\psi : M^n\rightarrow \tilde{M}^n(4\tilde{c})\) such that

$$\begin{aligned} M^n=M_1^{n_1}(c_1)\times M_2^{n_2}(c_2), \end{aligned}$$
(2.1)

where \(n_1+n_2=n\), \(M_1^{n_1}(c_1)\) and \(M_2^{n_2}(c_2)\) are manifolds of real dimensions \(n_1\) and \(n_2\) and have constant sectional curvature \(c_1\) and \(c_2\), respectively.

Let \(\nabla \) and \({\tilde{\nabla }}\) be the Levi-Civita connections on \(M^n\) and \(\tilde{M}^n(4\tilde{c})\), respectively. The formulas of Gauss and Weingarten write out as

$$\begin{aligned} {\tilde{\nabla }}_XY=\nabla _XY+h(X,Y), \quad {\tilde{\nabla }}_X\xi =-A_{\xi }X+\nabla ^{\perp }_X\xi , \end{aligned}$$
(2.2)

for X, Y tangent to \(M^n\) and \(\xi \) normal to \(M^n\), where h, A and \(\nabla ^\perp \) are the second fundamental form, the shape operator and the normal connection, respectively.

Notice that we will always identify \(M^n\) with its immersed image in \(\tilde{M}^n(4\tilde{c})\). As \(M^n\) is Lagrangian, we have that the almost complex structure J interchanges the tangent and the normal spaces. Moreover, since J is parallel, we deduce that

$$\begin{aligned} \nabla ^{\perp }_XJY=J\nabla _XY,\quad A_{JX}Y=-Jh(X,Y)=A_{JY}X. \end{aligned}$$
(2.3)

The last formula implies that the cubic form \(g(h(X,Y), JZ)\) is totally symmetric. The minimality condition on \(M^n\) means that \(\mathrm{trace}\,h=0\), and one may notice that this is equivalent to \(\mathrm{trace}\,A_{JX}=0\) for every tangent vector field X.

A straightforward computation shows that the equations of Gauss, Codazzi and Ricci are

$$\begin{aligned} R(X,Y)Z= & {} \tilde{c}\,(\langle Y,Z \rangle X-\langle X,Z \rangle Y)+ [A_{JX},A_{JY}]Z, \end{aligned}$$
(2.4)
$$\begin{aligned} (\nabla h)(X,Y,Z)= & {} (\nabla h)(Y,X,Z), \end{aligned}$$
(2.5)
$$\begin{aligned} R^\perp (X,Y)JZ= & {} \tilde{c}( \langle Y,Z \rangle JX-\langle X,Z \rangle JY )+J[A_{JX},A_{JY}]Z, \end{aligned}$$
(2.6)

where X, Y, Z are tangent vector fields and the covariant derivative of h is given by

$$\begin{aligned} (\nabla h)(X,Y,Z)=\nabla ^{\perp }_X h(Y,Z)-h(\nabla _XY,Z)-h(Y,\nabla _XZ). \end{aligned}$$
(2.7)

Moreover, the following Ricci identity holds:

$$\begin{aligned}&(\nabla ^2 h)(X,Y,Z,W)-(\nabla ^2 h)(Y,X,Z,W)\nonumber \\&\quad =JR(X,Y)A_{JZ}W-h(R(X,Y)Z,W)-h(R(X,Y)W,Z), \end{aligned}$$
(2.8)

where X, Y, Z, W are tangent vector fields and

$$\begin{aligned} \begin{aligned} (\nabla ^2 h)(W, X,Y,Z)=&\nabla ^{\perp }_W(( \nabla h)(X,Y,Z))-(\nabla h)(\nabla _WX,Y,Z)\\&-(\nabla h)(X,\nabla _WY,Z) -(\nabla h)(X,Y,\nabla _WZ). \end{aligned} \end{aligned}$$
(2.9)

In the following, we will prove an additional relation that is very useful in our computations. To do so, we will make use of the technique introduced in [1], known as the Tsinghua Principle. First, take the covariant derivative of the Codazzi equation (2.5) with respect to W, and use (2.9) and (2.5) again, to obtain straightforwardly that

$$\begin{aligned} (\nabla ^2h)(W,X,Y,Z)-(\nabla ^2h)(W,Y,X,Z)=0. \end{aligned}$$
(2.10)

In the above equation we then cyclically permute the first three vector fields and each time express the left-hand side using the Ricci identity (2.8). It then follows that

$$\begin{aligned} \begin{aligned} 0&=R(W,X)Jh(Y,Z)-Jh(Y, R(W,X)Z)\\&\quad +R(X,Y)Jh(W,Z)-Jh(W, R(X,Y)Z)\\&\quad +R(Y,W)Jh(X,Z)-Jh(X, R(Y,W)Z). \end{aligned} \end{aligned}$$
(2.11)

Furthermore, given [20, Corollary 58, p. 89], we know that

$$\begin{aligned} R(X,Y)Z=c_1(\langle Y_1,Z_1 \rangle X_1-\langle X_1,Z_1 \rangle Y_1)+c_2(\langle Y_2,Z_2 \rangle X_2-\langle X_2,Z_2\rangle Y_2),\nonumber \\ \end{aligned}$$
(2.12)

where \(X_i,Y_i,Z_i\) are the projections of X, Y, Z on the \(TM_i^{n_i}\) component of \(TM^n\), for \(i=1,2\), respectively.

We recall the following useful definitions and theorems (see [15]).

Definition 1

Let \(\psi _i : (M_i, g_i) \rightarrow \mathbb {CP}^{n_i}(4)\), \(i = 1, 2\), be two Lagrangian immersions and let \(\tilde{\gamma }=(\tilde{\gamma }_1, \tilde{\gamma }_2) : I \rightarrow {\mathbb {S}}^3(1)\subset {\mathbb {C}}^2\) be a Legendre curve. Then \(\psi =\Pi (\tilde{\gamma }_1 \tilde{\psi }_1, \tilde{\gamma }_2 \tilde{\psi }_2): I\times M_1\times M_2 \rightarrow \mathbb {CP}^n(4)\) is a Lagrangian immersion, where \(n=n_1 +n_2 +1\), \(\tilde{\psi }_i : M_i\rightarrow {\mathbb {S}}^{2n_i+1}(1)\) are horizontal lifts of \(\psi _i\), \( i = 1, 2\), respectively, and \(\Pi \) is the Hopf fibration. We call \(\psi \) a warped product Lagrangian immersion of \(\psi _1\) and \(\psi _2\). When \(n_1\) (or \(n_2\)) is zero, we call \(\psi \) a warped product Lagrangian immersion of \(\psi _2\) (or \(\psi _1\)) and a point.

Definition 2

In Definition 1, when

$$\begin{aligned} \tilde{\gamma }(t) = \Big (r_1 e^{i \left( \tfrac{r_2}{r_1}at\right) },r_2e^{i\left( -\tfrac{r_1}{r_2}at\right) }\Big ), \end{aligned}$$
(2.13)

where \(r_1, r_2\) and a are positive constants with \(r_1^2+r_2^2 = 1\), we call \(\psi \) a Calabi product Lagrangian immersion of \(\psi _1\) and \(\psi _2\). When \(n_1\) (or \(n_2\)) is zero, we call \(\psi \) a Calabi product Lagrangian immersion of \(\psi _2\) (or \(\psi _1\)) and a point.
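That \(\tilde{\gamma }\) in (2.13) is indeed a Legendre curve in \({\mathbb {S}}^3(1)\) can be verified directly. The sketch below (sympy, with the sample values \(r_1=3/5\), \(r_2=4/5\)) checks both the spherical condition and the Legendre (horizontality) condition:

```python
import sympy as sp

r1, r2 = sp.Rational(3, 5), sp.Rational(4, 5)   # sample radii with r1^2 + r2^2 = 1
a, t = sp.symbols('a t', positive=True)

gamma = sp.Matrix([r1*sp.exp(sp.I*(r2/r1)*a*t),
                   r2*sp.exp(-sp.I*(r1/r2)*a*t)])

# gamma(t) stays on S^3(1):
norm2 = sp.simplify(sum(z*sp.conjugate(z) for z in gamma))
assert norm2 == 1

# Legendre condition: <gamma', i*gamma> = Im( sum_k gamma_k' * conj(gamma_k) ) = 0;
# here the sum itself vanishes, since the two phase speeds balance exactly.
q = sp.simplify(sum(sp.diff(z, t)*sp.conjugate(z) for z in gamma))
assert q == 0
```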

Theorem 2.1

([15]). Let \(\psi : M \rightarrow \mathbb {CP}^n(4)\) be a Lagrangian immersion. Then \(\psi \) is locally a Calabi product Lagrangian immersion of an \((n-1)\)-dimensional Lagrangian immersion \(\psi _1 : M_1 \rightarrow \mathbb {CP}^{n-1}(4)\) and a point if and only if M admits two orthogonal distributions \({\mathcal {D}}_1\) (of dimension 1, spanned by a unit vector field \(E_1\)) and \({\mathcal {D}}_2\) (of dimension \(n-1\), spanned by \(\{E_2,\ldots , E_n\}\)) and there exist two real constants \(\lambda _1\) and \(\lambda _2\) such that

$$\begin{aligned} \begin{aligned} h(E_1,E_1)&= \lambda _1 JE_1,\ h(E_1,E_i) =\lambda _2 JE_i, \ i= 2,\ldots ,n,\\ \lambda _1&\ne 2\lambda _2. \end{aligned} \end{aligned}$$
(2.14)

Moreover, a Lagrangian immersion \(\psi : M \rightarrow \mathbb {CP}^n(4)\), satisfying the above conditions, has the following properties:

  1. (1)

    \(\psi \) is Hamiltonian minimal if and only if \(\psi _1\) is Hamiltonian minimal;

  2. (2)

    \(\psi \) is minimal if and only if \(\lambda _2 =\pm \frac{1}{\sqrt{n}}\) and \(\psi _1\) is minimal. In this case, up to a reparametrization and a rigid motion of \(\mathbb {CP}^n\), locally we have \(M=I\times M_1\) and \(\psi \) is given by \(\psi = \Pi \circ \tilde{\psi }\) with

$$\begin{aligned} \tilde{\psi }(t,p)=\left( \sqrt{\tfrac{n}{n+1}}e^{i\frac{1}{n+1}t} \tilde{\psi }_1(p), \sqrt{\tfrac{1}{n+1}}e^{-i\frac{n}{n+1}t}\right) ,\quad \ (t,p)\in I\times M_1, \end{aligned}$$

    where \(\Pi \) is the Hopf fibration and \(\tilde{\psi }_1 : M_1 \rightarrow S^{2n-1}(1)\) is the horizontal lift of \(\psi _1\).
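The explicit lift in Theorem 2.1(2) can be checked to be spherical and horizontal. In the sketch below (sympy), the \(\psi _1\)-block is modelled by its scalar phase factor, using that \(\tilde{\psi }_1\) is a horizontal unit lift so the first block contributes \(|z_1|^2=n/(n+1)\):

```python
import sympy as sp

n, t = sp.symbols('n t', positive=True)

# Scalar models of the two blocks of the lift psi~(t, p):
z1 = sp.sqrt(n/(n + 1))*sp.exp(sp.I*t/(n + 1))       # psi_1-block phase factor
z2 = sp.sqrt(1/(n + 1))*sp.exp(-sp.I*n*t/(n + 1))    # last coordinate

# Unit norm: the lift lies in S^{2n+1}(1).
norm2 = sp.simplify(z1*sp.conjugate(z1) + z2*sp.conjugate(z2))
assert norm2 == 1

# Horizontality in the t-direction: the two phase speeds cancel exactly.
q = sp.simplify(sp.diff(z1, t)*sp.conjugate(z1) + sp.diff(z2, t)*sp.conjugate(z2))
assert q == 0
```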

Theorem 2.2

([15]). Let \(\psi :M \rightarrow \mathbb {CP}^{n}(4)\) be a Lagrangian immersion. If M admits two orthogonal distributions \({\mathcal {D}}_1\) (of dimension 1, spanned by a unit vector field \(E_1\)) and \({\mathcal {D}}_2\) (of dimension \(n-1\), spanned by \(\{E_2,\ldots ,E_n \}\)), and there exist local functions \(\lambda _1\), \(\lambda _2\) such that (2.14) holds, then M has parallel second fundamental form if and only if \(\psi \) is locally a Calabi product Lagrangian immersion of a point and an \((n-1)\)-dimensional Lagrangian immersion \(\psi _1:M_1\rightarrow \mathbb {CP}^{n-1}(4)\), which has parallel second fundamental form.

3 Proof of the Main Theorem

In this section, we study a minimal Lagrangian isometric immersion into a complex space form: \(\psi :M^n\rightarrow {\tilde{M}}^n\), where \(M^n=M_1^{n_1}(c_1)\times M_2^{n_2}(c_2)\), \(n=n_1+n_2\) and \(M_1^{n_1}(c_1)\) (resp. \(M_2^{n_2}(c_2)\)) is an \(n_1\)-dimensional (resp. \(n_2\)-dimensional) Riemannian manifold with constant sectional curvature \(c_1\) (resp. \(c_2\)). We will prove the Main Theorem stated in the introduction.

One should be aware that throughout the paper we will make the following identifications. As \(M=M_1 \times M_2\), we can write a tangent vector field \(Z(p,q)= (X(p,q), Y(p,q))\), where \(X(p,q) \in T_p M_1\) and \(Y(p,q) \in T_q M_2\). In general, the notation X (as well as \(X_i\), \(1\le i\le n_1\)) will denote a vector tangent at \((p,q)\in M^n\) with zero component on \(M_2\). We will also identify \(X(p,q) \in T_{p} M_1\) with \((X(p,q), 0) \in T_{(p,q)} M_1 \times M_2\) (and similarly \(Y(p,q) \in T_{q} M_2\) with \((0, Y(p,q)) \in T_{(p,q)} M_1 \times M_2\)). Notice that, a priori, this means that X, as a vector field, depends on q as well, not only on p. One should keep this meaning in mind when reading \(X\in T_pM_1\), respectively \(Y\in T_qM_2\).

Nonetheless, a complete understanding will be acquired in the proofs of Lemmas 3.6 and 3.7, where we will see that, due to our particular choice of basis, X only depends on p.

First of all, we consider the case \(c_1^2+c_2^2\ne 0\). We begin with the following result.

Lemma 3.1

If \(c_1^2+c_2^2\ne 0\), then the shape operator \(A_J\) vanishes nowhere.

Proof

Assume that \(A_J\) vanishes at the point \(p \in M\). From Eq. (2.4) it follows that \(R(X,Y)Z=\tilde{c}(\langle Y,Z\rangle X-\langle X,Z\rangle Y)\), which yields that M has constant sectional curvature \(\tilde{c}\) at p. Moreover, by taking \(X_1,X_2,X_2\) in (2.4) and (2.12), we obtain that \(c_1=\tilde{c}\), and then by taking X, Y, Y in (2.4) and (2.12), with \(X\in T_pM_1\), \(Y\in T_pM_2\), we get \(\tilde{c}=0\). Similarly, taking \(Y_1,Y_2,Y_2\in T_pM_2\) in (2.4) and (2.12), we get that \(c_2=0\). Therefore, we get a contradiction with \(c_1^2+c_2^2\ne 0\). \(\square \)

For \(c_1^2+c_2^2\ne 0\), if \(c_1c_2=0\), without loss of generality, we may assume that \(c_1=0\) and \(c_2\ne 0\). Therefore, we are left to consider the following two cases:

Case (i) \(c_1=0\) and \(c_2\ne 0\); Case (ii) \(c_1\ne 0\) and \(c_2\ne 0\).

3.1

In this subsection, we will deal with Case (i) and prove the following result.

Theorem 3.1

Let \(\psi :M^n\rightarrow {\tilde{M}}^n(4\tilde{c})\) be a minimal Lagrangian isometric immersion into a complex space form such that \((M^n,\langle \cdot ,\cdot \rangle )=M_1^{n_1}(c_1)\times M_2^{n_2}(c_2)\) and Case (i) occurs. Then we have \(c_2=\tfrac{n_1+n_2+1}{n_2+1}\tilde{c}>0\), say \(\tilde{c}=1\), so the ambient space is \(\mathbb { CP}^n(4)\) and the immersion is congruent with

$$\begin{aligned} \tfrac{1}{\sqrt{n+1}}(e^{iu_1},\ldots ,e^{iu_{n_1}}, a e^{i u_{n_1+1}}y_1, \ldots ,a e^{i u_{n_1+1}}y_{n_2+1}), \end{aligned}$$

where

  1. (1)

    \((y_1,y_2,\ldots ,y_{n_2+1})\) describes the standard sphere \({\mathbb {S}}^{n_2}\hookrightarrow {\mathbb {R}}^{n_2+1}\hookrightarrow {\mathbb {C}}^{n_2+1}\),

  2. (2)

    \(a=\sqrt{n_2+1}\),

  3. (3)

    \(u_1+\cdots +u_{n_1}+a^2 u_{n_1+1}=0\).

The proof of Theorem 3.1 consists of several lemmas, as follows.

Lemma 3.2

Let \(\{X_i\}_{1\le i\le n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) be orthonormal bases of the tangent spaces of \(M_1^{n_1}(c_1)\) and \(M_2^{n_2}(c_2)\), respectively. Then we have

$$\begin{aligned} \langle A_{JX_i} X_j, Y_k\rangle =0, \end{aligned}$$
(3.1)

and

$$\begin{aligned} \langle A_{JX_i}Y_j , Y_k\rangle =\left\{ \begin{array}{ll} 0, &{} \mathrm{if }\ j\ne k,\\ \mu (X_i), &{} \mathrm{if }\ j=k, \end{array} \right. \end{aligned}$$
(3.2)

where \(\mu (X_i)=:\mu _i\) depends only on \(X_i\) for each \(i=1,\ldots ,n_1\).

Proof

Expressing (2.11) for \(X=Y_k, Y=Y_l, Z=X_i, W=X_j\), \(k\ne l\), and using (2.12), we see that there is only one term remaining in the right-hand side: \(0=R(Y_k,Y_l)A_{JX_i} X_j\). Using (2.12) again, we get

$$\begin{aligned} 0=\langle Y_l, A_{JX_i} X_j \rangle Y_k-\langle Y_k, A_{JX_i} X_j \rangle Y_l. \end{aligned}$$
(3.3)

The assertion (3.1) follows immediately:

$$\begin{aligned} \langle Y_l, A_{JX_i} X_j\rangle =0,\quad \ 1\le l\le n_2,\quad 1\le i,j\le n_1. \end{aligned}$$
(3.4)

For the second relation, we proceed similarly by choosing in (2.11): \(X=Y_m\), \(Y=X_i\), \(Z=Y_l\), \(W=Y_k\), we obtain

$$\begin{aligned} 0=-c_2(\langle A_{JX_i} Y_l,Y_m\rangle Y_k-\langle A_{JX_i} Y_l, Y_k\rangle Y_m-\delta _{ml}A_{JX_i} Y_k+\delta _{kl}A_{JX_i} Y_m). \end{aligned}$$
(3.5)

In (3.5), let klm be distinct. Then we get

$$\begin{aligned} \langle A_{JX_i} Y_l,Y_m\rangle =0,\quad 1\le i\le n_1,\quad 1\le l, m\le n_2,\quad l\not =m. \end{aligned}$$
(3.6)

Again in (3.5), let us assume that \(l=m\ne k\). Then we have

$$\begin{aligned} \langle A_{JX_i} Y_l,Y_l\rangle =\langle A_{JX_i} Y_k,Y_k\rangle , \quad 1\le i\le n_1,\quad 1\le l,k\le n_2,\quad l\ne k. \end{aligned}$$
(3.7)

By (3.4), (3.6) and (3.7), we see that there exists \(\mu (X_i)\), depending only on \(X_i\), such that

$$\begin{aligned} A_{JX_i} Y_l= \mu (X_i)Y_l,\quad 1\le i\le n_1,\quad 1\le l\le n_2. \end{aligned}$$

Then the assertion (3.2) immediately follows. \(\square \)

Lemma 3.3

Let \(\{X_i\}_{1\le i\le n_1}\) be an orthonormal basis in the tangent space of \(M_1^{n_1}\) at a point. Then it holds that

$$\begin{aligned} \mu (X_1)^2+\cdots +\mu (X_{n_1})^2=\tfrac{n_1}{n_2+1}{\tilde{c}}. \end{aligned}$$
(3.8)

Proof

We compute the sectional curvature \(K(\pi (X_i,Y_j))\) of the plane \(\pi \) spanned by \(X_i\) and \(Y_j\), for some fixed \(i=1,\ldots ,n_1\) and some fixed \(j=1,\ldots ,n_2\). We use on the one hand (2.12) and on the other hand (2.4) together with (3.2) to obtain

$$\begin{aligned} 0&= {\tilde{c}} + \langle A_{JY_j}Y_j, A_{JX_i}X_i \rangle - \langle A_{JX_i}Y_j, A_{JY_j}X_i \rangle \\&= {\tilde{c}} - \mu (X_i)^2+\langle A_{JY_j}Y_j, A_{JX_i}X_i \rangle ,\ \ 1\le i\le n_1,\ 1\le j\le n_2. \end{aligned}$$

Taking summation over \(i=1,\ldots ,n_1\), and using Lemma 3.2, we get

$$\begin{aligned} \begin{aligned} 0&=n_1{\tilde{c}} -\sum _{i=1}^{n_1}\mu (X_i)^2+\left\langle A_{JY_j} Y_j, \sum _{i=1}^{n_1}A_{JX_i}X_i \right\rangle \\&=n_1{\tilde{c}} -\sum _{i=1}^{n_1}\mu (X_i)^2+\sum _{k=1}^{n_1}\sum \limits _{i=1}^{n_1} \langle A_{JX_k} X_i, X_i \rangle \mu (X_k). \end{aligned} \end{aligned}$$
(3.9)

However, the minimality condition implies that for each \(k=1,\ldots ,n_1\) we have

$$\begin{aligned} 0=\sum _{i=1}^{n_1} \langle A_{JX_k} X_i, X_i \rangle + \sum _{j=1}^{n_2} \langle A_{JX_k} Y_j, Y_j \rangle =\sum _{i=1}^{n_1} \langle A_{JX_k} X_i, X_i\rangle +n_2\mu (X_k).\nonumber \\ \end{aligned}$$
(3.10)

Therefore, from (3.9) and (3.10), we obtain

$$\begin{aligned} \mu (X_1)^2+\cdots +\mu (X_{n_1})^2=\tfrac{n_1}{n_2+1}{\tilde{c}}. \end{aligned}$$
(3.11)

This completes the proof of Lemma 3.3. \(\square \)
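The algebra closing this proof can be retraced symbolically: substituting (3.10) into (3.9) collapses the double sum to \(-n_2\sum _i\mu (X_i)^2\), and solving for the sum gives (3.8). A sketch in sympy (S stands for \(\mu (X_1)^2+\cdots +\mu (X_{n_1})^2\), c for \(\tilde c\)):

```python
import sympy as sp

n1, n2, c, S = sp.symbols('n1 n2 c S', positive=True)

# (3.10): sum_i <A_{JX_k}X_i, X_i> = -n2*mu_k, so the double sum in (3.9)
# collapses to -n2*S.  Equation (3.9) then reads:
eq = sp.Eq(n1*c - S - n2*S, 0)

# Solving for S reproduces (3.8):
assert sp.solve(eq, S) == [n1*c/(n2 + 1)]
```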

Next, we will describe the construction of a local frame of vector fields for which we can determine the values of the shape operator \(A_{J}\). This is a crucial step and will be stated in Lemma 3.5. Let us first describe a general method for choosing suitable orthonormal vectors at a point of \(M^n\), which will be used repeatedly in the proof of Lemma 3.5. The main idea originates from a very similar situation in the study of affine hyperspheres in [12, 18].

Let \((p,q) \in M^n\) and \(U_pM_1^{n_1}=\{ u\in T_pM_1^{n_1}| \langle u,u\rangle =1\}\). As the metric on \(M_1^{n_1}\) is positive definite, we have that \(U_pM_1^{n_1}\) is compact. We define on this set the functions

$$\begin{aligned} f_{(p,q)}(u)=\langle A_{Ju}u,u\rangle ,\quad u\in U_pM_1^{n_1}. \end{aligned}$$
(3.12)

We know that there exists \(e_1\in U_pM_1^{n_1}\) for which \(f_{(p,q)}\) attains an absolute maximum: \(f_{(p,q)}(e_1)= \langle A_{Je_1}e_1,e_1\rangle =:\lambda _1 \). Let \(u\in U_pM_1^{n_1}\) such that \(\langle u, e_1\rangle =0\) and define \(g(t)= f_{(p,q)}(\cos (t) e_1+\sin (t) u )\). One may check that

$$\begin{aligned} g'(0)&=3 \langle A_{Je_1}e_1,u\rangle , \end{aligned}$$
(3.13)
$$\begin{aligned} g''(0)&=6 \langle A_{Je_1}u,u\rangle - 3f_{(p,q)}(e_1). \end{aligned}$$
(3.14)

Since g attains an absolute maximum for \(t=0\), we have that \(g'(0)=0\) and \(g''(0)\le 0\), i.e.

$$\begin{aligned} \left\{ \begin{aligned}&\langle A_{Je_1}e_1,u \rangle =0,\\&\langle A_{Je_1}e_1,e_1\rangle \ge 2\langle A_{Je_1}u,u \rangle ,\ u\perp e_1,\ \langle u,u \rangle =1. \end{aligned} \right. \end{aligned}$$
(3.15)

Therefore, \(e_1\) is an eigenvector of \( A_{Je_1} \) with \(\lambda _1\) the corresponding eigenvalue. Since \(A_{Je_1}\) is self-adjoint, we can further choose orthonormal vectors \(e_2, \ldots , e_{n_1}\), which are eigenvectors of \(A_{Je_1}\), with respectively the eigenvalues \(\lambda _{2},\ldots , \lambda _{n_1}\). To sum up, we have

$$\begin{aligned} A_{Je_1}e_i=\lambda _{i} e_i,\ i=1,\ldots , n_1;\ \ \lambda _{1} \ge 2\lambda _{i}\quad \ \mathrm{for}\ \ i\ge 2. \end{aligned}$$
(3.16)
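The derivative formulas (3.13) and (3.14) follow from the fact that \(f_{(p,q)}\) is the restriction of the cubic form \(C(v,v,v)=\langle A_{Jv}v,v\rangle \) to the unit sphere. A two-dimensional symbolic sketch (sympy, with generic symmetric coefficients \(c_{ijk}=C(e_i,e_j,e_k)\), where \(e_2\) plays the role of u) confirms them:

```python
import sympy as sp

t = sp.symbols('t', real=True)
c111, c112, c122, c222 = sp.symbols('c111 c112 c122 c222', real=True)

# C(v, v, v) for a fully symmetric cubic form C in two variables:
def cubic(v):
    x, y = v
    return c111*x**3 + 3*c112*x**2*y + 3*c122*x*y**2 + c222*y**3

# g(t) = f(cos(t) e1 + sin(t) u), as in the text:
g = cubic((sp.cos(t), sp.sin(t)))

# (3.13): g'(0) = 3 C(e1, e1, u)
assert sp.simplify(sp.diff(g, t).subs(t, 0) - 3*c112) == 0
# (3.14): g''(0) = 6 C(e1, u, u) - 3 C(e1, e1, e1)
assert sp.simplify(sp.diff(g, t, 2).subs(t, 0) - (6*c122 - 3*c111)) == 0
```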

Lemma 3.4

Let \((p,q)\in M_1^{n_1}\times M_2^{n_2}\) and \(\{X_i\}_{1\le i\le n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) be arbitrary orthonormal bases of \(T_pM_1^{n_1}\) and \(T_qM_2^{n_2}\), respectively. Then

$$\begin{aligned} A_{JY_j}Y_k=(\mu _1 X_1+\cdots +\mu _{n_1} X_{n_1})\delta _{jk},\quad \ 1\le j,k\le n_2, \end{aligned}$$
(3.17)

where \(\mu _i:=\mu (X_i)\) with \(\mu \) defined as before. Moreover, we have \(c_2=\frac{n_1+n_2+1}{n_2+1}\tilde{c}\).

Proof

From Lemma 3.2 we know that

$$\begin{aligned} A_{JY_j}Y_k=(\mu _1 X_1+\cdots +\mu _{n_1}X_{n_1})\delta _{jk}+\sum _{l=1}^{n_2}\alpha _l^{jk} Y_l, \end{aligned}$$

for real numbers \(\alpha _1^{jk},\ldots ,\alpha _{n_2}^{jk}\).

Now, we claim that \(\alpha _l^{jk}=0\) for all possible indexes, or equivalently,

$$\begin{aligned} \langle A_{JY_j}Y_k, Y_l\rangle =0\quad \mathrm{for\ any}\ Y_j,Y_k,Y_l\in T_qM_2. \end{aligned}$$
(3.18)

We will verify the claim by contradiction.

In fact, if the claim did not hold, then we could choose a unit vector \(Y_1(p,q)\in U_qM_2^{n_2}\) such that \(\alpha _1:=\langle A_{J Y_1}Y_1,Y_1\rangle >0\) is the maximum of the function \(f_{(p,q)}\), defined analogously on \(U_qM_2^{n_2}\).

Define an operator \({\mathcal {A}}\) on \(T_qM_2^{n_2}\) by

$$\begin{aligned} {\mathcal {A}}(Y)=A_{J Y_1}Y-\langle A_{J Y_1}Y,X_1\rangle X_1 -\cdots -\langle A_{J Y_1}Y,X_{n_1}\rangle X_{n_1}. \end{aligned}$$

It is easy to show that \({\mathcal {A}}\) is self-adjoint and that \(Y_1\) is one of its eigenvectors. We can choose orthonormal vectors \(Y_2,\ldots ,Y_{n_2}\in U_qM_2^{n_2}\) orthogonal to \(Y_1\), which are the remaining eigenvectors of the operator \({\mathcal {A}}\), associated to the eigenvalues \(\alpha _2,\ldots ,\alpha _{n_2}\) (notice that, for simplicity, we have changed the notation for the corresponding \(\alpha _l^{jk}\)). Therefore, we have

$$\begin{aligned} \left\{ \begin{aligned}&A_{J Y_1}Y_1=\mu _1X_1+\cdots +\mu _{n_1}X_{n_1}+\alpha _1Y_1,\\&A_{J Y_1}Y_i=\alpha _iY_i,\ 1< i\le n_2. \end{aligned} \right. \end{aligned}$$
(3.19)

Taking in (2.4) \(X=Z=Y_1,Y=Y_i,1<i\le n_2\), and using (3.19) and Lemmas 3.2 and 3.3, we obtain

$$\begin{aligned} \alpha _i^2-\alpha _1\alpha _i-\tfrac{n_1+n_2+1}{n_2+1}\tilde{c}+c_2=0. \end{aligned}$$
(3.20)

It follows that there exists an integer \(n_{2,1}\), \(0\le n_{2,1}\le n_2-1\), after renumbering the basis if necessary, such that

$$\begin{aligned} \left\{ \begin{aligned}&\alpha _2=\cdots =\alpha _{n_{2,1}+1}=\tfrac{1}{2}\Big (\alpha _1+\sqrt{\alpha _1^2 +4(\tfrac{n_1+n_2+1}{n_2+1}\tilde{c}-c_2)}\,\Big ),\\&\alpha _{n_{2,1}+2}=\cdots =\alpha _{n_2}=\tfrac{1}{2}\Big (\alpha _1-\sqrt{\alpha _1^2 +4(\tfrac{n_1+n_2+1}{n_2+1}\tilde{c}-c_2)}\,\Big ). \end{aligned} \right. \end{aligned}$$
(3.21)

Using Lemma 3.2, (3.19), (3.21) and \(\mathrm{trace}\,A_{J Y_1}=0\), we have

$$\begin{aligned} \alpha _1=\sqrt{\tfrac{4(\frac{n_1+n_2+1}{n_2+1}\tilde{c} -c_2)}{\big (\tfrac{n_2+1}{n_2-2n_{2,1}-1}\big )^2-1}}. \end{aligned}$$
(3.22)

Therefore, if there exists a unit vector field \( V\in TM_2^{n_2}\) such that \(A_{JV}V=\lambda V+\mu _1X_1+\cdots +\mu _{n_1}X_{n_1}\), then we see that

$$\begin{aligned} \lambda \in \left\{ \sqrt{\tfrac{4\big (\frac{n_1+n_2+1}{n_2+1}\tilde{c} -c_2\big )}{\big (\tfrac{n_2+1}{n_2-2n_{2,1}-1}\big )^2-1}}\,\right\} _{0\le n_{2,1}\le n_2-1}. \end{aligned}$$
(3.23)

Moreover, \(\alpha _1\) is the absolute maximum of \(f_{(p,q)}\) if and only if

$$\begin{aligned} \alpha _1=\sqrt{\tfrac{4\big (\tfrac{n_1 +n_2+1}{n_2+1}\tilde{c}-c_2\big )}{\big (\tfrac{n_2+1}{n_2-1}\big )^2-1}},\ \ \mathrm{corresponding\ to} \ n_{2,1}=0. \end{aligned}$$
(3.24)
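The value (3.22) of \(\alpha _1\) can be confirmed by a small symbolic computation. In the sketch below (sympy; the symbol c stands for \(\frac{n_1+n_2+1}{n_2+1}\tilde c-c_2\), and \(n_2=5\) is a sample value), substituting (3.21) and (3.22) into \(\mathrm{trace}\,A_{JY_1}=0\) gives zero for each admissible \(n_{2,1}\):

```python
import sympy as sp

c = sp.symbols('c', positive=True)   # stands for (n1+n2+1)/(n2+1)*c~ - c2
n2 = 5                               # sample dimension

traces = []
for m in (0, 1):                     # m plays the role of n_{2,1}; need n2 - 2m - 1 != 0
    # alpha_1 as in (3.22):
    a1 = sp.sqrt(4*c/(sp.Rational((n2 + 1)**2, (n2 - 2*m - 1)**2) - 1))
    root = sp.sqrt(a1**2 + 4*c)
    a_plus, a_minus = (a1 + root)/2, (a1 - root)/2     # the two values in (3.21)
    # trace of A_{JY1} over Y_1,...,Y_{n2}; the X-directions contribute nothing,
    # since A_{JY1}X_i = mu_i*Y1 is orthogonal to every X_j.
    traces.append(sp.simplify(a1 + m*a_plus + (n2 - 1 - m)*a_minus))

assert all(tr == 0 for tr in traces)
```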

Next, we show that if \(f_{(p,q)}\) attains an absolute maximum in \(Y_1\), we can extend \(Y_1\) differentiably to a unit vector field, also denoted by \(Y_1\), on a neighbourhood U of \((p,q)\) such that, at every point \((p',q')\in U\), \(f_{(p',q')}\) attains an absolute maximum in \(Y_1(p',q')\).

In order to achieve that purpose, let \(\{E_1,\ldots ,E_{n_2}\}\) be an arbitrary differentiable orthonormal basis defined on a neighbourhood \(U'\) of \((p,q)\) such that \(E_1(p,q)=Y_1\). Then, we define a function \(\gamma \) by

$$\begin{aligned}&\gamma :{\mathbb {R}}^{n_2}\times U'\rightarrow {\mathbb {R}}^{n_2}:(a_1,\ldots ,a_{n_2},(p',q'))\mapsto (b_1,\ldots ,b_{n_2}), \\&b_k=\sum _{i,j=1}^{n_2}a_ia_j\langle A_{JE_i}E_j,E_k\rangle -\alpha _1a_k,\ 1\le k\le n_2. \end{aligned}$$

Using the fact that \(f_{(p,q)}\) attains an absolute maximum in \(E_1(p,q)\), we then obtain that

$$\begin{aligned} \begin{aligned} \tfrac{\partial b_k}{\partial a_m}(1,0,\ldots ,0,{(p,q)})&=2\langle A_{JE_1{(p,q)}} E_m{(p,q)},E_k{(p,q)}\rangle -\alpha _1\delta _{km}\\&=\left\{ \begin{array}{ll} 0,&{} \mathrm{if}\ k\ne m, \\ \alpha _1,&{} \mathrm{if}\ k=m=1,\\ 2\alpha _k-\alpha _1,&{} \mathrm{if}\ k= m>1. \end{array} \right. \end{aligned} \end{aligned}$$

Since \(\alpha _1>0\) and given (3.21), we have \(2\alpha _k-\alpha _1\ne 0\) for \(k\ge 2\). Hence the implicit function theorem shows that there exist differentiable functions \(a_1,\ldots , a_{n_2}\), defined on a neighbourhood U of \((p,q)\), such that

$$\begin{aligned} a_1(p,q)=1,\quad a_2(p,q)=0,\ \ldots ,\quad a_{n_2}(p,q)=0. \end{aligned}$$

Define the local vector field V by

$$\begin{aligned} V=a_1E_1+\cdots +a_{n_2}E_{n_2}. \end{aligned}$$

Then we have \(V(p,q)=Y_1\) and \(A_{JV}V=\alpha _1 V+\mu _1\langle V,V\rangle X_1 +\cdots +\mu _{n_1}\langle V,V\rangle X_{n_1}\). Hence

$$\begin{aligned} A_{J\tfrac{V}{\sqrt{\langle V,V\rangle }}}\tfrac{V}{\sqrt{\langle V,V\rangle }} =\tfrac{\alpha _1}{\sqrt{\langle V,V\rangle }}\tfrac{V}{\sqrt{\langle V,V\rangle }} +\mu _1X_1+\cdots +\mu _{n_1}X_{n_1}. \end{aligned}$$

By (3.23), the continuity of \(\tfrac{\alpha _1}{\sqrt{\langle V,V\rangle }}\) and \(\langle V,V\rangle (p,q)=1\), we can derive that \(\langle V,V\rangle =1\) identically. Therefore, for any point \((p',q')\in U\), \(f_{(p',q')}\) attains an absolute maximum at \(V(p',q')\). Let \(Y_1=V\) and take orthonormal vector fields \(Y_2,\ldots ,Y_{n_2}\) orthogonal to \(Y_1\). Then \(\{Y_1,\ldots ,Y_{n_2}\}\) is a local basis satisfying

$$\begin{aligned} \left\{ \begin{aligned}&A_{JY_1}Y_1=\mu _1X_1+\cdots +\mu _{n_1}X_{n_1}+\alpha _1Y_1,\\ {}&A_{JY_1}Y_i=\alpha _iY_i,\ \ 1< i\le n_2, \end{aligned} \right. \end{aligned}$$
(3.25)

where \(\alpha _1\) is defined by (3.24), and

$$\begin{aligned} \alpha _2= \cdots =\alpha _{n_2}=\tfrac{1}{2}\big (\alpha _1-\sqrt{\alpha _1^2 +4(\tfrac{n_1+n_2+1}{n_2+1}\tilde{c}-c_2)}\,\big ). \end{aligned}$$
(3.26)

We recall that on the product manifold \(M^n\) we know that \(\langle \nabla _{Y_i}Y_j,X\rangle =0\) for \(i,j=1,\ldots ,n_2\) and X tangent to \(M_1\). Applying the Codazzi equation (2.5) and (3.24)–(3.26), we have that

$$\begin{aligned} \nabla _{Y_i}Y_1=0,\ \ 1\le i\le n_2. \end{aligned}$$
(3.27)

Hence, we have \(R(Y_1,Y_2)Y_1=0\), a contradiction to the fact that \(c_2\ne 0\). This verifies the claim and thus (3.17) follows.

Moreover, using (2.4), (2.12) and (3.17), we easily get the relation \(c_2=\frac{n_1+n_2+1}{n_2+1}\tilde{c}\). \(\square \)
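This last relation can be retraced: by (3.17) and Lemma 3.3, \([A_{JY_1},A_{JY_2}]Y_2=(\mu _1^2+\cdots +\mu _{n_1}^2)Y_1=\frac{n_1\tilde c}{n_2+1}Y_1\), and comparing (2.4) with (2.12) on the plane spanned by \(Y_1, Y_2\) gives \(c_2=\tilde c+\frac{n_1\tilde c}{n_2+1}\). A one-line symbolic check (sympy, c standing for \(\tilde c\)):

```python
import sympy as sp

n1, n2, c = sp.symbols('n1 n2 c', positive=True)

# Gauss equation on span{Y1, Y2}: c2 = c + (mu_1^2 + ... + mu_{n1}^2),
# with the sum equal to n1*c/(n2+1) by Lemma 3.3:
c2 = c + n1*c/(n2 + 1)
assert sp.simplify(c2 - (n1 + n2 + 1)*c/(n2 + 1)) == 0
```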

Lemma 3.5

In Case (i), we have \(\tilde{c}>0\). Moreover, there exist local orthonormal frames of vector fields \(\{X_i\}_{1\le i\le n_1}\) of \(M_1^{n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) of \(M_2^{n_2}\), respectively, such that the operator \(A_J\) takes the following form:

$$\begin{aligned} \left\{ \begin{aligned} A_{JX_1}X_1&=\lambda _{1,1}X_1,\\ A_{JX_i}X_i&=\mu _1 X_1+\cdots +\mu _{i-1} X_{i-1}+\lambda _{i,i}X_i,\ 1< i\le n_1,\\ A_{JX_i}X_j&=\mu _i X_j,\ 1\le i<j,\\ A_{JX_i}Y_j&=\mu _i Y_j,\ 1\le i\le n_1,\ 1\le j\le n_2, \end{aligned} \right. \end{aligned}$$
(3.28)

where \(\lambda _{i,i},\, \mu _i\) are constants and satisfy

$$\begin{aligned} \lambda _{i,i}+(n-i)\mu _i=0,\quad \ 1\le i\le n_1. \end{aligned}$$
(3.29)

Proof

We will give the proof by induction on the index i of \(A_{JX_i}\). According to general principles, this consists of two steps as below.

The First Step of Induction

In this step, we should verify the assertion for \(i=1\). To do so, we have to show that, around any given \((p,q)\in M_1^{n_1}\times M_2^{n_2}\), there exist orthonormal frames of vector fields \(\{X_i\}_{1\le i\le n_1}\) of \(TM_1^{n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) of \(TM_2^{n_2}\), and smooth functions \(\lambda _{1,1}\) and \(\mu _1\), so that we have

$$\begin{aligned} \left\{ \begin{aligned}&A_{JX_1}X_1=\lambda _{1,1} X_1,\ \ A_{JX_1}Y_j=\mu _1 Y_j,\ \ 1\le j\le n_2,\\&A_{JX_1}X_i=\mu _1 X_i,\ \ 2\le i\le n_1,\\&\lambda _{1,1}+(n-1)\mu _1=0. \end{aligned} \right. \end{aligned}$$

The proof of the above conclusion will be divided into four claims as below.

Claim I-(1) Given \((p,q)\in M_1^{n_1}\times M_2^{n_2}\), there exist orthonormal bases \(\{X_i\}_{1\le i\le n_1}\) of \(T_pM_1^{n_1}\), \(\{Y_j\}_{1\le j\le n_2}\) of \(T_qM_2^{n_2}\), and real numbers \(\lambda _{1,1}>0\), \(\lambda _{1,2}=\cdots =\lambda _{1,n_1}\) and \(\mu _1\), such that the following relations hold:

$$\begin{aligned} \left\{ \begin{aligned} A_{JX_1}X_1&=\lambda _{1,1} X_1,\ \ A_{JX_1}X_i=\lambda _{1,i} X_i,\ 2\le i\le n_1,\\ A_{JX_1}Y_j&=\mu _1 Y_j,\ 1\le j\le n_2. \end{aligned} \right. \end{aligned}$$

Moreover, \(\lambda _{1,1}\) is the maximum of \(f_{(p,q)}\) defined on \(U_{p}M_1^{n_1}\). In particular, \(\tilde{c}>0\).

Proof of Claim I-(1)

First, if for an orthonormal basis \(\{X_i\}_{1\le i\le n_1}\) and for any \(i,j,k=1,\ldots , n_1\), \(\langle A_{JX_i}X_j,X_k\rangle =0\) holds, then by the fact that \(\mathrm{trace}\,A_{JX_i}=0\) and Lemma 3.2, we get \(\mu _i=0\). This further implies by Lemma 3.3 that \(\tilde{c}=0\). From this, using (2.4), (2.12) and Lemma 3.4, we can compute the sectional curvature of the section spanned by \(Y_1\) and \(Y_2\) to obtain that \(c_2=0\), which is a contradiction.

Accordingly, following the idea described right before Lemma 3.4, we can choose a vector \(X_1 \in U_pM_1^{n_1}\) such that \(f_{(p,q)}\) on \(U_pM_1^{n_1}\) attains its absolute maximum \(\lambda _{1,1}>0\) at \(X_1\). Then we can choose an orthonormal basis \(\{X_i\}_{1\le i\le n_1}\) of \(T_pM_1^{n_1}\) and an arbitrary orthonormal basis \(\{Y_j\}_{1\le j\le n_2}\) of \(T_qM_2^{n_2}\) such that for \(2\le k\le n_1\), \(A_{JX_1}X_k=\lambda _{1,k} X_k\) and \(\lambda _{1,1}\ge 2\lambda _{1,k}\). Moreover, by Lemma 3.2, \(A_{JX_1}Y_j=\mu _1 Y_j\) for \(1\le j\le n_2\).

Next, we will show that \(\lambda _{1,2}=\cdots =\lambda _{1,n_1}\), and that \(\lambda _{1,1},\lambda _{1,2}\) and \(\mu _1\) are all constants independent of \((p,q)\).

Taking in (2.4) that \(X=Z=X_1\) and \(Y=X_k\) for \(k\ge 2\), and using (2.12), we obtain

$$\begin{aligned} \lambda _{1,k}^2-\lambda _{1,1}\lambda _{1,k}-\tilde{c}=0,\ \ 2\le k\le n_1. \end{aligned}$$
(3.30)

Since \(\tilde{c}\ge 0\) by (3.11) and \(\lambda _{1,1}\ge 2\lambda _{1,k}\) for \(2\le k\le n_1\), (3.30) implies that

$$\begin{aligned} \lambda _{1,2}=\cdots =\lambda _{1,n_1}=\frac{1}{2}\Big (\lambda _{1,1}-\sqrt{\lambda _{1,1}^2+4\tilde{c}}\,\Big ). \end{aligned}$$
(3.31)
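The step from (3.30) to (3.31) can be checked symbolically. The following SymPy sketch (our own sanity check, not part of the original argument; `l11` and `ct` are ad-hoc names for \(\lambda_{1,1}\) and \(\tilde{c}\)) verifies that both roots of the quadratic solve (3.30), and that the constraint \(\lambda_{1,1}\ge 2\lambda_{1,k}\) rules out the larger root:

```python
# Symbolic check of (3.30)-(3.31); variable names are ours.
import sympy as sp

l11, ct = sp.symbols('l11 ct', positive=True)  # lambda_{1,1} > 0, tilde{c} > 0

small = (l11 - sp.sqrt(l11**2 + 4*ct)) / 2     # the value claimed in (3.31)
large = (l11 + sp.sqrt(l11**2 + 4*ct)) / 2     # the discarded root

# Both candidates solve the quadratic (3.30):
assert sp.expand(small**2 - l11*small - ct) == 0
assert sp.expand(large**2 - l11*large - ct) == 0

# lambda_{1,1} >= 2*lambda_{1,k} discards the larger root:
# 2*large - l11 = sqrt(l11^2 + 4*ct) > 0, while 2*small - l11 < 0.
assert sp.expand(2*large - l11 - sp.sqrt(l11**2 + 4*ct)) == 0
assert sp.expand(2*small - l11 + sp.sqrt(l11**2 + 4*ct)) == 0
```
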

Similarly, taking \(X=Z=X_1\) and \(Y\in U_qM_2^{n_2}\) in (2.4) and using (2.12) and Lemma 3.2, we get

$$\begin{aligned} \mu _1^2-{\mu _1}\lambda _{1,1}-\tilde{c}=0. \end{aligned}$$
(3.32)

Thus we obtain

$$\begin{aligned} \mu _1 =\frac{1}{2}\Big (\lambda _{1,1}+\varepsilon _1\sqrt{\lambda _{1,1}^2+4\tilde{c}}\Big ),\ \varepsilon _1=\pm 1. \end{aligned}$$
(3.33)

Then, applying \({ trace}\,A_{JX_1}=0\), we get

$$\begin{aligned} \frac{1}{2}(n+1)\lambda _{1,1}+\frac{1}{2}(\varepsilon _1n_2-n_1+1)\sqrt{\lambda _{1,1}^2+4\tilde{c}}=0. \end{aligned}$$
(3.34)

It follows that \(\varepsilon _1n_2-n_1+1\not =0\) and

$$\begin{aligned} \Big [\Big (\frac{n+1}{\varepsilon _1n_2-n_1+1}\Big )^2-1\Big ]\lambda _{1,1}^2=4\tilde{c}. \end{aligned}$$
(3.35)

Moreover, (3.35) shows that \(\tilde{c}>0\), and that

$$\begin{aligned} \lambda _{1,1}=2\sqrt{\tfrac{\tilde{c}}{(\tfrac{n+1}{\varepsilon _1n_2-n_1+1})^2-1}}. \end{aligned}$$
(3.36)

This, together with (3.33), implies that \(\lambda _{1,1}\), \(\lambda _{1,2}=\cdots =\lambda _{1,n_1}\) and \(\mu _1\) are all constants independent of \((p,q)\). \(\square \)

Claim I-(2) \(\lambda _{1,2}=\cdots =\lambda _{1,n_1}=\mu _1\) and \(\lambda _{1,1}+(n-1)\mu _1=0\).

Proof of Claim I-(2)

From (3.31) and (3.33), the first assertion is equivalent to showing that \(\varepsilon _1=-1\). Suppose on the contrary that \(\varepsilon _1=1\). Then we have

$$\begin{aligned} \mu _1\lambda _{1,2}=-\tilde{c}. \end{aligned}$$
(3.37)

As we are in the case \(c_2\ne 0\), we have \(n_2\ge 2\); then (3.34) implies that

$$\begin{aligned} n_1>n_2+1\ge 3. \end{aligned}$$
(3.38)

We next choose a vector \(X_2\in U_pM_1^{n_1}\), orthogonal to \(X_1\), such that \(\lambda _{2,2}=\langle A_{JX_2}X_2,X_2\rangle \) is the maximum of \(f_{(p,q)}\) on \(\{u\in U_pM_1^{n_1}\,|\,u\perp X_1\}\).

Define \({\mathcal {A}}\) on \(\{u\in T_pM_1^{n_1}\,|\,u\perp X_1\}\) by \({\mathcal {A}}(X)=A_{JX_2}X-\langle A_{JX_2}X,X_1\rangle X_1\). It is easy to show that \({\mathcal {A}}\) is self-adjoint and \(X_2\) is one of its eigenvectors. We can choose an orthonormal basis \(\{X_3,\ldots ,X_{n_1}\}\) for \(\{u\in T_pM_1^{n_1}\,|\,u\perp X_1, u\perp X_2\}\) so that they are the remaining eigenvectors of the operator \({\mathcal {A}}\), associated to eigenvalues \(\lambda _{2,3},\ldots ,\lambda _{2, n_1}\). In this way, we have obtained

$$\begin{aligned} A_{JX_2}X_2=\lambda _{1,2}X_1+\lambda _{2,2}X_2,\quad A_{JX_2}X_k=\lambda _{2,k}X_k,\quad 3\le k\le n_1. \end{aligned}$$
(3.39)

Taking \(X=Z=X_2,Y=X_k\) in (2.4) and using (3.39) together with (2.12), we obtain

$$\begin{aligned} \lambda _{2,k}^2-\lambda _{2,2}\lambda _{2,k}-\tilde{c}-\lambda _{1,2}^2=0,\quad 3\le k\le n_1. \end{aligned}$$
(3.40)

Given that \(\lambda _{2,2}\ge 2\lambda _{2,k}\), this implies that

$$\begin{aligned} \lambda _{2,k}=\frac{1}{2}\Big (\lambda _{2,2}-\sqrt{\lambda _{2,2}^2 +4\left( \tilde{c}+\lambda _{1,2}^2\right) }\Big ),\quad 3\le k\le n_1. \end{aligned}$$
(3.41)

Similarly, taking \(X=Z=X_2\) and \(Y\in U_qM_2^{n_2}\) in (2.4) and using (3.39) and (2.12), we get

$$\begin{aligned} {\mu _2}^2-{\mu _2}\lambda _{2,2}-\tilde{c}-\mu _1\lambda _{1,2}=0. \end{aligned}$$
(3.42)

Combining (3.37) with (3.42) we get

$$\begin{aligned} \mu _2^2-{\mu _2}\lambda _{2,2}=0. \end{aligned}$$
(3.43)

Therefore, we have

$$\begin{aligned} \mu _2=\frac{1}{2}(\lambda _{2,2}+\varepsilon _2\lambda _{2,2}),\quad \varepsilon _2=\pm 1. \end{aligned}$$
(3.44)

By using (3.39), (3.41), (3.44) and \({ trace}\,A_{JX_2}=0\), we have

$$\begin{aligned} \lambda _{2,2}+\frac{1}{2}(n_1-2)\Big (\lambda _{2,2}-\sqrt{\lambda _{2,2}^2 +4(\tilde{c}+\lambda _{1,2}^2)}\,\Big )+\frac{1}{2}{n_2}(\lambda _{2,2}+\varepsilon _2\lambda _{2,2})=0.\nonumber \\ \end{aligned}$$
(3.45)

Hence we have

$$\begin{aligned} \lambda _{2,2}=2\sqrt{\tfrac{\tilde{c}+\lambda _{1,2}^2}{\big (\tfrac{n_1+n_2+\varepsilon _2n_2}{n_1-2}\big )^2-1}}. \end{aligned}$$
(3.46)

Note that for \(\varepsilon _1=1\), (3.36) gives

$$\begin{aligned} \lambda _{1,1}=2\sqrt{\tfrac{\tilde{c}}{\big (\tfrac{n_1+n_2+1}{n_1-n_2-1}\big )^2-1}}. \end{aligned}$$
(3.47)

Using (3.38), we have

$$\begin{aligned} \begin{aligned} \tfrac{n_1+n_2+1}{n_1-n_2-1}-\tfrac{n_1+n_2+\varepsilon _2n_2}{n_1-2}\ge&\tfrac{n_1+n_2+1}{n_1-n_2-1}-\tfrac{n_1+2n_2}{n_1-2}\\ =&\tfrac{n_1-n_2-1+2(n_2+1)}{n_1-n_2-1}-\tfrac{n_1-2+2n_2+2}{n_1-2}\\ =&\tfrac{2(n_2+1)(n_2-1)}{(n_1-n_2-1)(n_1-2)}>0. \end{aligned} \end{aligned}$$

It follows that \(\lambda _{2,2}>\lambda _{1,1}\). This is a contradiction.
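The displayed estimate and the resulting contradiction can be verified symbolically and numerically. The following SymPy sketch (our own check; the sample values \(n_1=4\), \(n_2=2\), \(\tilde{c}=1\) are our assumption, chosen so that (3.38) holds) confirms the identity and that \(\lambda_{2,2}>\lambda_{1,1}\) for both choices of \(\varepsilon_2\):

```python
# Check of the estimate preceding the contradiction in Claim I-(2).
import sympy as sp

n1s, n2s = sp.symbols('n1 n2', positive=True)
delta = (n1s + n2s + 1)/(n1s - n2s - 1) - (n1s + 2*n2s)/(n1s - 2)
target = 2*(n2s + 1)*(n2s - 1)/((n1s - n2s - 1)*(n1s - 2))
assert sp.simplify(delta - target) == 0       # the displayed identity

# Spot check of lambda_{2,2} > lambda_{1,1} with n1 = 4, n2 = 2, ct = 1:
ct = 1
R1 = sp.Rational(4 + 2 + 1, 4 - 2 - 1)        # (n1+n2+1)/(n1-n2-1) = 7
l11 = 2*sp.sqrt(ct/(R1**2 - 1))               # (3.47)
l12 = (l11 - sp.sqrt(l11**2 + 4*ct))/2        # (3.31), with epsilon_1 = 1
for eps2 in (1, -1):
    R2 = sp.Rational(4 + 2 + eps2*2, 4 - 2)   # (n1+n2+eps2*n2)/(n1-2)
    l22 = 2*sp.sqrt((ct + l12**2)/(R2**2 - 1))  # (3.46)
    assert float(l22) > float(l11)            # contradicts maximality
```
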

We have proved that \(\varepsilon _1=-1\) and thus \(\lambda _{1,2}=\cdots =\lambda _{1,n_1}=\mu _1\).

Finally, from \({ trace}\,A_{JX_{1}}=0\) we get \(\lambda _{1,1}+(n-1)\mu _1=0\) as claimed. \(\square \)

Claim I-(3) If there exists a unit vector \(V\in T_pM_1^{n_1}\) such that \(A_{JV}V=\lambda V\), then \(\lambda \) has only a finite number of possible values.

Proof of Claim I-(3)

Assume that there exists a unit vector \(V\in T_pM_1^{n_1}\) such that \(A_{JV}V=\lambda V\). Let \(X_1=V\) and \(\lambda _{1,1}=\lambda \). Then we may complete \(X_1\) to an orthonormal basis \(\{X_i\}_{1\le i\le n_1}\) of \(T_pM_1^{n_1}\) such that, for each \(2\le k\le n_1\), \(X_k\) is an eigenvector of \(A_{JX_1}\) with eigenvalue \(\lambda _{1,k}\).

Then (3.30) holds, from which we deduce the existence of an integer \(n_{1,1}\) with \(0\le n_{1,1}\le n_1-1\) such that, after renumbering the basis if necessary, we have

$$\begin{aligned} \left\{ \begin{aligned}&\lambda _{1,2}=\cdots =\lambda _{1,n_{1,1}+1}=\frac{1}{2} \Big (\lambda _{1,1}+\sqrt{\lambda _{1,1}^2+4\tilde{c}}\,\Big ),\\&\lambda _{1,n_{1,1}+2}=\cdots =\lambda _{1,n_1}=\frac{1}{2}\Big (\lambda _{1,1} -\sqrt{\lambda _{1,1}^2+4\tilde{c}}\,\Big ). \end{aligned} \right. \end{aligned}$$
(3.48)

Similarly, we have (3.33). By (3.48), (3.33) and the fact that \({ trace}\,A_{JX_1}=0\), we have

$$\begin{aligned} \frac{1}{2}(n_1+n_2+1)\lambda _{1,1}+\frac{1}{2}(2n_{1,1}-n_1+1 +\varepsilon _1n_2)\sqrt{\lambda _{1,1}^2+4\tilde{c}}=0. \end{aligned}$$
(3.49)

This immediately implies that \(\lambda _{1,1}\) can take only finitely many values. \(\square \)
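The finiteness in Claim I-(3) can be made concrete by enumeration: squaring (3.49) determines \(\lambda_{1,1}\) from the integer \(n_{1,1}\) and the sign \(\varepsilon_1\). The following SymPy sketch (our own illustration; the sample dimensions \(n_1=5\), \(n_2=3\), \(\tilde{c}=1\) are our assumption) lists the admissible values:

```python
# Enumerating the finitely many values of lambda_{1,1} allowed by (3.49).
import sympy as sp

N1, N2, ct = 5, 3, 1                     # sample dimensions (our assumption)
n = N1 + N2
values = set()
for n11 in range(N1):                    # n_{1,1} in {0, ..., n1 - 1}
    for eps in (1, -1):                  # epsilon_1 = +/- 1
        m = 2*n11 - N1 + 1 + eps*N2      # coefficient of the root in (3.49)
        if m >= 0 or m**2 >= (n + 1)**2:
            continue                     # lambda_{1,1} > 0 forces -(n+1) < m < 0
        # squaring (3.49): lambda^2 = 4*ct*m^2 / ((n+1)^2 - m^2)
        values.add(2*(-m)*sp.sqrt(sp.Rational(ct, (n + 1)**2 - m**2)))

assert 0 < len(values) <= 2*N1           # only finitely many possibilities
```
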

Claim I-(4) The aforementioned tangent vector \(X_1\) at \((p,q)\) can be extended differentiably to a unit vector field, still denoted by \(X_1\), in a neighbourhood U of \((p,q)\), such that for each \((p',q')\in U\), \(f_{(p',q')}\) defined on \(U_{p'}M_1^{n_1}\) attains the absolute maximum at \(X_1(p',q')\).

Proof of Claim I-(4)

Let \(\{E_1,\ldots ,E_{n_1}\}\) be an arbitrary differentiable orthonormal basis defined on a neighbourhood \(U'\) of \((p,q)\) such that \(E_1{(p,q)}=X_1\). Then, using the fact that \(A_{JX_1}X_1=\lambda _{1,1}X_1\) at \((p,q)\), we define a function \(\gamma \) by

$$\begin{aligned} \begin{aligned}&\gamma :\ {\mathbb {R}}^{n_1}\times U'\rightarrow {\mathbb {R}}^{n_1},\\&(a_1,\ldots ,a_{n_1},(p',q'))\mapsto (b_1,\ldots ,b_{n_1}), \end{aligned} \end{aligned}$$

where \(b_k=b_k(a_1,\ldots ,a_{n_1}):=\sum \nolimits _{i,j=1}^{n_1}a_ia_j\langle A_{JE_i}E_j, E_k\rangle -\lambda _{1,1}a_k\) for \(1\le k\le n_1\).

Using the fact that \(f_{(p,q)}\) attains an absolute maximum at \(E_1(p,q)\), and that, by Claim I-(1), \(A_{JE_1}E_k=\lambda _{1,k} E_k\) at \((p,q)\) for \(2\le k\le n_1\), we compute

$$\begin{aligned} \begin{aligned} \tfrac{\partial b_k}{\partial a_m}(1,0,\ldots ,0,{(p,q)})&=2\langle A_{JE_1{(p,q)}}E_m{(p,q)},E_k{(p,q)}\rangle -\lambda _{1,1}\delta _{km}\\&=\left\{ \begin{array}{ll} 0,&{} \mathrm{if}\ k\ne m, \\ \lambda _{1,1},&{} \mathrm{if}\ k=m=1,\\ 2\lambda _{1,k}-\lambda _{1,1},&{} \mathrm{if}\ k=m\ge 2. \end{array} \right. \end{aligned} \end{aligned}$$
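The first equality above, \(\tfrac{\partial b_k}{\partial a_m}(1,0,\ldots,0)=2\langle A_{JE_1}E_m,E_k\rangle-\lambda_{1,1}\delta_{km}\), can be verified symbolically. The following SymPy sketch (our own check, in an illustrative dimension \(n_1=3\), with generic symbols `c_ijk` standing for the coefficients \(\langle A_{JE_i}E_j,E_k\rangle\), symmetric in the first two slots) confirms it:

```python
# Verify d b_k / d a_m at (1, 0, ..., 0): our own symbolic check.
import sympy as sp

n = 3                                            # illustrative n_1
a = sp.symbols('a1:4')                           # a_1, a_2, a_3
lam = sp.Symbol('lam')                           # stands for lambda_{1,1}

# c[(i, j, k)] stands for <A_{J E_i} E_j, E_k>, symmetric in (i, j):
c = {(i, j, k): sp.Symbol('c_%d%d%d' % (min(i, j), max(i, j), k))
     for i in range(n) for j in range(n) for k in range(n)}

b = [sum(a[i]*a[j]*c[(i, j, k)] for i in range(n) for j in range(n)) - lam*a[k]
     for k in range(n)]

at_E1 = {a[0]: 1, a[1]: 0, a[2]: 0}              # the point (1, 0, ..., 0)
for k in range(n):
    for m in range(n):
        d = sp.diff(b[k], a[m]).subs(at_E1)
        expected = 2*c[(0, m, k)] - (lam if k == m else 0)
        assert sp.expand(d - expected) == 0
```

The second equality in the display then follows by evaluating these coefficients in the eigenbasis of Claim I-(1).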

Given the fact that \(\tilde{c}>0\), by (3.31) we have that \(2\lambda _{1,k}-\lambda _{1,1}\ne 0\) for \(k\ge 2\). Hence the implicit function theorem shows that there exist differentiable functions \(a_1,\ldots , a_{n_1}\), defined on a neighbourhood U of (pq) and satisfying

$$\begin{aligned} a_1(p,q)=1,\quad a_2(p,q)=0,\ \ldots ,\quad a_{n_1}(p,q)=0, \end{aligned}$$

such that

$$\begin{aligned} \left\{ \begin{aligned}&b_1(a_1(p',q'),\ldots ,a_{n_1}(p',q'), (p',q'))\equiv 0,\\&\ \ \ \ \cdots \\&b_{n_1}(a_1(p',q'),\ldots ,a_{n_1}(p',q'),(p',q'))\equiv 0. \end{aligned} \right. \end{aligned}$$

Therefore, the local vector field V defined by

$$\begin{aligned} V=a_1E_1+\cdots +a_{n_1}E_{n_1} \end{aligned}$$

satisfies \(V{(p,q)}=X_1\) and \(A_{JV}V=\lambda _{1,1}V\). Hence

$$\begin{aligned} A_{J{\tfrac{V}{\sqrt{\langle V,V \rangle }}}}\tfrac{V}{\sqrt{\langle V,V \rangle }} =\tfrac{\lambda _{1,1}}{\sqrt{\langle V,V \rangle }}\tfrac{V}{\sqrt{\langle V,V \rangle }}. \end{aligned}$$
(3.50)

According to Claim I-(3), there is only a finite number of possible values that the function \(\tfrac{\lambda _{1,1}}{\sqrt{\langle V,V\rangle }}\) can take. On the other hand, since \(\tfrac{\lambda _{1,1}}{\sqrt{\langle V,V \rangle }}\) is continuous and \(\langle V,V \rangle (p,q)=1\), it must be that \(\langle V,V\rangle =1\) identically. Define on U a vector field \(X_1:=V\). By Claim I-(1) and its proof we know that for any point \((p',q')\in U\), \(f_{(p',q')}\) attains an absolute maximum at \(X_1(p',q')\). This verifies the assertion of Claim I-(4). \(\square \)

Finally, having determined the unit vector field \(X_1\) as in Claim I-(4), we further choose vector fields \(X_2,\ldots ,X_{n_1}\) (orthogonal to \(X_1\)) such that \(\{X_i\}_{1\le i\le n_1}\) is a local orthonormal frame of \(TM_1^{n_1}\). Then, combining with Lemma 3.2, we immediately complete the proof of the first step of the induction.

The Second Step of Induction

In this step, we first assume the assertion of Lemma 3.5 for all \(i\le k\), where \(k\in \{2,\ldots ,n_1-1\}\) is a fixed integer. Therefore, there exists a local orthonormal frame of vector fields \(\{X_i\}_{1\le i\le n_1}\) of \(M_1^{n_1}\) such that the operator \(A_J\) takes the following form:

$$\begin{aligned} \left\{ \begin{aligned}&A_{JX_1}X_1=\lambda _{1,1} X_1,\\&A_{JX_i}X_i=\mu _1 X_1+\cdots +\mu _{i-1} X_{i-1}+\lambda _{i,i}X_i,\ 1< i\le k,\\&A_{JX_i}X_j=\mu _i X_j,\ 1\le i\le k, \ i<j\le n_1, \\&A_{JX_i}Y=\mu _iY,\ 1\le i\le k,\ Y\in TM_2^{n_2}, \end{aligned} \right. \end{aligned}$$
(3.51)

where \(\mu _i\) and \(\lambda _{i,i}\) for \(1\le i\le k\) are constants that satisfy the relations:

$$\begin{aligned} \lambda _{i,i}+(n-i)\mu _i=0,\quad \ 1\le i\le k. \end{aligned}$$
(3.52)

Moreover, for \(1\le i\le k\) and \((p',q')\) around \((p,q)\), \(\lambda _{i,i}\) is the maximum of \(f_{(p',q')}\) defined on

$$\begin{aligned} \left\{ u\in T_{p'}M_1^{n_1} \mid \langle u,u\rangle =1, u\perp X_1,\ldots ,X_{i-1}\right\} . \end{aligned}$$

The aim of the second step is then to verify the assertion of Lemma 3.5 for \(i=k+1\). To do so, we have to show that there exists a local orthonormal frame of vector fields \(\{\tilde{X}_i\}_{1\le i\le n_1}\) of \(TM_1^{n_1}\) given by

$$\begin{aligned} \tilde{X}_1=X_1,\ldots ,\tilde{X}_k=X_k;\ \ \tilde{X}_l=\sum _{t=k+1}^{n_1}T^t_lX_t,\ k+1\le l\le n_1, \end{aligned}$$

such that \(T=(T_l^t)_{k+1\le l,t\le n_1}\) is an orthogonal matrix, and the operator \(A_J\) takes the following form:

$$\begin{aligned} \left\{ \begin{aligned}&A_{J\tilde{X}_1}\tilde{X}_1=\lambda _{1,1} \tilde{X}_1,\\&A_{J\tilde{X}_i}\tilde{X}_i=\mu _1 \tilde{X}_1+\cdots +\mu _{i-1} \tilde{X}_{i-1}+\lambda _{i,i}\tilde{X}_i,\ 2\le i\le k+1,\\&A_{J\tilde{X}_i}\tilde{X}_j=\mu _i \tilde{X}_j,\ 1\le i\le k+1, \ i+1\le j\le n_1, \\&A_{J\tilde{X}_i}Y=\mu _iY,\ 1\le i\le k+1,\ Y\in TM_2^{n_2}, \end{aligned} \right. \end{aligned}$$
(3.53)

where \(\mu _i\) and \(\lambda _{i,i}\) for \(1\le i\le k+1\) are constants and satisfy the relations

$$\begin{aligned} \lambda _{i,i}+(n-i)\mu _i=0,\ \ 1\le i\le k+1. \end{aligned}$$
(3.54)

Moreover, for \(1\le i\le k+1\) and \((p',q')\) around (pq), \(\lambda _{i,i}\) is the maximum of \(f_{(p',q')}\) defined on

$$\begin{aligned} \{u\in T_{p'}M_1^{n_1} \mid \langle u,u\rangle =1, u\perp \tilde{X}_1,\ldots ,u\perp \tilde{X}_{i-1}\}. \end{aligned}$$

Similarly to the first step, the proof of the above conclusion will also be divided into the verification of four claims.

Claim II-(1) For any \((p,q)\in M_1^{n_1}\times M_2^{n_2}\), there exist an orthonormal basis \(\{{\bar{X}}_i\}_{1\le i\le n_1}\) of \(T_pM_1^{n_1}\) and real numbers \(\lambda _{k+1,k+1}>0\), \(\lambda _{k+1,k+2}=\cdots =\lambda _{k+1,n_1}\) and \(\mu _{k+1}\), such that the following relations hold:

$$\begin{aligned} \left\{ \begin{aligned}&A_{J{\bar{X}}_1}{\bar{X}}_1=\lambda _{1,1} {\bar{X}}_1,\\&A_{J{\bar{X}}_i}{\bar{X}}_i=\mu _1 {\bar{X}}_1+\cdots +\mu _{i-1} {\bar{X}}_{i-1}+\lambda _{i,i}{\bar{X}}_i,\ 2\le i\le k+1,\\&A_{J{\bar{X}}_{k+1}}{\bar{X}}_i=\lambda _{k+1,i}{\bar{X}}_i,\,\ i\ge k+2,\\&A_{J{\bar{X}}_{k+1}}Y=\mu _{k+1}Y,\ \ Y\in T_qM_2^{n_2}. \end{aligned} \right. \end{aligned}$$

Proof of Claim II-(1)

By the induction assumption, we have an orthonormal basis \(\{X_i\}_{1\le i\le n_1}\) such that (3.51) and (3.52) hold. We first take \({\bar{X}}_1=X_1(p,q),\ldots ,{\bar{X}}_k=X_k(p,q)\). Then putting

$$\begin{aligned} V_k=\{u\in T_{p}M_1^{n_1} \mid u\perp {\bar{X}}_1,\dots ,u\perp {\bar{X}}_k\}, \end{aligned}$$

we will show that the function \(f_{(p,q)}\), restricted to \(U_pM_1^{n_1}\cap V_k\), is not identically zero.

Indeed, suppose on the contrary that \(f_{(p,q)}\,|_{V_k}=0\). Then, letting \(\{u_i\}_{k+1\le i\le n_1}\) be an orthonormal basis of \(V_k\), we have \(\langle A_{Ju_i}u_{j},u_l\rangle =0\) for \(k+1\le i,j,l\le n_1\). Taking in (2.4) that \(X=u_{k+2}\), \(Y=Z=u_{k+1}\), by the induction assumption and Lemma 3.2, we obtain \(\mu _1^2+\cdots +\mu _{k}^2+\tilde{c}=0\). This contradicts the fact that \({\tilde{c}}>0\).

Now, we can choose \({\bar{X}}_{k+1}\) such that \(f_{(p,q)}\), restricted on \(U_pM_1^{n_1}\cap V_k\), attains its maximum with value

$$\begin{aligned} \lambda _{k+1,k+1}:=\langle A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1},{\bar{X}}_{k+1}\rangle >0. \end{aligned}$$

Consider the self-adjoint operator \({\mathcal {A}}:\ V_k\rightarrow V_k\) defined by

$$\begin{aligned} {\mathcal {A}}(X)=A_{J{\bar{X}}_{k+1}}X- \sum _{i=1}^k\langle A_{J{\bar{X}}_{k+1}}X,{\bar{X}}_i \rangle {\bar{X}}_i. \end{aligned}$$

It is easy to see that \({\mathcal {A}}({\bar{X}}_{k+1})=\lambda _{k+1,k+1}{\bar{X}}_{k+1}\). Hence, by the assumption of induction, we have

$$\begin{aligned} \lambda _{k+1,k+1}{\bar{X}}_{k+1}=&A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1}-\sum _{i=1}^k \langle A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1},{\bar{X}}_i\rangle {\bar{X}}_i\\ =&A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1}-\sum _{i=1}^k\langle A_{J{\bar{X}}_{i}}{\bar{X}}_{k+1},{\bar{X}}_{k+1}\rangle {\bar{X}}_i\\ =&A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1}-\sum _{i=1}^k\mu _i {\bar{X}}_i. \end{aligned}$$

Next, we choose \({\bar{X}}_{k+2}, \ldots ,{\bar{X}}_{n_1}\) as the remaining unit eigenvectors of \({\mathcal {A}}\), with corresponding eigenvalues \(\lambda _{k+1,k+2}\), \(\ldots \), \(\lambda _{k+1,n_1}\), respectively. Thus, by Lemma 3.2 we obtain \(\mu _{k+1}\) such that the following relations hold:

$$\begin{aligned} \left\{ \begin{aligned} \begin{aligned}&A_{J{\bar{X}}_{k+1}}{\bar{X}}_{k+1}=\mu _1 {\bar{X}}_1+\cdots +\mu _{k} {\bar{X}}_{k}+\lambda _{k+1,k+1}{\bar{X}}_{k+1},\\&A_{J{\bar{X}}_{k+1}}{\bar{X}}_i=\lambda _{k+1,i}{\bar{X}}_i,\,\ k+2\le i\le n_1,\\&A_{J{\bar{X}}_{k+1}}Y=\mu _{k+1}Y,\ \ Y\in T_qM_2^{n_2}. \end{aligned} \end{aligned} \right. \end{aligned}$$
(3.55)

Now, taking in (2.4) that \(X=Z={\bar{X}}_{k+1}\) and \(Y={\bar{X}}_j\) with \(j\ge k+2\), combining with (2.12), we can obtain

$$\begin{aligned} \lambda _{k+1,j}^2-\lambda _{k+1, k+1}\lambda _{k+1,j}-\tilde{c}-(\mu _1^2+\cdots +\mu _{k}^2)=0. \end{aligned}$$
(3.56)

It follows that

$$\begin{aligned} \begin{aligned} \lambda _{k+1,k+2}&=\cdots =\lambda _{k+1,n_1}\\&=\frac{1}{2}\Big (\lambda _{k+1,k+1}-\sqrt{\lambda _{k+1,k+1}^2 +4(\tilde{c}+\mu _1^2+\cdots +\mu _{k-1}^2+\mu _k^2)}\,\Big ). \end{aligned}\nonumber \\ \end{aligned}$$
(3.57)

On the other hand, taking in (2.4) that \(X=Z={\bar{X}}_{k+1}\), and \(Y\in T_qM_2^{n_2}\) be a unit vector, combining with (2.12), we can obtain

$$\begin{aligned} \mu _{k+1}^2-\lambda _{k+1,k+1}\mu _{k+1}-\tilde{c}-(\mu _1^2+\cdots +\mu _{k}^2)=0. \end{aligned}$$
(3.58)

Hence

$$\begin{aligned} \mu _{k+1}=\frac{1}{2}\Big (\lambda _{k+1,k+1}+\varepsilon _{k+1} \sqrt{\lambda _{k+1,k+1}^2+4(\tilde{c}+\mu _1^2+\cdots +\mu _{k}^2)}\Big ), \end{aligned}$$
(3.59)

where \(\varepsilon _{k+1}=\pm 1\). Then, using that \({ trace}\,A_{J{\bar{X}}_{k+1}}=0\), we get \(n_1-n_2\varepsilon _{k+1}-k-1>0\) and

$$\begin{aligned}&\lambda _{k+1,k+1}=2\sqrt{\tfrac{\tilde{c}+\mu _1^2+\cdots +\mu _{k-1}^2+\mu _k^2}{\big (\tfrac{n_1+n_2-k+1}{n_1-n_2\varepsilon _{k+1}-k-1}\big )^2-1}}. \end{aligned}$$
(3.60)

By the assumption that \(\mu _1,\ldots ,\mu _k\) are constants we see that, as claimed, \(\lambda _{k+1,k+2}= \cdots =\lambda _{k+1,n_1}\) and \(\mu _{k+1}\) are also constants. \(\square \)
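The trace computation leading from (3.57) and (3.59) to (3.60) can be checked symbolically. The following SymPy sketch (our own verification; `s` abbreviates \(\tilde{c}+\mu_1^2+\cdots+\mu_k^2\) and the other names are ad hoc) confirms both the collected form of the trace and the squared relation behind (3.60):

```python
# Symbolic check of the step (3.57), (3.59), trace = 0  =>  (3.60).
import sympy as sp

n1, n2, k, s, lam = sp.symbols('n1 n2 k s lam', positive=True)
eps = sp.Symbol('eps')                    # epsilon_{k+1} = +/- 1
R = sp.sqrt(lam**2 + 4*s)

small = (lam - R)/2                       # lambda_{k+1,j}, j >= k+2, from (3.57)
mu = (lam + eps*R)/2                      # mu_{k+1}, from (3.59)
trace = lam + (n1 - k - 1)*small + n2*mu  # trace of A_{J Xbar_{k+1}}

collected = (n1 + n2 - k + 1)*lam/2 - (n1 - eps*n2 - k - 1)*R/2
assert sp.expand(trace - collected) == 0

# Setting trace = 0 and squaring yields exactly (3.60):
N, D = n1 + n2 - k + 1, n1 - eps*n2 - k - 1
l_sol = 2*sp.sqrt(s/((N/D)**2 - 1))
assert sp.simplify(N**2*l_sol**2 - D**2*(l_sol**2 + 4*s)) == 0
```
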

Claim II-(2) \(\lambda _{k+1,k+2}=\cdots =\lambda _{k+1,n_1}=\mu _{k+1}\) and \(\lambda _{k+1,k+1}+(n-k-1)\mu _{k+1}=0.\)

Proof of Claim II-(2)

From (3.57) and (3.59), the first assertion is equivalent to showing that \(\varepsilon _{k+1}=-1\). Suppose, on the contrary, that \(\varepsilon _{k+1}=1\). Then we have

$$\begin{aligned} \mu _{k+1}\lambda _{k+1,i}=-(\tilde{c}+\mu _1^2+\cdots +\mu _{k}^2),\quad \ i\ge k+2. \end{aligned}$$
(3.61)

Similarly to obtaining (3.60), now we have

$$\begin{aligned} n_1-n_2-k-1>0 \end{aligned}$$
(3.62)

and

$$\begin{aligned} \lambda _{k+1,k+1}=2\sqrt{\tfrac{\tilde{c}+\mu _1^2+\cdots +\mu _{k}^2}{\big (\tfrac{n_1+n_2-k+1}{n_1-n_2-k-1}\big )^2-1}}. \end{aligned}$$
(3.63)

Put

$$\begin{aligned} V_{k+1}=\{u\in T_{p}M_1^{n_1} \mid u\perp {\bar{X}}_1,\dots ,u\perp {\bar{X}}_{k+1}\}. \end{aligned}$$

Then, a similar argument as in the proof of Claim II-(1) shows that the function \(f_{(p,q)}\), restricted to \(U_pM_1^{n_1}\cap V_{k+1}\), is not identically zero.

Now, by a process entirely similar to that in the proof of Claim II-(1), we can choose another orthonormal basis \(\{X'_i\}_{1\le i\le n_1}\) of \(T_pM_1^{n_1}\) with \(X'_j={\bar{X}}_j\) for \(1\le j\le k+1\) such that \(f_{(p,q)}\), restricted to \(U_pM_1^{n_1}\cap V_{k+1}\), attains its maximum \(\lambda _{k+2,k+2}>0\) at \(X'_{k+2}\), so that \(\lambda _{k+2,k+2} =\langle A_{JX'_{k+2}}X'_{k+2},X'_{k+2}\rangle \).

As before, we define a self-adjoint operator \({\mathcal {A}}:\ V_{k+1}\rightarrow V_{k+1}\) by

$$\begin{aligned} {\mathcal {A}}(X)=A_{JX'_{k+2}}X- \sum _{i=1}^{k+1}\langle A_{JX'_{k+2}}X,X'_i \rangle X'_i. \end{aligned}$$

Then we have \({\mathcal {A}}(X'_{k+2})=\lambda _{k+2,k+2}X'_{k+2}\). As before we will choose \(X'_{k+3}, \ldots ,X'_{n_1}\) as the remaining unit eigenvectors of \({\mathcal {A}}\), with corresponding eigenvalues \(\lambda _{k+2,k+3}\), \(\ldots \), \(\lambda _{k+2,n_1}\), respectively. In this way, we can prove that

$$\begin{aligned} \left\{ \begin{aligned}&A_{JX'_{k+2}}X'_{k+2}=\mu _1 X'_1+\cdots +\mu _{k}X'_{k}+\lambda _{k+1,k+2}X'_{k+1}+\lambda _{k+2,k+2}X'_{k+2},\\&A_{JX'_{k+2}}X'_i=\lambda _{k+2,i}X'_i,\ \ k+3\le i\le n_1. \end{aligned} \right. \nonumber \\ \end{aligned}$$
(3.64)

Taking \(X=Z=X'_{k+2}\) and \(Y=X'_i\) for \(k+3\le i\le n_1\) in (2.4) and using (2.12), we obtain

$$\begin{aligned} \lambda _{k+2,i}^2-\lambda _{k+2,k+2}\lambda _{k+2,i} -\tilde{c}-(\mu _1^2+\cdots +\mu _{k}^2+\lambda _{k+1,i}^2)=0,\ \ k+3\le i\le n_1.\nonumber \\ \end{aligned}$$
(3.65)

Noting that for \(k+3\le i\le n_1\) we have \(\lambda _{k+2,k+2}\ge 2\lambda _{k+2,i}\), it follows from (3.65) that

$$\begin{aligned} \lambda _{k+2,i}&=\frac{1}{2}\Big (\lambda _{k+2,k+2}-\sqrt{\lambda _{k+2,k+2}^2 +4(\tilde{c}+\mu _1^2+\cdots +\mu _{k}^2+\lambda _{k+1,i}^2)}\,\Big ),\nonumber \\&\quad i\ge k+3. \end{aligned}$$
(3.66)

Similarly, let \(X=Z=X'_{k+2}\) and \(Y\in T_qM_2^{n_2}\) be a unit vector in (2.4). Using (2.12) we get

$$\begin{aligned} \mu _{k+2}^2-{\mu _{k+2}}\lambda _{k+2,k+2}-\tilde{c} -(\mu _1^2+\cdots +\mu _{k}^2+\lambda _{k+1,i}\mu _{k+1})=0,\ i\ge k+2.\nonumber \\ \end{aligned}$$
(3.67)

Combining (3.61) and (3.67) we obtain

$$\begin{aligned} {\mu ^2_{k+2}}-{\mu _{k+2}}\lambda _{k+2,k+2}=0, \end{aligned}$$
(3.68)

and therefore it holds that

$$\begin{aligned} \mu _{k+2}=\frac{1}{2}(\lambda _{k+2,k+2}+\varepsilon _{k+2}\lambda _{k+2,k+2}),\ \varepsilon _{k+2}=\pm 1. \end{aligned}$$
(3.69)

Then, using \({ trace}\,A_{JX'_{k+2}}=0\), we can get \(n_1-k-2>0\) and

$$\begin{aligned} \lambda _{k+2,k+2}=2\sqrt{\tfrac{\tilde{c}+\mu _1^2+\cdots +\mu _{k}^2+\lambda _{k+1,i}^2}{\big (\tfrac{n_1+n_2-k+\varepsilon _{k+2}n_2}{n_1-k-2}\big )^2-1}}\,,\ \ i\ge k+2. \end{aligned}$$
(3.70)

Given (3.62), we have the following calculations:

$$\begin{aligned} \begin{aligned} \frac{n_1+n_2-k+1}{n_1-n_2-k-1}-\frac{n_1+n_2+\varepsilon _{k+2}n_2-k}{n_1-k-2}&\ge \frac{n_1+n_2-k+1}{n_1-n_2-k-1}-\frac{n_1+2n_2-k}{n_1-k-2}\\&=\frac{2(n_2+1)(n_2-1)}{(n_1-n_2-k-1)(n_1-k-2)}>0. \end{aligned} \end{aligned}$$
(3.71)

Then, by (3.63) and (3.70), we get \(\lambda _{k+2,k+2}>\lambda _{k+1,k+1}\), which is a contradiction.
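The algebraic identity behind (3.71), which for \(k=0\) reduces to the estimate used in the proof of Claim I-(2), can be confirmed symbolically; the short SymPy sketch below (our own check, with ad-hoc names) does so:

```python
# Symbolic check of the identity in (3.71); our own notation.
import sympy as sp

n1, n2, k = sp.symbols('n1 n2 k')
lhs = (n1 + n2 - k + 1)/(n1 - n2 - k - 1) - (n1 + 2*n2 - k)/(n1 - k - 2)
rhs = 2*(n2 + 1)*(n2 - 1)/((n1 - n2 - k - 1)*(n1 - k - 2))
assert sp.simplify(lhs - rhs) == 0
```

Positivity of the right-hand side then uses \(n_2\ge 2\) together with (3.62).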

Therefore, \(\varepsilon _{k+1}=-1\) and \(\lambda _{k+1,k+2}=\cdots =\lambda _{k+1,n_1}=\mu _{k+1}\), as claimed.

Finally, from \({ trace}\ A_{J{\bar{X}}_{k+1}}=0\), we get

$$\begin{aligned} \lambda _{k+1,k+1}+(n-k-1)\mu _{k+1}=0. \end{aligned}$$

This completes the verification of Claim II-(2). \(\square \)

Claim II-(3) Let \(\{X_i\}_{1\le i\le n_1}\) be the local orthonormal vector fields of M which form a basis for the first component as in the assumption of induction. If a unit vector field V of \(TM_1^{n_1}\setminus \mathrm{span}\{X_1,\ldots ,X_k\}\) has the property that \(A_{JV}V=\lambda V+\mu _1X_1+\cdots +\mu _kX_k\), then the function \(\lambda \) takes only finitely many values.

Proof of Claim II-(3)

We first carry out the discussion at an arbitrary fixed point \((p,q)\). Let \(X'_1=X_1,\ldots ,X'_k=X_k\), \(X'_{k+1}:=V\) and \(\lambda _{k+1,k+1}:=\lambda \).

Put \(V_k=\{u\in T_{p}M_1^{n_1} \mid u\perp X_1,\ldots ,u\perp X_k\}\). Define \({\mathcal {A}}: V_k\rightarrow V_k\) by

$$\begin{aligned} {\mathcal {A}}(X)=A_{JV}X-\sum _{i=1}^k\langle A_{JV}X, X_i\rangle X_i. \end{aligned}$$

It is easily seen that \({\mathcal {A}}\) is a self-adjoint transformation and that \({\mathcal {A}}(V)=\lambda V\). Thus, we can choose an orthonormal basis \(\{X'_i\}_{k+1\le i\le n_1}\) of \(V_k\), such that \({\mathcal {A}}(X'_i)=\lambda _{k+1,i}X'_i\) for \(k+2\le i\le n_1\). Then, as before we see that (3.56) holds, and thus there exists an integer \(n_{1,k+1}\), \(0\le n_{1,k+1}\le n_1-(k+1)\), such that, after renumbering the basis if necessary, we have

$$\begin{aligned} \left\{ \begin{aligned}&\lambda _{k+1,k+2}=\cdots =\lambda _{k+1,n_{1,k+1}+k+1}\\&\ =\frac{1}{2}\Big (\lambda _{k+1,k+1}+\sqrt{\lambda _{k+1,k+1}^2 +4(\tilde{c}+\mu _1^2+\cdots +\mu _{k-1}^2+\mu _k^2)}\Big ),\\&\lambda _{k+1, n_{1,k+1}+k+2}=\cdots =\lambda _{k+1,n_1}\\&\ =\frac{1}{2}\Big (\lambda _{k+1,k+1}-\sqrt{\lambda _{k+1,k+1}^2 +4(\tilde{c}+\mu _1^2+\cdots +\mu _{k-1}^2+\mu _k^2)}\Big ). \end{aligned}\right. \end{aligned}$$
(3.72)

Then, using \({ trace}\,A_{JX'_{k+1}}=0\), we can show that

$$\begin{aligned} \begin{aligned}&\lambda _{k+1,k+1}=2\sqrt{\tfrac{\tilde{c}+\mu _1^2+\cdots +\mu _{k-1}^2+\mu _k^2}{\big (\tfrac{n_1+n_2-k+1}{2n_{1,k+1}-n_1+n_2\varepsilon _{k+1}+k+1}\big )^2-1}}. \end{aligned} \end{aligned}$$
(3.73)

Finally, noticing that by assumption \(\mu _1,\ldots ,\mu _k\) are constants, and that the set

$$\begin{aligned} \big \{n_{1,k+1}(p)\,|\,p\in M_1^{n_1}\big \} \end{aligned}$$

is finite, we get the assertion that \(\lambda =\lambda _{k+1,k+1}\) takes only finitely many values. \(\square \)
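The trace computation behind (3.73) can again be checked symbolically. The following SymPy sketch (our own verification; `t` stands for \(n_{1,k+1}\) and `s` for \(\tilde{c}+\mu_1^2+\cdots+\mu_k^2\)) confirms the collected coefficients of \(\lambda_{k+1,k+1}\) and of the square root:

```python
# Symbolic check of the trace computation leading to (3.73); our notation.
import sympy as sp

n1, n2, k, t, s, lam, eps = sp.symbols('n1 n2 k t s lam eps')
R = sp.sqrt(lam**2 + 4*s)

trace = (lam
         + t*(lam + R)/2                  # eigenvalues from the first line of (3.72)
         + (n1 - k - 1 - t)*(lam - R)/2   # eigenvalues from the second line
         + n2*(lam + eps*R)/2)            # mu_{k+1} on TM_2, cf. (3.59)

collected = ((n1 + n2 - k + 1)*lam/2
             + (2*t - n1 + eps*n2 + k + 1)*R/2)
assert sp.expand(trace - collected) == 0
```

Setting the collected expression to zero and squaring gives exactly (3.73).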

Claim II-(4) Let \(\{X_i\}_{1\le i\le n_1}\) be the local vector fields on M as in the assumption of induction, and let \(V_k=\{u\in T_pM_1^{n_1} \mid \langle u, u\rangle =1, u\perp X_1, \dots , u\perp X_k\}\). The unit vector \({\bar{X}}_{k+1}\in T_pM_1^{n_1}\) determined in Claim II-(1) can be extended differentiably to a unit vector field, denoted by \(\tilde{X}_{k+1}\), in a neighbourhood U of \((p,q)\), such that for each \((p',q')\in U\), \(f_{(p',q')}\) defined on \(V_k\) attains the absolute maximum at \(\tilde{X}_{k+1}{(p',q')}\).

Proof of Claim II-(4)

Let \(\{E_{k+1},\ldots ,E_{n_1}\}\) be arbitrary differentiable orthonormal vector fields of \(V_k\) defined on a neighbourhood \(U'\) of \((p,q)\) such that \(E_{k+1}(p,q)={\bar{X}}_{k+1}\). Then, we define a function \(\gamma \) by

$$\begin{aligned} \begin{aligned}&\gamma :\ {\mathbb {R}}^{n_1-k}\times U'\rightarrow {\mathbb {R}}^{n_1-k},\\&\quad (a_{k+1}, \ldots ,a_{n_1},(p',q'))\mapsto (b_{k+1},\ldots ,b_{n_1}), \end{aligned} \end{aligned}$$

where \(b_l=\sum _{i,j=k+1}^{n_1}a_ia_j\langle A_{JE_i}E_j,E_l\rangle -\lambda _{k+1,k+1}a_l\), \(k+1\le l\le n_1\). Using the fact that \(f_{(p,q)}\) attains an absolute maximum at \(E_{k+1}{(p,q)}\), so that

$$\begin{aligned} \langle A_{JE_{k+1}}E_{l},E_{l}\rangle |_{(p,q)}=\lambda _{k+1,l},\ \ l\ge k+1, \end{aligned}$$

we then obtain that

$$\begin{aligned} \begin{aligned} \tfrac{\partial b_l}{\partial a_m}(1,0,\ldots ,0,(p,q))&=2 \langle A_{JE_{k+1}{(p,q)}}E_m{(p,q)},E_l{(p,q)}\rangle -\lambda _{k+1,k+1}\delta _{lm}\\&=\left\{ \begin{array}{ll} 0,&{}\quad \ \mathrm{if}\ l\ne m ,\\ \lambda _{k+1,k+1},&{}\quad \ \mathrm{if}\ l= m=k+1 ,\\ 2\lambda _{k+1,l}-\lambda _{k+1,k+1},&{}\quad \ \mathrm{if}\ l= m\ge k+2. \end{array} \right. \end{aligned} \end{aligned}$$

As \(\tilde{c}>0\), then from (3.57) we obtain that \(2\lambda _{k+1,l}-\lambda _{k+1,k+1}\ne 0\). Hence, similar to the proof of Claim I-(4), the implicit function theorem shows that there exist differentiable functions \(a_{k+1},\ldots , a_{n_1}\), defined on a neighbourhood U of (pq), such that the local vector field V, defined by

$$\begin{aligned} V=a_{k+1}E_{k+1}+\cdots +a_{n_1}E_{n_1}, \end{aligned}$$

has the property \(V(p,q)={\bar{X}}_{k+1}\) and satisfies

$$\begin{aligned} A_{JV}V=\lambda _{k+1, k+1}V+\mu _1\langle V,V\rangle X_1+\cdots +\mu _k \langle V,V\rangle X_k. \end{aligned}$$

Hence

$$\begin{aligned} A_{J\tfrac{V}{\sqrt{\langle V,V \rangle }}} \tfrac{V}{\sqrt{\langle V,V \rangle }} =\tfrac{\lambda _{k+1,k+1}}{\sqrt{\langle V,V \rangle }} \tfrac{V}{\sqrt{\langle V,V \rangle }}+\mu _1X_1+\cdots +\mu _kX_k. \end{aligned}$$
(3.74)

According to Claim II-(3), the function \(\tfrac{\lambda _{k+1,k+1}}{\sqrt{\langle V,V \rangle }}\) can take a finite number of values. On the other hand, \(\tfrac{\lambda _{k+1,k+1}}{\sqrt{\langle V, V \rangle }}\) is continuous and \(\langle V,V \rangle {(p,q)}=1\). Thus \(\langle V,V \rangle =1\) holds identically. Let \(\tilde{X}_{k+1}:=V\). Then (3.74) and \(\langle V,V \rangle =1\) imply that for any \((p',q')\in U\), \(f_{(p',q')}\) defined on \(V_k(p',q')\) attains an absolute maximum at \(\tilde{X}_{k+1}(p',q')\). \(\square \)

Finally, we choose vector fields \(\tilde{X}_1=X_1,\ldots ,\tilde{X}_k=X_k\) and \(\tilde{X}_{k+2},\ldots ,\tilde{X}_{n_1}\) such that \(\{\tilde{X}_1,\tilde{X}_2,\ldots ,\tilde{X}_{n_1}\}\) is a local orthonormal frame of \(TM_1^{n_1}\). Then, combining with Lemma 3.2, we immediately complete the second step of the induction.

Accordingly, we have completed the proof of Lemma 3.5. \(\square \)

In the following part, we aim to give the explicit parametrization of \(\psi :\,M^n\rightarrow {\tilde{M}}^n(4\tilde{c})\). For this we will use Theorems 2.1 and 2.2 from [15].

Firstly, we will prove that the submanifold \(M^n\) has parallel second fundamental form. We will do this by direct computations: for the local orthonormal frame \(\{X_i\}_{1\le i\le n_1}\) of \(M_1^{n_1}\) as determined in Lemma 3.5, we will use the Codazzi equation in (2.5)–(2.6) to show that, for each \(1\le i\le n_1\), \(X_i\) is a parallel vector field. Then we will further prove that \(\psi :\,M^n\rightarrow {\tilde{M}}^n(4\tilde{c})\) has parallel second fundamental form.

Lemma 3.6

Let \(\{X_1,\ldots ,X_{n_1}\}\) be the local orthonormal frame of M, as determined in Lemma 3.5, and let \(\{Y_1,\ldots ,Y_{n_2}\}\) be local vector fields on M which form a basis for the second component. Since the vector fields \(Y_i\) can be freely chosen, we pick them in such a way that \(Y_i(p',q')=Y_i(p,q')\), i.e. the \(Y_i\) depend only on the second component. Then

$$\begin{aligned} \nabla X_i=0,\quad \ 1\le i\le n_1. \end{aligned}$$

Proof

We will proceed by induction on the subscript of \(X_i\) and prove separately that \( \nabla _X X_i=0,\ X\in TM_1^{n_1}\) and \(\nabla _Y X_i=0,\ Y\in TM_2^{n_2},\) where \( 1\le i\le n_1.\)

Let us check first that \( \nabla _X X_i=0,\ X\in TM_1^{n_1}\).

For \(i\ge 2\), by using (2.3) and (3.28), we have

$$\begin{aligned} \left\{ \begin{aligned}&J(\nabla h)(X_i,X_1,X_1)=(2\mu _1-\lambda _{1,1})\nabla _{X_i} X_1,\\&J(\nabla h) (X_1,X_i,X_1)=-\mu _1\nabla _{X_1} X_i+A_{JX_1}(\nabla _{X_1}X_i)+A_{JX_i}(\nabla _{X_1}X_1). \end{aligned} \right. \end{aligned}$$

Then the Codazzi equations \(J(\nabla h)(X_i,X_1,X_1)=J(\nabla h) (X_1,X_i,X_1)\) give that

$$\begin{aligned} (2\mu _1-\lambda _{1,1})\nabla _{X_i} X_1=-\mu _1\nabla _{X_1} X_i+A_{JX_1}(\nabla _{X_1}X_i)+A_{JX_i}(\nabla _{X_1}X_1). \end{aligned}$$
(3.75)

Taking the component in the direction of \(X_1\) in (3.75) we can get \(\nabla _{X_1} X_1=0\). Substituting \(\nabla _{X_1} X_1=0\) into (3.75), and then taking the component in the direction of \(X_i\), we get \(\langle \nabla _{X_i} X_1,X_k\rangle =0\) for \(2\le i,k\le n_1\).

The above facts immediately verify for the first step of induction that

$$\begin{aligned} \nabla _X X_1=0,\quad X\in TM_1^{n_1}. \end{aligned}$$

Next, assume by induction that for a fixed \(j\ge 2\) it holds

$$\begin{aligned} \nabla _X X_k=0,\quad X\in TM_1^{n_1},\quad k=1,\ldots ,j-1. \end{aligned}$$
(3.76)

We claim that \(\nabla _X X_j=0\) for all \(X\in TM_1^{n_1}\). The proof of the claim will be given in four cases.

(1) From the induction assumption and the fact that \(\langle X_i,X_l \rangle =\delta _{il}\), we get

$$\begin{aligned} \langle \nabla _{X_i}X_j,X_k\rangle =-\langle \nabla _{X_i}X_k,X_j\rangle =0,\quad 1\le i\le n_1,\quad k\le j. \end{aligned}$$

(2) For \(i\le j-1\), by the induction assumption we have

$$\begin{aligned} \begin{aligned} J(\nabla h) (X_i,X_j,X_j)&=-\nabla _{X_i}A_{JX_j}X_j+2A_{JX_j}\nabla _{X_i}X_j\\&=-\lambda _{j,j}\nabla _{X_i}X_j+2A_{JX_j}\nabla _{X_i}X_j\\&=(2\mu _j-\lambda _{j,j})\nabla _{X_i}X_j;\\ J(\nabla h) (X_j,X_i,X_j)&=-\nabla _{X_j}A_{JX_i} X_j+A_{JX_j}\nabla _{X_j}X_i+A_{JX_i}\nabla _{X_j}X_j\\&=-\mu _i\nabla _{X_j}X_j+A_{JX_j}\nabla _{X_j}X_i+A_{JX_i}\nabla _{X_j}X_j\\&=-\mu _i \nabla _{X_j}X_j +A_{JX_i}\nabla _{X_j}X_j. \end{aligned} \end{aligned}$$

Then, by \(J(\nabla h) (X_i,X_j,X_j)=J(\nabla h) (X_j,X_i,X_j)\), we immediately get

$$\begin{aligned} \langle \nabla _{X_i}X_j, X_{j_0}\rangle =0, \quad \ i\le j-1,\quad j+1\le j_0\le n_1. \end{aligned}$$

(3) For \(j+1\le j_0\le n_1\), similar and direct calculations give that

$$\begin{aligned} \begin{aligned} J(\nabla h) (X_{j_0},X_j,X_j)&=-\nabla _{X_{j_0}}A_{JX_j}X_j+2A_{JX_j}\nabla _{X_{j_0}}X_j\\&=-\lambda _{j,j}\nabla _{X_{j_0}}X_j+2A_{JX_j}\nabla _{X_{j_0}}X_j\\&=(2\mu _j-\lambda _{j,j})\nabla _{X_{j_0}}X_j;\\ J(\nabla h) (X_j,X_{j_0},X_j)&=-\nabla _{X_j}A_{JX_{j_0}}X_j+A_{JX_j}\nabla _{X_j}X_{j_0}+A_{JX_{j_0}}\nabla _{X_j}X_j\\&=-\mu _j \nabla _{X_j}X_{j_0}+A_{JX_j}\nabla _{X_j}X_{j_0}+A_{JX_{j_0}}\nabla _{X_j}X_j. \end{aligned} \end{aligned}$$

By \(J(\nabla h) (X_{j_0},X_j,X_j)=J(\nabla h) (X_j,X_{j_0},X_j)\) and taking the component in the direction of \(X_{j}\), we obtain that

$$\begin{aligned} \langle \nabla _{X_j}X_j, X_{j_0}\rangle =0,\quad \ j+1\le j_0\le n_1. \end{aligned}$$

(4) For \(i\ge j+1\), by similar calculations for both sides of

$$\begin{aligned} J(\nabla h)(X_i,X_{j},X_j)=J(\nabla h)(X_j, X_{i},X_j), \end{aligned}$$

and taking the component in the direction of \(X_{j_0}\) for \(j_0\ge j+1\), we can get

$$\begin{aligned} \langle \nabla _{X_{i}} X_j,X_{j_0} \rangle =0,\quad \ i\ge j+1,\quad j_0\ge j+1. \end{aligned}$$

Summing up the above four cases, we finally get the assertion

$$\begin{aligned} \nabla _X X_j=0,\ X\in TM_1^{n_1}. \end{aligned}$$

Finally, we must prove that \(\nabla _Y X_i=0\), \(Y\in TM_2^{n_2}\), \(1\le i\le n_1\). The proof follows the same steps as before. For instance, we start with the Codazzi equation \(J(\nabla h)(X_i, Y_1,X_1)=J(\nabla h)(Y_1,X_i,X_1)\), \(i>1\). Taking the inner product once with \(X_1\) and once with \(Y_j\), \(j\le n_2\), we get that \(\nabla _{Y_1}X_1=0\). Then \(\nabla _{Y_i}X_1=0\), \(i>1\), follows similarly from \(J(\nabla h)(Y_i, X_1,X_1)=J(\nabla h)(X_1,Y_i,X_1)\), \(i>1\). We then complete the proof of this part by following the same steps as for \(\nabla _X X_i=0\), \(X\in TM_1^{n_1}\).

By induction we have completed the proof of Lemma 3.6. \(\square \)

Lemma 3.7

Under the condition of Theorem 3.1, the submanifold \(\psi : M^n\rightarrow \tilde{M}^n(4\tilde{c})\) has parallel second fundamental form: \(\nabla h=0\).

Proof

We have that \(M^n=M_1^{n_1}(c_1)\times M_2^{n_2}(c_2)\) with \(c_1=0\), \(c_2>0\) and \(\tilde{c}=1\). Let \(\{X_i\}_{1\le i\le n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) be the local orthonormal frames of vector fields of \(M_1^{n_1}\) and \(M_2^{n_2}\), respectively, as described in Lemma 3.5. Consider arbitrary \(X\in TM_1^{n_1}\) and \(Y\in TM_2^{n_2}\). We will make use of the Codazzi equations (2.5), (2.6), Eqs. (3.17), (3.28) and the fact that \(\nabla X_i=0\), \(1\le i\le n_1\). We need, additionally, to know that \(\nabla _{X_i}Y=0\), for \(i<n_1\). For every \(Y_j\) chosen in the basis of \(T_qM_2^{n_2}\), we take its horizontal lift to \(T_{(p,q)}(M_1^{n_1}\times M_2^{n_2})\), which we still denote by \(Y_j\). Our setting corresponds now to [20, Proposition 56, p. 89]. Hence, \(\nabla _{X}Y=0\).

Given the symmetries of \(\nabla h\), it is enough to evaluate \(\nabla h(X_k,Y_i,Y_j)\), \(\nabla h(Y_i,X_k,Y_j)\), \(\nabla h(X,X_i,X_j)\), \(\nabla h(Y,Y_i,Y_j)\), \(\nabla h(Y,X_i,X_j)\). By direct calculations we obtain \(\nabla h=0\). \(\square \)

Completion of the Proof of Theorem 3.1

Let \(\{X_i\}_{1\le i\le n_1}\) and \(\{Y_j\}_{1\le j\le n_2}\) be the local orthonormal frames of vector fields of \(M_1^{n_1}\) and \(M_2^{n_2}\), respectively, as described in Lemma 3.5. Now, we consider the two distributions \({\mathcal {D}}_1\) spanned by \(X_1\), and \({\mathcal {D}}_2\) spanned by \(\{X_2,\ldots ,X_{n_1}, Y_1,\ldots ,Y_{n_2}\}\). Given the form of \(A_{JX_1}\) in (3.28), we may apply Theorem 2.1 and obtain that \(\psi : M^n\rightarrow \mathbb {CP}^n(4)\) is locally a Calabi product Lagrangian immersion of an \((n-1)\)-dimensional Lagrangian immersion \(\psi _1:M_{1,1}^{n-1}\rightarrow \mathbb {CP}^{n-1}(4)\) and a point, i.e., \(M^n=I_1\times M_{1,1}^{n-1}\), \(I_1\subset {\mathbb {R}}\). As \(\psi \) is minimal in our case, we may further apply Theorem 2.1 (2). Therefore, we get that

$$\begin{aligned} \mu _1=\pm \frac{1}{\sqrt{n}}\, \text { and }\psi _1 \text { is minimal}, \end{aligned}$$

and \(\psi =\Pi \circ \tilde{\psi }\) for

$$\begin{aligned} \tilde{\psi }(t,p)=\Big (\sqrt{\tfrac{n}{n+1}}e^{i\frac{1}{n+1}t} \tilde{\psi }_1(p), \sqrt{\tfrac{1}{n+1}}e^{-i\frac{n}{n+1}t}\Big ), \quad (t,p)\in I_1\times M_{1,1}^{n-1}, \end{aligned}$$

where \(\Pi : {\mathbb {S}}^{2n+1}(1)\rightarrow \mathbb {CP}^{n}(4)\) is the Hopf fibration and \(\tilde{\psi }_1:M_{1,1}^{n-1}\rightarrow {\mathbb {S}}^{2n-1}(1)\) is the horizontal lift of \(\psi _1\).
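As a quick numerical sanity check (an illustration only, not part of the proof), one can verify that \(\tilde{\psi }\) indeed takes values in the unit sphere: since \(|\tilde{\psi }_1(p)|=1\), the squared norm of \(\tilde{\psi }(t,p)\) equals \(\frac{n}{n+1}+\frac{1}{n+1}=1\). In the sketch below, a random unit vector in \({\mathbb {C}}^n\) stands in for the unspecified value \(\tilde{\psi }_1(p)\):

```python
import numpy as np

def psi_tilde(t, psi1_point, n):
    """Calabi product lift of an (n-1)-dimensional immersion and a point,
    following the displayed formula for tilde{psi}(t, p)."""
    first = np.sqrt(n / (n + 1)) * np.exp(1j * t / (n + 1)) * psi1_point
    last = np.sqrt(1 / (n + 1)) * np.exp(-1j * n * t / (n + 1))
    return np.append(first, last)

n = 4
rng = np.random.default_rng(0)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
v /= np.linalg.norm(v)                      # stand-in for tilde{psi}_1(p)
p = psi_tilde(0.7, v, n)
print(np.isclose(np.linalg.norm(p), 1.0))   # True: the image lies on S^{2n+1}(1)
```

The weights \(\sqrt{n/(n+1)}\) and \(\sqrt{1/(n+1)}\) are exactly those that make the lift horizontal with respect to the Hopf fibration in Theorem 2.1.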

Consider next the immersion \(\psi _1:M_{1,1}^{n-1}\rightarrow \mathbb {CP}^{n-1}(4)\). From (3.28) we may see that the restriction \(A^1_J\) of the shape operator \(A_J\) on \(\{X_2,\ldots ,X_{n_1}, Y_1,\ldots ,Y_{n_2}\}\) (which spans \(TM_{1,1}^{n-1}\)) is defined as

$$\begin{aligned} \left\{ \begin{aligned} A^1_{JX_2}X_2&=\lambda _{2,2}X_2,\\ A^1_{JX_i}X_i&=\mu _2 X_2+\cdots +\mu _{i-1} X_{i-1}+\lambda _{i,i}X_i,\ \ 3\le i\le n_1,\\ A^1_{JX_i}X_j&=\mu _i X_j, \ \ 2\le i\le j-1,\\ A^1_{JX_i}Y_j&=\mu _i Y_j,\ \ 2\le i\le n_1,\ \ 1\le j\le n_2,\\ A^1_{JY_i}Y_j&=\delta _{ij}(\mu _2 X_2+\cdots +\mu _{n_1} X_{n_1}). \end{aligned}\right. \end{aligned}$$
(3.77)

We then apply Theorem 2.1 to \(M_{1,1}^{n-1}\) by identifying \({\mathcal {D}}_1\) with \(span\{X_2\}\) and \({\mathcal {D}}_2\) with \(span\{X_3,\ldots ,X_{n_1}, Y_1, \ldots ,Y_{n_2}\}\), and obtain that \(\psi _1:M_{1,1}^{n-1}\rightarrow \mathbb {CP}^{n-1}(4)\) is locally a Calabi product Lagrangian immersion of an \((n-2)\)-dimensional Lagrangian immersion \(\psi _2 :M_{1,2}^{n-2}\rightarrow \mathbb {CP}^{n-2}(4)\) and a point, thus \(M_{1,1}^{n-1}=I_2\times M_{1,2}^{n-2}\) and \(M^n=I_1\times I_2\times M_{1,2}^{n-2}\), \(I_2\subset {\mathbb {R}}\).

As \(\psi _2\) is minimal, we further apply Theorem 2.1 (2), and we get

$$\begin{aligned} \mu _2=\pm \frac{1}{\sqrt{n-1}},\quad \ \psi _2 \text { is minimal}, \end{aligned}$$

and \(\psi _1=\Pi _1\circ \tilde{\psi }_1\) for

$$\begin{aligned} \tilde{\psi }_1(t,p)=\Big (\sqrt{\tfrac{n-1}{n}} e^{i\tfrac{1}{n}t}\tilde{\psi }_2(p), \sqrt{\tfrac{1}{n}}e^{-i\tfrac{n-1}{n}t}\Big ),\quad \ (t,p)\in I_2\times M_{1,2}^{n-2}, \end{aligned}$$

where \(\Pi _1: {\mathbb {S}}^{2n-1}(1)\rightarrow \mathbb {CP}^{n-1}(4)\) is the Hopf fibration, and \(\tilde{\psi }_2:M_{1,2}^{n-2}\rightarrow {\mathbb {S}}^{2n-3}(1)\) is the horizontal lift of \(\psi _2\).

In this way, we can apply Theorem 2.1 for the \((n_1-1)^{th}\) time because, inductively, we have that \(\psi _{n_1-2}:M^{n-(n_1-2)}_{1,n_1-2} \rightarrow \mathbb {CP}^{n-(n_1-2)}(4)\) is a Lagrangian immersion and the restriction \(A^{n_1-2}_J\) of the shape operator \(A_J\) on \(\{X_{n_1-1},X_{n_1}, Y_1, \ldots ,Y_{n_2}\}\) (which spans \(TM_{1,n_1-2}^{n-(n_1-2)}\)) is defined as

$$\begin{aligned} \left\{ \begin{aligned}&A^{n_1-2}_{JX_{n_1-1}}X_{n_1-1}=\lambda _{n_1-1,n_1-1}X_{n_1-1},\\&A^{n_1-2}_{JX_{n_1-1}}X_{n_1}=\mu _{n_1-1} X_{n_1},\\&A^{n_1-2}_{JX_{n_1-1}}Y_j=\mu _{n_1-1} Y_j,\ \ 1\le j\le n_2,\\&A^{n_1-2}_{JY_i}Y_j=\delta _{ij}\mu _{n_1-1} X_{n_1-1},\,\ 1\le i, j\le n_2. \end{aligned} \right. \end{aligned}$$
(3.78)

Applying therefore Theorem 2.1 by identifying \({\mathcal {D}}_1\) with \(span\{X_{n_1-1}\}\) and \({\mathcal {D}}_2\) with \(span\{X_{n_1}, Y_1,\ldots ,Y_{n_2}\}\), we obtain that \(\psi _{n_1-2}:M_{1,n_1-2}\rightarrow \mathbb {CP}^{n-(n_1-2)}(4)\) is locally a Calabi product Lagrangian immersion of an \((n-(n_1-1))\)-dimensional Lagrangian immersion \(\psi _{n_1-1} :M_{1,n_1-1}\rightarrow \mathbb {CP}^{n-(n_1-1)}(4)\) and a point. Thus \(M_{1,n_1-2}=I_{n_1-1} \times M_{1,n_1-1}\) and \(M^n=I_1\times I_2\times \cdots \times I_{n_1-1}\times M_{1,n_1-1}\), \(I_{n_1-1}\subset {\mathbb {R}}\).

As \(\psi _{n_1-2}\) is minimal, we further apply Theorem 2.1 (2) to see that

$$\begin{aligned} \mu _{n_1-1}=\pm \frac{1}{\sqrt{n-(n_1-1)+1}},\ \ \psi _{n_1-1} \text { is minimal}, \end{aligned}$$

and \(\psi _{n_1-2}=\Pi _{n_1-2}\circ \tilde{\psi }_{n_1-2}\) for

$$\begin{aligned} \begin{aligned} \tilde{\psi }_{n_1-2}(t,p)=&\Big (\sqrt{\tfrac{n-(n_1-2)}{(n-(n_1-2)) +1}}e^{i\frac{1}{n-(n_1-2)+1}t}\tilde{\psi }_{n_1-1}(p), \\&\quad \ \ \sqrt{\tfrac{1}{n-(n_1-2)+1}}e^{-i\frac{n-(n_1-2)}{n-(n_1-2)+1}t}\Big ), \ (t,p)\in I_{n_1-1}\times M_{1,n_1-1}. \end{aligned} \end{aligned}$$

Here, \(\Pi _{n_1-2}: {\mathbb {S}}^{2n-2n_1+5}(1)\rightarrow \mathbb {CP}^{n-(n_1-2)}(4)\) is the Hopf fibration, and \(\tilde{\psi }_{n_1-1}:M_{1,n_1-1}\rightarrow {\mathbb {S}}^{2n-2n_1+3}(1)\) is the horizontal lift of \(\psi _{n_1-1}\).

We now apply Theorem 2.1 for the \(n_1^{th}\) time, to the Lagrangian immersion \(\psi _{n_1-1}:M^{n-(n_1-1)}_{1,n_1-1}\rightarrow \mathbb {CP}^{n-(n_1-1)}(4)\), given that the restriction \(A^{n_1-1}_J\) of the shape operator \(A_J\) on \(\{X_{n_1}, Y_1, \ldots ,Y_{n_2}\}\) (which spans \(TM_{1,n_1-1}\)) is defined as

$$\begin{aligned} \left\{ \begin{aligned}&A^{n_1-1}_{JX_{n_1}}X_{n_1}=\lambda _{n_1,n_1}X_{n_1},\\&A^{n_1-1}_{JX_{n_1}}Y_j=\mu _{n_1} Y_j,\ \ 1\le j\le n_2,\\&A^{n_1-1}_{JY_i}Y_j=\delta _{ij}\mu _{n_1} X_{n_1}. \end{aligned} \right. \end{aligned}$$
(3.79)

Applying therefore Theorem 2.1 by identifying \({\mathcal {D}}_1\) with \(span\{X_{n_1}\}\), and \({\mathcal {D}}_2\) with \(span\{ Y_1,\ldots ,Y_{n_2}\}\), we obtain that \(\psi _{n_1-1}: M_{1,n_1-1}\rightarrow \mathbb {CP}^{n-(n_1-1)}(4)\) is locally a Calabi product Lagrangian immersion of an \((n-n_1)\)-dimensional Lagrangian immersion \(\psi _{n_1} :M_{1,n_1}\rightarrow \mathbb {CP}^{n-n_1}(4)\) and a point. Thus \(M_{1,n_1-1}=I_{n_1}\times M_{1,n_1}\) and

$$\begin{aligned} M^n=I_1\times I_2\times \cdots \times I_{n_1}\times M_{1,n_1},\, I_{n_1}\subset {\mathbb {R}}. \end{aligned}$$
(3.80)

As \(\psi _{n_1-1}\) is minimal, we further apply Theorem 2.1 (2) and we get

$$\begin{aligned} \mu _{n_1}=\pm \frac{1}{\sqrt{n-n_1+1}},\ \ \psi _{n_1} \text { is minimal}, \end{aligned}$$

and \(\psi _{n_1-1}=\Pi _{n_1-1}\circ \tilde{\psi }_{n_1-1}\) for

$$\begin{aligned} \begin{aligned} \tilde{\psi }_{n_1-1}(t,p)&=\Big (\sqrt{\tfrac{n-(n_1-1)}{(n-(n_1-1)) +1}}e^{i\frac{1}{n-(n_1-1)+1}t}\tilde{\psi }_{n_1}(p), \\&\quad \sqrt{\tfrac{1}{n-(n_1-1)+1}}e^{-i\frac{n-(n_1-1)}{n-(n_1-1)+1}t}\Big ), \ (t,p)\in I_{n_1}\times M_{1,n_1}, \end{aligned} \end{aligned}$$

where \(\Pi _{n_1-1}: {\mathbb {S}}^{2n-2n_1+3}(1)\rightarrow \mathbb {CP}^{n-n_1+1}(4)\) is the Hopf fibration and \(\tilde{\psi }_{n_1}:M_{1,n_1}\rightarrow {\mathbb {S}}^{2n-2n_1+1}(1)\) is the horizontal lift of \(\psi _{n_1}\).

Notice that the restriction \(A^{n_1}_J\) of the shape operator \(A_J\) on \(\{Y_1,\ldots ,Y_{n_2}\}\) is \(A^{n_1}_{JY_i}Y_j=0\). Therefore, we eventually have that \(M^n\) is locally a Calabi product Lagrangian immersion of \(n_1\) points and an \(n_2\)-dimensional Lagrangian immersion

$$\begin{aligned} \psi _{n_1}:M^{n_2}_2\rightarrow \mathbb {CP}^{n-n_1}(4), \end{aligned}$$

for \(M^{n_2}_2:=M_{1,n_1}\), which has vanishing second fundamental form. Moreover,

$$\begin{aligned} M^n=I_1\times I_2\times \cdots \times I_{n_1}\times M^{n_2}_2,\quad \ I_1,\ldots ,I_{n_1}\subset {\mathbb {R}}. \end{aligned}$$

Finally, for \(q\in M^{n_2}_2\) the parametrization of \(\psi : M^n\rightarrow \mathbb {CP}^n(4)\) is given by:

$$\begin{aligned} \psi (t_1,\ldots ,t_{n_1},q)=\Big (&\sqrt{\tfrac{n-(n_1-1)}{n+1}}e^{i \big (\frac{t_1}{n+1}+\frac{t_2}{n}+\cdots +\frac{t_{n_1-1}}{n-(n_1-2)+1} +\frac{t_{n_1}}{n-(n_1-1)+1}\big )}\tilde{\psi }_{n_1}(q), \\&\tfrac{1}{\sqrt{n+1}}e^{i\big (\frac{t_1}{n+1}+\frac{t_2}{n} +\cdots +\frac{t_{n_1-1}}{n-(n_1-2)+1}-\frac{n-(n_1-1)}{n-(n_1-1)+1}t_{n_1}\big )},\\&\tfrac{1}{\sqrt{n+1}}e^{i\big (\frac{t_1}{n+1}+\frac{t_2}{n} +\cdots +\frac{t_{n_1-2}}{n-(n_1-3)+1}-\frac{n-(n_1-2)}{n-(n_1-2)+1}t_{n_1-1}\big )},\\&\ \ \ \ldots \\&\tfrac{1}{\sqrt{n+1}}e^{i\big (\frac{t_1}{n+1}+\frac{t_2}{n}-\frac{n-2}{(n-2)+1}t_3\big )}, \tfrac{1}{\sqrt{n+1}}e^{i\big (\frac{t_1}{n+1}-\frac{n-1}{n}t_2\big )},\\&\tfrac{1}{\sqrt{n+1}}e^{-i\frac{n}{n+1}t_1}\Big ), \end{aligned}$$

which, writing \(\tilde{\psi }_{n_1}(q)=:(y_1,\ldots ,y_{n_2+1})\), is equivalent to

$$\begin{aligned} \begin{aligned} \psi (t_1,\ldots ,t_{n_1},q)=\Big (&\tfrac{1}{\sqrt{n+1}}e^{i u_1},\ldots , \tfrac{1}{\sqrt{n+1}}e^{i u_{n_1}},\\&\sqrt{\tfrac{n_2+1}{n+1}}e^{i u_{n_1+1}}\big (y_1, y_2,\ldots , y_{n_2+1}\big )\Big ), \end{aligned} \end{aligned}$$
(3.81)

where \(\{u_i\}_{1\le i\le n_1+1}\) are defined by

$$\begin{aligned} \left\{ \begin{aligned}&u_1=-\tfrac{n}{n+1}t_1,\\&\ \ \ \ \ \ldots \\&u_{n_1}=\tfrac{t_1}{n+1}+\tfrac{t_2}{n}+\cdots +\tfrac{t_{n_1-1}}{n-(n_1-2) +1}-\tfrac{n-(n_1-1)}{n-(n_1-1)+1}t_{n_1},\\&u_{n_1+1}=\tfrac{t_1}{n+1}+\tfrac{t_2}{n} +\cdots +\tfrac{t_{n_1-1}}{n-(n_1-2)+1}+\tfrac{t_{n_1}}{n-(n_1-1)+1}, \end{aligned} \right. \end{aligned}$$

and they satisfy \(u_1+u_2+\cdots +u_{n_1}+(n_2+1)u_{n_1+1}=0\).
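This last relation can be checked symbolically for a concrete choice of dimensions. The sketch below (an illustrative verification, not part of the proof) takes \(n_1=3\), \(n_2=2\) (so \(n=5\)), writes \(u_k=\sum _{j<k}t_j/m_j-\tfrac{m_k-1}{m_k}t_k\) with \(m_j=n-j+2\) for \(1\le k\le n_1\) and \(u_{n_1+1}=\sum _{j=1}^{n_1}t_j/m_j\), matching the display above, and confirms that \(u_1+\cdots +u_{n_1}+(n_2+1)u_{n_1+1}\) vanishes identically in \(t_1,\ldots ,t_{n_1}\):

```python
import sympy as sp

n1, n2 = 3, 2                    # illustrative choice of dimensions
n = n1 + n2
t = sp.symbols(f't1:{n1 + 1}')   # t1, ..., t_{n1}
m = [sp.Integer(n - j + 2) for j in range(1, n1 + 1)]   # m_j = n - j + 2

# u_k = sum_{j<k} t_j/m_j - ((m_k - 1)/m_k) t_k,  1 <= k <= n1
u = [sum(t[j] / m[j] for j in range(k)) - (m[k] - 1) / m[k] * t[k]
     for k in range(n1)]
# u_{n1+1} = sum_{j=1}^{n1} t_j/m_j
u.append(sum(t[j] / m[j] for j in range(n1)))

relation = sum(u[:n1]) + (n2 + 1) * u[n1]
print(sp.simplify(relation))     # 0: the phases satisfy the stated relation
```

For instance, here \(u_1=-\tfrac{5}{6}t_1\), \(u_2=\tfrac{t_1}{6}-\tfrac{4}{5}t_2\), \(u_3=\tfrac{t_1}{6}+\tfrac{t_2}{5}-\tfrac{3}{4}t_3\) and \(u_4=\tfrac{t_1}{6}+\tfrac{t_2}{5}+\tfrac{t_3}{4}\), so each \(t_j\)-coefficient of the combination cancels.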

This completes the proof of Theorem 3.1. \(\square \)

3.2 Case (ii)

Now, we deal with Case (ii), that is, we treat the case when \(c_1\ne 0\) and \(c_2\ne 0\).

We begin with the following result whose proof is similar to that of (3.4).

Lemma 3.8

If Case (ii) occurs, then we have

$$\begin{aligned} \langle Y_l, A_{JX_i} X_j\rangle =\langle X_i, A_{JY_l} Y_m\rangle =0, \quad 1\le i,j\le n_1, \quad 1\le l,m\le n_2. \end{aligned}$$
(3.82)

Then, as the main result of this subsection, we prove the following lemma.

Lemma 3.9

Case (ii) does not occur.

Proof

Suppose on the contrary that Case (ii) does occur. From Lemma 3.1 we know that \(A_J\) vanishes nowhere. We may assume that there exists \(X\in T_pM_1^{n_1}\) such that \(A_{JX}\ne 0\) at the point p. Given Lemma 3.8, similarly to the proof of Lemma 3.5, we can show that there exists a local orthonormal frame \(\{X_1,\ldots ,X_{n_1}\}\) of \(TM_1^{n_1}\) on a neighbourhood of p such that the shape operator satisfies

$$\begin{aligned} A_{JX_1}X_1=\lambda _1X_1,\quad A_{JX_1}X_i=\lambda _2X_i, \quad 2\le i\le n_1, \end{aligned}$$
(3.83)

where \(\lambda _1\) and \(\lambda _2\) are constants. Then, similarly to the proof of (3.27), we can show that \({\nabla }_XX_1=0\) for any \(X \in TM_1^{n_1}\). This implies that \({R}(X_1,X_2)X_1=0\), contradicting the fact that \(M_1^{n_1}(c_1)\) has constant sectional curvature \(c_1\ne 0\). \(\square \)

Completion of the Proof of the Main Theorem

If \(c_1=c_2=0\), it follows from (2.12) that \((M^n,\langle \cdot ,\cdot \rangle )\) is flat. According to the result of [11, 17] and [6] (see the Gauss equation (3.5) in [6]), we get point (1) of the Main Theorem.

If \(c_1^2+c_2^2\ne 0\), we have two cases: Case (i) and Case (ii).

For Case (i), by Theorem 3.1, we obtain the minimal Lagrangian submanifold as stated in point (2) of the Main Theorem.

Case (ii), by Lemma 3.9, does not occur.

Hence, we have completed the proof of the Main Theorem. \(\square \)