1 Introduction

In comparison geometry, one of the most important theorems is the Laplacian comparison theorem for the distance function on complete Riemannian manifolds. The theorem states that for a complete Riemannian manifold \(M^n\) with \(Ri{c_M} \ge (n - 1)H\), one has \({\Delta _M}r \le {\Delta _H}r\), where \({\Delta _M}r\) is the Laplacian of the distance function r on M and \({\Delta _H}r\) is the Laplacian of the distance function r on the model space (i.e. the simply connected space form) of constant sectional curvature H. The theorem has many consequences in Riemannian geometry, such as Myers' theorem, the Bishop–Gromov volume comparison theorem [38], the Cheeger–Gromoll splitting theorem [11] and their applications in topology [13, 38], etc.
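
For instance, for \(H = 0\) the model Laplacian is \({\Delta _0}r = \frac{{n - 1}}{r}\), so the comparison reads

$$\begin{aligned} Ri{c_M} \ge 0\quad \Longrightarrow \quad {\Delta _M}r \le \frac{{n - 1}}{r}. \end{aligned}$$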

There are several extensions of the Laplace operator. One of the best known is the weighted Laplace operator, defined as \({\Delta _f} = \Delta - \nabla f \cdot \nabla \) for \(f \in {C^\infty }(M)\). This operator plays the same role on weighted manifolds as the Laplacian does on Riemannian manifolds. Many results in comparison geometry for the Laplace operator have been extended to this operator and to weighted manifolds; see for example [7, 13, 14, 18, 22, 31, 35]. On these manifolds, the tensor \(Ri{c_f} = Ric + Hessf\) plays the same role as the Ricci tensor on Riemannian manifolds. Wylie also extended the notion of sectional curvature to these manifolds and obtained some valuable results [19, 36]. The p-Laplace operator \({\Delta _p}u = div\left( {{{\left| {\nabla u} \right| }^{p - 2}}\nabla u} \right) \) is another extension of the Laplace operator and has been studied extensively in comparison geometry [28,29,30].
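
As a simple illustration, on \({{\mathbb {R}}^n}\) with the weight \(f(x) = \frac{1}{2}{\left| x \right| ^2}\) one has

$$\begin{aligned} {\Delta _f}u = \Delta u - \left\langle {x,\nabla u} \right\rangle ,\qquad Ri{c_f} = Ric + Hessf = 0 + g = g, \end{aligned}$$

so this flat weighted space has \(Ri{c_f} > 0\) although \(Ric = 0\).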

Another extension of the Laplace operator is the elliptic divergence type operator \({L_A}u: = div(A\nabla u)\), where A is a positive definite self-adjoint (1, 1)-tensor field on a complete Riemannian manifold. A natural and major question is how to extend the results for the Laplace operator and the Ricci tensor to this operator. In this regard, Bakry and Émery introduced the so-called curvature-dimension inequality [5]. Let L be a second order differential operator with \(L1=0\). Using L, Bakry and Émery defined the bilinear forms \({\Gamma },\,{\Gamma _2}\) as follows,

$$\begin{aligned} \Gamma (u,v) = \frac{1}{2}\left\{ {L(uv) - uL(v) - vL(u)} \right\} \end{aligned}$$

and

$$\begin{aligned} {\Gamma _2}(u,v) = \frac{1}{2}\left\{ {L(\Gamma (u,v)) - \Gamma (u,L(v)) - \Gamma (v,L(u))} \right\} . \end{aligned}$$

The operator L satisfies the \(CD(n,K)\) curvature-dimension inequality when the following differential inequality holds,

$$\begin{aligned} {\Gamma _2}(u,u) \geqslant \frac{1}{n}{(Lu)^2} + K\Gamma (u,u),\qquad \forall u \in {C^\infty }(M). \end{aligned}$$
(1.1)

The usual Bochner formula can be reformulated as follows in terms of \(\Gamma ,{\Gamma _2}\),

$$\begin{aligned} \Delta \Gamma (u,u) = 2\left\| {Hessu} \right\| _2^2 + 2\Gamma (u,\Delta u) + 2Ric(\nabla u,\nabla u). \end{aligned}$$

Since \(\left\| {Hessu} \right\| _2^2 \geqslant \frac{1}{n}{\left( {\Delta u} \right) ^2}\), the \(CD(n,K)\) curvature-dimension inequality for \(\Delta \) is equivalent to [21]

$$\begin{aligned} Ric(\nabla u,\nabla u) \geqslant K{\left| {\nabla u} \right| ^2}. \end{aligned}$$

On a weighted manifold \(\left( {M,{e^{ - f}}dvo{l_g}} \right) \), \(f \in {C^\infty }(M)\), the weighted Laplacian \(L = \Delta - \left\langle {\nabla f,\cdot } \right\rangle \) satisfies

$$\begin{aligned} {\Gamma _2}(u,u) = \left\| {Hessu} \right\| _2^2 + Ric(\nabla u,\nabla u) + Hess\,f\,(\nabla u,\nabla u), \end{aligned}$$

and, for \(m \geqslant n\), L satisfies the \(CD(K,m)\) curvature-dimension inequality if and only if [21]

$$\begin{aligned} \nabla f \otimes \nabla f \leqslant \left( {m - n} \right) \left[ {Ric + Hessf - Kg} \right] . \end{aligned}$$
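
For instance, if f is constant then \(\nabla f \otimes \nabla f = 0\) and \(Hessf = 0\), so for any \(m > n\) the above condition reduces to

$$\begin{aligned} 0 \leqslant \left( {m - n} \right) \left[ {Ric - Kg} \right] \quad \Longleftrightarrow \quad Ric \geqslant Kg, \end{aligned}$$

recovering the unweighted case.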

The use of inequality (1.1), together with the properties of the heat semigroup, has proved to be a powerful tool in the study of Markov diffusion operators on manifolds. Bakry, Ledoux and their collaborators succeeded in recovering several well-known fundamental results for Riemannian manifolds satisfying the curvature-dimension inequality, when the Laplacian is replaced by L [6]. Qian used the Bakry–Émery curvature-dimension inequality and several basic properties of the distance function to extend the mean curvature comparison to elliptic operators as follows [25].

Theorem 1.1

[25]. Let L be a second order elliptic differential operator on an m-dimensional smooth manifold M. If L satisfies a curvature-dimension inequality \(CD(n,-K)\) for some constants \(n > 0\), \(K > 0\), and if the distance d induced by \(\Gamma \) is complete, then

$$\begin{aligned} L{\rho ^2} \le n\left\{ {1 + \sqrt{1 + \frac{{4K{\rho ^2}}}{n}} } \right\} \qquad \text {on } M\setminus cut(p), \end{aligned}$$

where \(\rho (x) = dist(x,p)\).
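
As a simple consistency check, the Laplacian on \({{\mathbb {R}}^n}\) satisfies \(CD(n,-K)\) for every \(K > 0\), and letting \(K \rightarrow 0\) in the above bound gives

$$\begin{aligned} \Delta {\rho ^2} \le n\left\{ {1 + \sqrt{1} } \right\} = 2n, \end{aligned}$$

which is in fact an equality on \({{\mathbb {R}}^n}\), since \({\rho ^2}(x) = {\left| {x - p} \right| ^2}\).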

Inspired by these results, we give the following extensions of them.

In Theorem 1.2, we give an extension of the mean curvature comparison theorem different from Theorem 1.1. Our proof of Theorem 1.2 is also completely different from the proof of Theorem 1.1: we assume neither the curvature-dimension inequality \(CD(n,-K)\) nor the completeness of the metric induced by \(\Gamma \); instead, we use an extension of the Ricci tensor and a different Bochner formula obtained in [2, 16]. In fact, we give an upper bound for \({L_A}r\), where \({L_A}\) is an elliptic operator of the form \({L_A}u = div(A\nabla u)\) and A is a self-adjoint (1,1)-Codazzi tensor field on the manifold.

We provide another extension of the mean curvature comparison theorem. In this approach, we use the extension of the Bochner formula obtained in [2, 16] and extend the Ricci tensor as \(\left( {X,Y} \right) \mapsto Ric\left( {X,AY} \right) \). We also extend some important classical theorems of comparison geometry, such as Myers' theorem, the Bishop–Gromov volume comparison theorem and its consequences, including the Calabi–Yau theorem [37] on the growth of the volume of geodesic balls, and Gallot's and Anderson's theorems. We also extend some famous consequences of the mean curvature comparison theorem, such as the Cheeger–Gromoll splitting theorem and its applications in topology. Finally, we obtain an upper estimate for the number of ends of a manifold, as in the Riemannian and weighted Riemannian cases.

Our main results are Theorems 1.2–1.6. We state and explain them as follows. First we prove the extended mean curvature comparison Theorem 1.2, which is different from Theorem 1.1. We present two kinds of extensions of the mean curvature comparison. The first one is a result for the differential operator \({\Delta _{A,f}}\), and we use it to extend Myers' theorem and the Cheeger–Gromoll splitting theorem. The second one is for the differential operator \({L_{A,f}}\), which is used to extend the volume comparison theorem.

Theorem 1.2

(Extended mean curvature comparison). Let \(x_0 \in M\), \(r(x) = dist(x_0,x)\) and let H be a constant.

  1. (a)

    If \(\left( {n - 1} \right) {\delta _n}H{\left| X \right| ^2} \le Ric(X,AX)\) and \(\left| {{f^A}} \right| \le K\) for some constant K (for \( H>0 \) assume \(r \le \frac{\pi }{{4\sqrt{H} }}\)), then along any minimal geodesic segment from \(x_0\) we have,

    $$\begin{aligned} {\Delta _{A,{f^A}}}r \le {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}\left( {n - 1} \right) }}} \right) {\Delta _H}r. \end{aligned}$$
  2. (b)

    If \(\left( {n - 1} \right) {\delta _n}H \le Ri{c_{-Trace(A)}}({\partial _r},A{\partial _r})\), \(\left| {{f^A}} \right| \le K\) and \(\left| {Trace(A)} \right| \le K'\) for some constants K and \({K'}\) (for \( H>0 \) assume \(r \le \frac{\pi }{{4\sqrt{H} }}\)), then along any minimal geodesic segment from \(x_0\) we have,

    $$\begin{aligned} {L_A}r \le {\delta _n}\left( {1 + \frac{{4\left( {K + K'} \right) }}{{{\delta _n}\left( {n - 1} \right) }}} \right) \left( {{\Delta _H}r} \right) + {\partial _r}.{f^A}(r), \end{aligned}$$

where the notations \({L_{A,f}}\), \({\Delta _{A,f}}\), \(Ri{c_{-Trace(A)}}({\partial _r},A{\partial _r})\) and \({\delta _n}\) are defined in Definitions 2.2 and 2.3, and \( f^A \) is an estimate for a contraction of the tensor field \({T^{{\nabla _{\nabla r}}A}}\), defined in Definition 3.2.
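
As a sanity check, when \(A = id\) and \({f^A} = 0\) (so \(K = 0\)), the hypothesis of part (a) reads \(Ric \ge (n-1){\delta _n}H\) and the conclusion becomes \(\Delta r \le {\delta _n}{\Delta _H}r\); letting \({\delta _n} \rightarrow 1\) one formally recovers the classical Laplacian comparison theorem

$$\begin{aligned} Ric \ge (n - 1)H\quad \Longrightarrow \quad \Delta r \le {\Delta _H}r. \end{aligned}$$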

As a consequence of the extended mean curvature comparison for the operator \({\Delta _{A,f}}\), and inspired by the ideas of [23, 31, 34], we prove a variant of Myers' theorem via excess functions as follows.

Theorem 1.3

If \(Ric(X,AX) \ge \left( {n - 1} \right) {\delta _n}H{\left| X \right| ^2}\) for any vector field \(X \in {\mathfrak {X}}(M)\) and some constant \( H>0 \), and \(\left| {{f^A}} \right| \le K\) for some constant K, then

  1. (a)

    M is compact and \(diam(M) \le \frac{\pi }{{\sqrt{H} }} + \frac{{4K}}{{{\delta _n}(n - 1)\sqrt{H} }},\)

  2. (b)

    M has finite fundamental group.
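
In the same way, for \(A = id\) and \({f^A} = 0\) (hence \(K = 0\)), letting \({\delta _n} \rightarrow 1\) part (a) formally reduces to the classical Myers bound

$$\begin{aligned} Ric \ge (n - 1)H > 0\quad \Longrightarrow \quad diam(M) \le \frac{\pi }{{\sqrt{H} }}. \end{aligned}$$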

The Bishop–Gromov volume comparison theorem (see [13] or [38]) is one of the most important theorems in differential geometry and has many important applications. We extend the volume comparison theorem as follows.

Theorem 1.4

Let M be a Riemannian manifold, \( x_0 \in M \) and \( r(x):=dist(x_0,x) \). Let A be a self-adjoint (1,1)-tensor field on M and let \( R_T \) be a constant. Assume the following conditions:

  1. (1)

    For some constant H we have \(Ri{c_{-Trace(A)}}(X,AX) \ge \left( {n - 1} \right) {\delta _n}H{\left| X \right| ^2}\) (if \( H>0 \), assume \({R_T} \le \frac{\pi }{{4\sqrt{H} }}\)),

  2. (2)

    \(\left| {{f^A}} \right| \le K\) and \(\left| {Trace(A)} \right| \le K'.\)

Then, for \(m = C\left( {{\delta _n},n,{\delta _1},K,K',H} \right) = \left[ {\frac{{{\delta _n}(n - 1) + 4(K + K')}}{{{\delta _1}}}} \right] + 2\), the following results hold:

  1. (a)

    If \(\left| {\nabla {f^A}} \right| \le a\), then for any \(0 <r\le {R}\),

    $$\begin{aligned}\frac{{vo{l^A}(B(p,R))}}{{vo{l^A}(B(p,r))}} \le {e^{\left( {a/{\delta _1}} \right) R}}\frac{{vol_H^m(R)}}{{vol_H^m(r)}}.\end{aligned}$$
  2. (b)

    For any \(0 <r\le {R}\) and \(p>1\),

    $$\begin{aligned}&{\left( {\frac{{vol^A(B({x_0},R))}}{{vol_H^mB(R)}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},r))}}{{vol_H^mB(r)}}} \right) ^{1/p}}\\&\quad \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}\int _r^R {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB(t)} \right) }^{1 + 1/p}}}}dt.} \end{aligned}$$
  3. (c)

    For any \(0 < {r_1} \le {r_2} \le {R_1} \le {R_2} \le {R_T}\), one has the following extended volume comparison formula for annular regions,

    $$\begin{aligned}\begin{array}{l} {\left( {\frac{{vol^A(B({x_0},{r_2},{R_2}))}}{{vol_H^mB({r_2},{R_2})}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},{r_1},{R_1}))}}{{vol_H^mB({r_1},{R_1})}}} \right) ^{1/p}} \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R_T}}\\ \qquad \times \left[ {\int _{{R_1}}^{{R_2}} {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB({r_2},t)} \right) }^{1 + 1/p}}}}} dt + \int _{{r_1}}^{{r_2}} {\frac{{{R_1}sn_H^{m - 1}({R_1})}}{{{{\left( {vol_H^mB(t,{R_1})} \right) }^{1 + 1/p}}}}dt} } \right] . \\ \end{array}\end{aligned}$$

Here \(B(x_0,r)\) is the geodesic ball with center \(x_0\) and radius r, and \(B(x_0,{R_1},R) = B(x_0,R)\backslash B(x_0,{R_1})\). \({\delta _n},{\delta _1}\) and \(vol^A\) are defined in Definitions 2.2 and 4.1, and \({{f^A}}\) is defined in Definition 3.2.

One of the major and beautiful consequences of the mean curvature comparison theorem is the Cheeger–Gromoll splitting theorem [11]. The theorem states that a complete manifold \(M^n\) with nonnegative Ricci curvature which contains a line splits as a Riemannian product \( {N^{n - 1}} \times {\mathbb {R}}\) with \(Ri{c_N} \ge 0\). We extend this theorem as follows.

Theorem 1.5

(Extended Cheeger–Gromoll splitting theorem). If M contains a line and \(Ric(Y,AY) \ge 0\) for any vector field Y, then, defining \(N = {\left( {{b^ + }} \right) ^{ - 1}}\left( 0 \right) \) and \({A_N} = pro{j_{{{\left( {\nabla {b^ + }(0)} \right) }^ \bot }}} \circ A \circ pro{j_{{{\left( {\nabla {b^ + }(0)} \right) }^ \bot }}}\), one has \(M = {N^{n - 1}} \times {\mathbb {R}} \) and \(Ric(X,{A_N}X) \ge 0\), where \({b^ + }\) is the Busemann function associated to the ray \({\gamma _ + }(t)\) and \(X \in \Gamma \left( {TN} \right) \).

The number of ends of a manifold is an important concept in topology and differential geometry, so finding an upper bound for it is an important problem. Cai invented an approach to estimate the number of ends of a Riemannian manifold when the Ricci tensor is nonnegative outside of a compact set [8]. Wu used that approach for weighted manifolds [33]. Similarly, we obtain an explicit upper bound for the number of ends of a manifold when the extended Ricci tensor is nonnegative outside of a compact set.

Theorem 1.6

Let \(x_0 \in M\) be a fixed point and let \( H,R>0 \) be two constants.

Assume \(Ri{c_{-Trace(A)}}(X,AX) \ge - (n - 1)H{\delta _n}{\left| X \right| ^2}\) in the geodesic ball \(B(x_0,R)\) and \(Ri{c_{-Trace(A)}}(X,AX) \ge 0\) outside the ball \(B(x_0,R)\). Then the number of ends \(N(A,M,R)\) of M is estimated as

$$\begin{aligned}N(A,M,R) \le \frac{{2m}}{{m - 1}}{\left( {\sqrt{H} R} \right) ^{ - m}}\exp \left( {\frac{{17R}}{2}\left( {\left( {m - 1} \right) \sqrt{H} + 1} \right) } \right) ,\end{aligned}$$

where \(p > m\) and

$$\begin{aligned} m:= & {} C\left( {{\delta _n},n,{\delta _1},K,K',H} \right) = \left[ {\frac{{{\delta _n}(n - 1) + 4(K + K')}}{{{\delta _1}}}} \right] + 2, \\ K:= & {} \mathop {\sup }\limits _{x \in B({x_0},25R/2)} \left| {{f^A}\left( x \right) } \right| , \\ K':= & {} \mathop {\sup }\limits _{x \in B({x_0},25R/2)} \left| {Trace(A)\left( x \right) } \right| . \end{aligned}$$

The paper is organized as follows. In Sect. 2, we give the preliminaries. In Sect. 3, we prove the extended mean curvature comparison Theorem 1.2 and, as an application, we prove the Myers'-type Theorem 1.3. At the end of that section we recall some inequalities in weak senses, which we use for the extended Bishop–Gromov volume comparison theorem and for the estimation of the excess functions. Section 4 is devoted to the extension of the volume comparison theorem. As applications, we extend the Calabi–Yau theorem [37] on volume growth, Gallot's estimate of the first Betti number, and Anderson's theorem [4]. We generalize the Cheeger–Gromoll splitting theorem and extend some famous topological consequences of it in Sect. 5. Section 6 is devoted to the estimate of the excess function and its applications in topology. We give an upper bound for the number of ends of the manifold in Sect. 7. Finally, as an example, we consider an extended Ricci tensor on some hypersurfaces immersed isometrically in a Riemannian or Lorentzian manifold of constant sectional curvature and show that the extended Ricci tensor is greater than the Ricci tensor of the hypersurface, so studying the geometry and topology of a Riemannian hypersurface via the extended Ricci tensor may be better than via the original one.

2 Preliminaries

In this section, we present the preliminaries. Throughout the paper \(M=( M,\langle , \rangle )\) is a complete Riemannian manifold, unless otherwise stated.

Definition 2.1

A self-adjoint operator A on M is a \( \left( {1,1} \right) \)-tensor field with the following property,

$$\begin{aligned}\forall X,Y \in {\mathfrak {X}}(M), \left\langle {AX,Y} \right\rangle = \left\langle {X,AY} \right\rangle .\end{aligned}$$

Definition 2.2

Let A be a self-adjoint positive definite operator on M. A is called bounded if there are constants \(\delta _1 ,\delta _n > 0\) such that \(\delta _1< \left\langle {X,AX} \right\rangle < \delta _n\) for any unit vector field \(X \in {\mathfrak {X}}(M)\).
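
In particular, by homogeneity, a bounded operator A satisfies, for every vector field \(X \in {\mathfrak {X}}(M)\),

$$\begin{aligned} {\delta _1}{\left| X \right| ^2} \le \left\langle {X,AX} \right\rangle \le {\delta _n}{\left| X \right| ^2},\qquad n{\delta _1} \le Trace(A) \le n{\delta _n}, \end{aligned}$$

and all eigenvalues of A lie in \([{\delta _1},{\delta _n}]\).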

Definition 2.3

Let A be a self-adjoint operator on M. We define \(L_{A}, \Delta _{A}, \Delta _{A,f}, L_{A,f}, Ric_{A}\) and \(Ric_f\) as follows:

  1. (a)

    \( {L_A}(u): = div\left( {A\nabla u} \right) = \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\left( {A\nabla u} \right) ,{e_i}} \right\rangle } \),

  2. (b)

    \( {\Delta _A}(u): = \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,A{e_i}} \right\rangle } \),

  3. (c)

    \({\Delta _{A,f}}(u): = {\Delta _A}u - \left\langle {\nabla f,\nabla u} \right\rangle \,\,\forall f,u \in {C^\infty }(M),\) and for \(X \in {\mathfrak {X}}(M),\,{\Delta _{A,X}} (u): = {\Delta _A}u - \left\langle {X,\nabla u} \right\rangle .\)

  4. (d)

    \({L_{A,f}}(u): = {e^f}div\left( {{e^{ - f}}A\nabla u} \right) ,\,\,\forall f,u \in {C^\infty }(M).\)

  5. (e)

    \(Ri{c_f}(X,AY): = Ric(X,AY) + Hessf(X,Y),\,\forall X,Y \in {\mathfrak {X}}(M).\)

  6. (f)

    \(Ri{c_A}(X,Y): = \sum \nolimits _i {\left\langle {R(X,A{e_i}){e_i},Y} \right\rangle } ,\,\forall X,Y \in {\mathfrak {X}}(M),\) where \(\left\{ {{e_i}} \right\} \) is a local orthonormal frame.

\(Ri{c_A}\) and \(Ric(-,A-)\) are both extensions of the Ricci tensor.
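
For instance, when \(A = id\) both reduce to the usual Ricci tensor,

$$\begin{aligned} Ri{c_{id}}(X,Y) = \sum \nolimits _i {\left\langle {R(X,{e_i}){e_i},Y} \right\rangle } = Ric(X,Y) = Ric(X,id\,Y), \end{aligned}$$

while for a general A the two extensions need not coincide.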

Example 2.4

Here we give three examples of \({L_{A,f}}\) and \(\Delta _{A,f}\).

  1. (1)

    When \(A=id\), then \({L_{A,f}} = {\Delta _{A,f}} = {\Delta _f} = \Delta - \left\langle {\nabla f,.} \right\rangle .\)

  2. (2)

    Let \({\Sigma ^n} \subset {M^{n + 1}}\) be a Riemannian hypersurface with shape operator A. The so-called Newton transformations associated to the shape operator A are inductively defined by [3]

    $$\begin{aligned} {P_0}:= id, \qquad {P_k}:= {S_k}I - A \circ {P_{k - 1}},\quad 1 \leqslant k \leqslant n, \end{aligned}$$

    where \(S_k\) is the k-th mean curvature of \(\Sigma \). Associated to each Newton transformation \(P_k\), the operator \({L_k}:{C^\infty }(\Sigma ) \rightarrow {C^\infty }(\Sigma )\), defined as

    $$\begin{aligned}{L_k}u = tr({P_k} \circ hessu) = {\Delta _{{P_k}}}u\end{aligned}$$

    is a second order differential operator (a worked \(k=1\) case is given after this example). Note that,

    $$\begin{aligned}{L_{{P_k}}}u = div({P_k}\nabla u) = {L_k}u + \left\langle {div{P_k},\nabla u} \right\rangle . \end{aligned}$$

    When the ambient manifold M has constant sectional curvature, \(div{P_k} = 0\) and

    $$\begin{aligned}{L_k}u = {\Delta _{{P_k}}}u = {L_{{P_k}}}u.\end{aligned}$$

    If, in addition, \(f \in {C^\infty }(\Sigma )\) is constant, then

    $$\begin{aligned}{L_k}u = {\Delta _{{P_k},f}}u = {L_{{P_k},f}}u.\end{aligned}$$
  3. (3)

    If \(A = {P_k}\) and \(f=H\) is the mean curvature function, then

    $$\begin{aligned} {\Delta _{A,f}}u = {L_k}u - n\left\langle {\nabla H,\nabla u} \right\rangle \quad \text {and}\quad {L_{A,f}}u = {L_k}u + \left\langle {div{P_k} - {P_k}\nabla H,\nabla u} \right\rangle ,\qquad u \in {C^\infty }(\Sigma ). \end{aligned}$$
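
As a concrete instance of item (2), for \(k = 1\) the definitions above give

$$\begin{aligned} {P_1} = {S_1}I - A,\qquad {L_1}u = tr\left( {{P_1} \circ hessu} \right) = {S_1}\,\Delta u - {\Delta _A}u, \end{aligned}$$

which is the operator usually denoted by \(\square \) in the work of Cheng and Yau.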

Remark 2.5

The operators \(L_k\) are important in the study of the geometry of hypersurfaces. Applying the maximum principles such as Omori–Yau maximum principles to these operators led to many beautiful and important results [3]. It seems that the results of this paper provide an approach that may have potential applications in comparison geometry for Riemannian hypersurfaces of Riemannian or Lorentzian space forms.

By the following proposition, for the distance function r(x), \({\Delta _A}r\) has the same asymptotic behavior as \(\Delta r\) when \(r \rightarrow 0\).

Proposition 2.6

Let \(x_0 \in M\) be a fixed point and \(r(x) = dist(x_0,x)\), then \(\mathop {\lim }\nolimits _{r \rightarrow 0} {r^2}\left( {{\Delta _A}r} \right) = 0\).

Proof

One knows that

$$\begin{aligned}Hessr = \frac{1}{r}\left( {\left\langle \,,\,\right\rangle - dr \otimes dr} \right) + O(1),\qquad r \rightarrow {0^ + }.\end{aligned}$$

Let \(\left\{ {{e_i}} \right\} \) be a local orthonormal frame field with \({e_1} = \nabla r\). By Definition 2.3 we have,

$$\begin{aligned}{\Delta _A}r = \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla r,A{e_i}} \right\rangle } = \sum \nolimits _i {Hessr\left( {{e_i},A{e_i}} \right) } = \frac{1}{r}Trace\left( {pro{j_{\partial _r^ \bot }} \circ {{\left. A \right| }_{\partial _r^ \bot }}} \right) + O(1),\qquad r \rightarrow {0^ + }.\end{aligned}$$

Multiplying by \(r^2\) and letting \(r \rightarrow {0^ + }\) gives the result.

\(\square \)

For comparison results in geometry one needs a Bochner formula. The following theorem provides the extended Bochner formula.

Theorem 2.7

(Extended Bochner formula) [2, 16]. Let A be a self-adjoint operator on M, then,

$$\begin{aligned} \frac{1}{2}{L_A}({{\left| {\nabla u} \right| }^2}) =&\ \frac{1}{2}\left\langle {\nabla {{\left| {\nabla u} \right| }^2},div(A)} \right\rangle + Trace\left( {A \circ hes{s^2}\left( u \right) } \right) + \left\langle {\nabla u,\nabla ({\Delta _A}u)} \right\rangle \\&- {\Delta _{\left( {{\nabla _{\nabla u}}A} \right) }}u + Ri{c_A}(\nabla u,\nabla u),\qquad \forall u \in {C^\infty }(M), \end{aligned}$$

where, \(Ri{c_A}\) was defined in Definition 2.3 and \(hessu(X): = {\nabla _X}\nabla u\).

Definition 2.8

Let B be a (1, 1)-tensor field on M. Define \( T^B \) as,

$$\begin{aligned}{T^B}(X,Y): = \left( {{\nabla _X}B} \right) Y - \left( {{\nabla _Y}B} \right) X.\end{aligned}$$

It is clear that \(T^B\) is a (2,1)-tensor field, and when \( T^B=0 \), B is a Codazzi tensor, that is, \(\langle (\nabla _{X}B)Y,Z\rangle =\langle (\nabla _{Y}B)X,Z\rangle \).

Example 2.9

If B is the shape operator of a hypersurface \({\Sigma ^n} \subset {M^{n + 1}}\) then

$$\begin{aligned}{T^B}(Y,X) = {\left( {{\bar{R}}(Y,X)N} \right) ^T},\end{aligned}$$

where \({{\bar{R}}}\) is the curvature tensor of M and N is a unit normal vector field on \({\Sigma ^n} \subset {M^{n + 1}}\).

Lemma 2.10

Let B be a (1,1)-self-adjoint tensor field on M, then,

$$\begin{aligned}\left\langle {X,{T^B}(Y,Z)} \right\rangle = \left\langle {{T^B}(Y,X),Z} \right\rangle + \left\langle {{T^B}(X,Z),Y} \right\rangle .\end{aligned}$$

Proof

By computation, we have,

$$\begin{aligned} \left\langle {X,{T^B}(Y,Z)} \right\rangle= & {} \left\langle {X,\left( {{\nabla _Y}B} \right) Z - \left( {{\nabla _Z}B} \right) Y} \right\rangle \\= & {} \left\langle {\left( {{\nabla _Y}B} \right) X - \left( {{\nabla _X}B} \right) Y,Z} \right\rangle + \left\langle {Y,\left( {{\nabla _X}B} \right) Z - \left( {{\nabla _Z}B} \right) X} \right\rangle \\= & {} \left\langle {{T^B}(Y,X),Z} \right\rangle + \left\langle {{T^B}(X,Z),Y} \right\rangle . \end{aligned}$$

\(\square \)

To simplify the extended Bochner formula of Theorem 2.7, we give some properties of the second covariant derivative of the operator B in the following lemma.

Lemma 2.11

Let B be a (1,1)-self-adjoint tensor field on M and \( X,Y,Z \in {\mathfrak {X}}(M) \), then

  1. (a)

    \(\left( {{\nabla ^2}B} \right) \left( {X,Y,Z} \right) = \left( {{\nabla ^2}B} \right) \left( {X,Z,Y} \right) + R(Z,Y)\left( {BX} \right) - B\left( {R(Z,Y)X} \right) ,\)

  2. (b)

    \(\left( {{\nabla ^2}B} \right) \left( {X,Y,Z} \right) - \left( {{\nabla ^2}B} \right) \left( {Y,X,Z} \right) = \left( {{\nabla _Z}T^B} \right) (X,Y).\)

Proof

For part (a) we have,

$$\begin{aligned} {\nabla ^2}B(X,Y,Z)= & {} \left( {\nabla \left( {\nabla B} \right) } \right) (X,Y,Z) = \left( {{\nabla _Z}\left( {\nabla B} \right) } \right) (X,Y)\\= & {} \left( {{\nabla _Z}\left( {{\nabla _Y}B} \right) } \right) X + \left( {{\nabla _Y}B} \right) \left( {{\nabla _Z}X} \right) \\&- \left( {{\nabla _Y}B} \right) ({\nabla _Z}X) - \left( {{\nabla _{{\nabla _Z}Y}}B} \right) (X)\\= & {} \left( {{\nabla _Z}\left( {{\nabla _Y}B} \right) } \right) X - \left( {{\nabla _{{\nabla _Z}Y}}B} \right) X \end{aligned}$$

Similarly,

$$\begin{aligned} {\nabla ^2}B(X,Z,Y) = \left( {{\nabla _Y}\left( {{\nabla _Z}B} \right) } \right) X - \left( {{\nabla _{{\nabla _Y}Z}}B} \right) X. \end{aligned}$$

Thus

$$\begin{aligned}&{\nabla ^2}B(X,Y,Z) - {\nabla ^2}B(X,Z,Y)\mathrm{{ }}\\&\qquad =\left( {{\nabla _Z}{\nabla _Y}B} \right) X - \left( {{\nabla _Y}\left( {{\nabla _Z}B} \right) } \right) X - \left( {{\nabla _{\left[ {Z,Y} \right] }}B} \right) X\\&\qquad =\left( {R(Z,Y)B} \right) X = R(Z,Y)\left( {BX} \right) - B\left( {\left( {R(Z,Y)X} \right) } \right) . \end{aligned}$$

For part (b), by definition of T, we have

$$\begin{aligned} {\nabla ^2}B(X,Y,Z)= & {} \left( {{\nabla _Z}\left( {\nabla B} \right) } \right) \left( {X,Y} \right) \\= & {} {\nabla _Z}\left( {\left( {\nabla B} \right) \left( {Y,X} \right) + {T^B}\left( {Y,X} \right) } \right) \\&- \left( {\nabla B} \right) \left( {{\nabla _Z}X,Y} \right) - \left( {\nabla B} \right) \left( {X,{\nabla _Z}Y} \right) \\= & {} \left( {{\nabla _Z}\left( {\nabla B} \right) \left( {Y,X} \right) } \right) + \left( {\nabla B} \right) \left( {{\nabla _Z}Y,X} \right) + \left( {\nabla B} \right) \left( {Y,{\nabla _Z}X} \right) \\&+ {\nabla _Z}\left( {{T^B}\left( {Y,X} \right) } \right) - \left( {\nabla B} \right) \left( {{\nabla _Z}X,Y} \right) - \left( {\nabla B} \right) \left( {X,{\nabla _Z}Y} \right) \\= & {} \left( {{\nabla _Z}\left( {\nabla B} \right) \left( {Y,X} \right) } \right) + \left( {{\nabla _Z}{T^B}} \right) \left( {X,Y} \right) . \end{aligned}$$

\(\square \)

Lemma 2.12

Let B be a \( (1,1)-\)self-adjoint tensor field on M , then

$$\begin{aligned} \left\langle {\left( {\Delta B} \right) X,X} \right\rangle =&\left\langle {\left( {{\nabla _X}divB} \right) ,X} \right\rangle - Ri{c_B}\left( {X,X} \right) \\&+ Ric\left( {X,BX} \right) + \left\langle {{\nabla ^*}{T^B}(X),X} \right\rangle . \end{aligned}$$

where \(\nabla ^*\) is adjoint of \(\nabla \) and

$$\begin{aligned} {\nabla ^*}{T^B}(X) = \sum \nolimits _i {\left( {{\nabla _{{e_i}}}{T^B}} \right) (X,{e_i})}. \end{aligned}$$

Proof

For simplicity, let \(\left\{ {{e_i}} \right\} \) be an orthonormal local frame field in a normal neighborhood of p such that \({\nabla _{{e_i}}}{e_j} = 0\) at p. At p, Lemma 2.11(b) implies,

$$\begin{aligned}\begin{array}{*{20}{l}} {\left\langle {\left( {\Delta B} \right) X,X} \right\rangle }&{}{ = \sum \nolimits _i {\left\langle {\left( {{\nabla _{{e_i}}}{\nabla _{{e_i}}}B} \right) X,X} \right\rangle } = \sum \nolimits _i {\left\langle {{\nabla ^2}B(X,{e_i},{e_i}),X} \right\rangle } }\\ {}&{}{ = \sum \nolimits _i {\left\langle {{\nabla ^2}B({e_i},X,{e_i}),X} \right\rangle + \left\langle {{\nabla ^*}{T^B}(X),X} \right\rangle } .} \end{array}\end{aligned}$$

So by Lemma 2.11, part (a) we have

$$\begin{aligned}\begin{array}{*{20}{l}} {\left\langle {\left( {\Delta B} \right) X,X} \right\rangle }&{}{ = \sum \nolimits _i {\left\langle {{\nabla ^2}B({e_i},X,{e_i}),X} \right\rangle } + \left\langle {{\nabla ^*}{T^B}(X),X} \right\rangle }\\ {}&{} = \sum \nolimits _i {\left\langle {{\nabla ^2}B({e_i},{e_i},X) + R({e_i},X)\left( {B{e_i}} \right) - B\left( {\left( {R({e_i},X){e_i}} \right) } \right) ,X} \right\rangle }\\ &{}\quad + \left\langle {{\nabla ^*}{T^B}(X),X} \right\rangle \\ {}&{}{ = \left\langle {\left( {{\nabla _X}divB} \right) ,X} \right\rangle - Ri{c_B}\left( {X,X} \right) + Ric\left( {X,BX} \right) + \left\langle {{\nabla ^*}{T^B}(X),X} \right\rangle .} \end{array} \end{aligned}$$

\(\square \)

The extended Bochner formula of Theorem 2.7 is rather complicated. The complication is due to the presence of the term \({\Delta _{\left( {{\nabla _{\nabla u}}B} \right) }}u\). However, when \({\Delta _{\left( {{\nabla _{\nabla u}}B} \right) }}u \le 0\), the extended Bochner formula yields a simple Riccati inequality, as the usual Bochner formula does for the Laplacian. In the following proposition, we give a presentation of \({\Delta _{\left( {{\nabla _{\nabla u}}B} \right) }}u\) which seems useful for its estimation.

Proposition 2.13

Let B be a (1,1)-self-adjoint tensor field on M, then

$$\begin{aligned} {\Delta _{\left( {{\nabla _{\nabla u}}B} \right) }}u =&\ \nabla u.\nabla u.Trace(B) - \left\langle {\nabla u,\left( {\Delta B} \right) \nabla u} \right\rangle - \left\langle {\nabla u,\left( {{\nabla ^*}{T^B}} \right) (\nabla u)} \right\rangle \\&+ \sum \nolimits _i {\left\langle {{e_i},{T^B}(\nabla u,{\nabla _{{e_i}}}\nabla u)} \right\rangle } + \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{\nabla u}}B} \right) }}({e_i},\nabla u),{e_i}} \right\rangle } . \end{aligned}$$

Proof

Let \(\left\{ {{e_i}} \right\} \) be a local orthonormal frame field. Then

$$\begin{aligned}\begin{array}{*{20}{l}} {{\Delta _{\left( {{\nabla _{{\nabla _u}}}B} \right) }}u}&{}{ = \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,\left( {{\nabla _{{e_i}}}B} \right) \nabla u} \right\rangle } + \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } }\\ {}&{}{ = \sum \nolimits _i {{e_i}.\left\langle {\nabla u,\left( {{\nabla _{{e_i}}}B} \right) \nabla u} \right\rangle - \sum \nolimits _i {\left\langle {\nabla u,\left( {\nabla _{{e_i}}^2B} \right) \nabla u} \right\rangle } } }\\ {}&{}\quad { - \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,\left( {{\nabla _{{e_i}}}B} \right) \nabla u} \right\rangle } + \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } }\\ {}&{}{ = \sum \nolimits _i {{e_i}.\left\langle {\nabla u,\left( {{\nabla _{\nabla u}}B} \right) {e_i}} \right\rangle } + \sum \nolimits _i {{e_i}.\left\langle {\nabla u,{T^B}({e_i},\nabla u)} \right\rangle } - \left\langle {\nabla u,\left( {\Delta B} \right) \nabla u} \right\rangle }\\ {}&{}\quad { - {\Delta _{\left( {{\nabla _{{\nabla _u}}}B} \right) }}u + 2\sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle .} } \end{array}\end{aligned}$$

Note

$$\begin{aligned}\begin{array}{l} \sum \nolimits _i {{e_i}.\left\langle {\nabla u,{T^B}({e_i},\nabla u)} \right\rangle } + 2\sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } \\ \,\,\,\,\,\,\,\, = \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}({e_i},\nabla u)} \right\rangle } + \sum \nolimits _i {\left\langle {\nabla u,\left( {{\nabla _{{e_i}}}{T^B}} \right) ({e_i},\nabla u)} \right\rangle } \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \sum \nolimits _i {\left\langle {\nabla u,{T^B}({e_i},{\nabla _{{e_i}}}\nabla u)} \right\rangle + 2\sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } } \\ \,\,\,\,\,\, = \sum \nolimits _i {\left\langle {\nabla u,\left( {{\nabla _{{e_i}}}{T^B}} \right) ({e_i},\nabla u)} \right\rangle + \sum \nolimits _i {\left\langle {\nabla u,{T^B}({e_i},{\nabla _{{e_i}}}\nabla u)} \right\rangle } } \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } \\ \,\,\,\,\,\, =- \left\langle {\nabla u,\left( {{\nabla ^*}{T^B}} \right) (\nabla u)} \right\rangle + \sum \nolimits _i {\left\langle {\nabla u,{T^B}({e_i},{\nabla _{{e_i}}}\nabla u)} \right\rangle + \sum \nolimits _i {\left\langle {{\nabla _{{e_i}}}\nabla u,{T^B}(\nabla u,{e_i})} \right\rangle } } \\ \,\,\,\,\,\, = -\left\langle {\nabla u,\left( {{\nabla ^*}{T^B}} \right) (\nabla u)} \right\rangle + \sum \nolimits _i {\left\langle {{e_i},{T^B}(\nabla u,{\nabla _{{e_i}}}\nabla u)} \right\rangle } \\ \end{array}\end{aligned}$$

In other words,

$$\begin{aligned}\begin{array}{*{20}{l}} {{\Delta _{\left( {{\nabla _{{\nabla _u}}}B} \right) }}u}&{}{ = \left\langle {\nabla u,div\left( {{\nabla _{\nabla u}}B} \right) } \right\rangle - \left\langle {\nabla u,\left( {\Delta B} \right) \nabla u} \right\rangle - \left\langle {\nabla u,\left( {{\nabla ^*}{T^B}} \right) (\nabla u)} \right\rangle }\\ {}&{}{\,\,\,\,\,\, + \sum \nolimits _i {\left\langle {{e_i},{T^B}(\nabla u,{\nabla _{{e_i}}}\nabla u)} \right\rangle .} } \end{array}\end{aligned}$$

But,

$$\begin{aligned} \left\langle {\nabla u,div\left( {{\nabla _{\nabla u}}B} \right) } \right\rangle= & {} \sum \nolimits _i {\left\langle {\nabla u,\left( {{\nabla _{{e_i}}}\left( {{\nabla _{\nabla u}}B} \right) } \right) {e_i}} \right\rangle } = \sum \nolimits _i {\left\langle {\left( {{\nabla _{{e_i}}}\left( {{\nabla _{\nabla u}}B} \right) } \right) \nabla u,{e_i}} \right\rangle } \\ {}= & {} \sum \nolimits _i {\left\langle {\left( {{\nabla _{\nabla u}}\left( {{\nabla _{\nabla u}}B} \right) } \right) {e_i} + {T^{\left( {{\nabla _{\nabla u}}B} \right) }}({e_i},\nabla u),{e_i}} \right\rangle } \\ {}= & {} \nabla u.\nabla u.Trace(B) + \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{\nabla u}}B} \right) }}({e_i},\nabla u),{e_i}} \right\rangle .} \end{aligned}$$

So,

$$\begin{aligned}\begin{array}{*{20}{l}} {{\Delta _{\left( {{\nabla _{{\nabla _u}}}B} \right) }}u}&{}{ = \nabla u.\nabla u.Trace(B) - \left\langle {\nabla u,\left( {\Delta B} \right) \nabla u} \right\rangle - \left\langle {\nabla u,\left( {{\nabla ^*}{T^B}} \right) (\nabla u)} \right\rangle }\\ {}&{}{\,\,\,\,\, + \sum \nolimits _i {\left\langle {{e_i},{T^B}(\nabla u,{\nabla _{{e_i}}}\nabla u)} \right\rangle } + \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{\nabla u}}B} \right) }}({e_i},\nabla u),{e_i}} \right\rangle .} } \end{array}\end{aligned}$$

\(\square \)

So, the extended Bochner formula in Theorem 2.7 can be rewritten as follows.

Proposition 2.14

Let B be a (1,1)-self-adjoint tensor field on M and \(u \in {C^\infty }(M)\), then

$$\begin{aligned} \frac{1}{2}{L_B}({{\left| {\nabla u} \right| }^2}) =&\ \frac{1}{2}\left\langle {\nabla {{\left| {\nabla u} \right| }^2},div(B)} \right\rangle + Trace\left( {B \circ hes{s^2}\left( u \right) } \right) + \left\langle {\nabla u,\nabla ({\Delta _B}u)} \right\rangle \\&- \nabla u.\nabla u.Trace(B) + \left\langle {\left( {{\nabla _{\nabla u}}divB} \right) ,\nabla u} \right\rangle + 2\left\langle {{\nabla ^*}{T^B}(\nabla u),\nabla u} \right\rangle \\&- \sum \nolimits _i {\left\langle {{e_i},{T^B}(\nabla u,{\nabla _{{e_i}}}\nabla u)} \right\rangle } - \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{\nabla u}}B} \right) }}({e_i},\nabla u),{e_i}} \right\rangle } + Ric\left( {\nabla u,B\nabla u} \right) . \end{aligned}$$

Proof

The result follows from Proposition 2.13, Theorem 2.7 and Lemma 2.12.

\(\square \)

3 Extended Mean Curvature Comparison

In this section, we prove two versions of the extended mean curvature comparison theorem when A is a (1, 1)-self-adjoint Codazzi tensor. The first one is for the elliptic operator \({\Delta _{A,f}}\), which is used for the extension of Myers' theorem, the Cheeger–Gromoll splitting theorem and the estimate of the excess functions. The second one is for the elliptic operator \({L_{A,f}}\), which is used to extend the Bishop–Gromov volume comparison theorem and its topological consequences. For the first one we use the tensor \(Ric({\partial _r},A{\partial _r})\), and for the second one the tensor \(Ri{c_{-Trace(A)}}({\partial _r},A{\partial _r}) = Ric({\partial _r},A{\partial _r}) - {\partial _r}.\left\langle {divA,{\partial _r}} \right\rangle \) is used. Let \( x_0 \in M\) be a fixed point and define \( r(x) = dist(x_0, x) \); then r(x) is smooth on \(M\backslash cut(x_0)\) and \(\left| {\nabla r} \right| = 1\). For simplicity, we denote \(\nabla r\) by \({\partial _r}\). So by Proposition 2.14 we get Theorem 3.1 as follows.

Theorem 3.1

Let A be a (1,1)-self-adjoint Codazzi tensor on M and \( r(x) := dist(x_0, x) \), then

$$\begin{aligned}0 = Trace\left( {A \circ hes{s^2}\left( r \right) } \right) + {\partial _r}.({\Delta _A}r) + Ric({\partial _r},A{\partial _r}) - \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{{\partial _r}}}A} \right) }}({e_i},{\partial _r}),{e_i}} \right\rangle }, \end{aligned}$$

on \(M\backslash cut(x_0)\).

To obtain the extended mean curvature comparison, we need to estimate \(\sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{{\partial _r}}}A} \right) }}({e_i},{\partial _r}),{e_i}} \right\rangle } \). When it is negative, a simple Riccati inequality is obtained; when it is positive, the situation is more complicated, and we estimate it by \(Hess{f^A}({\partial _r},{\partial _r})\) and adapt the approach of [31] to estimate \({\Delta _{A,{f^A}}}r\) and \({L_{A,{f^A}}}r\). Let us define \(F^A\) and \(f^A\).

Definition 3.2

We define the continuous function \({F^A}\) as follows,

$$\begin{aligned}{F^A}(x): = \mathop {\max }\limits _{X \in {T_x}M,\left| X \right| = 1} \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _X}A} \right) }}({e_i},X),{e_i}} \right\rangle } (x).\end{aligned}$$

We define the function \({f^A}\) as a smooth function which satisfies the following condition,

  1. (*)

    \({F^A}{g_M} \le Hess\left( {{f^A}} \right) \), in the sense of quadratic forms, where \({g_M}\) is the metric tensor of the manifold M.

Example 3.3

In the following cases one can take \(f^A=0\).

  1. (a)

    When \({\Sigma ^n} \subset {M^{n + 1}}(c)\) is a totally umbilical hypersurface, then \({\Delta _{{\nabla _{\nabla u}}A}}u = 0\) and \( F^A=0 \), thus one can take \( f^A=0 \).

  2. (b)

    For a Codazzi tensor A, if \({\nabla ^2}A = 0\) then \( f^A=0 \).

In the following lemma, we find a radial function \(f^A\) which satisfies condition (*) under some conditions on the radial sectional curvature of M.

Lemma 3.4

Assume \({F^A}(x) \le K(x)\) and let \(f^A\) be a radial function. If \( {f^A} \) is a solution of the following differential inequality, then condition (*) is satisfied:

$$\begin{aligned}\mathop {\sup }\limits _{x \in B(r)} K(x)=K(r) \le \left( {{f^A}'' + \frac{{h'}}{h}{f^A}'} \right) , {f^A}' > 0\end{aligned}$$

where h is the solution of differential equation,

$$\begin{aligned}\left\{ {\begin{array}{*{20}{l}} {h'' - Gh = 0,}\\ {h(0) = 0{} {} ,{} h'(0) = 1,} \end{array}} \right. \end{aligned}$$

and G is a suitable function, with \({\sec _{rad}} \le - G\) (\({\sec _{rad}}\) is the radial sectional curvature of M).
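
For example, if \({\sec _{rad}} \le 0\) one may take \(G \equiv 0\), so that \(h(r) = r\); if moreover \(K(r) \equiv {K_0} > 0\) is a constant, the radial function \({f^A}(r) = {K_0}{r^2}/4\) satisfies the differential inequality of Lemma 3.4, since for \(r > 0\),

$$\begin{aligned} {f^A}^{\prime \prime } + \frac{{h'}}{h}{f^A}^\prime = \frac{{{K_0}}}{2} + \frac{1}{r}\cdot \frac{{{K_0}r}}{2} = {K_0} \ge K(r),\qquad {f^A}^\prime = \frac{{{K_0}r}}{2} > 0. \end{aligned}$$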

Now we present the proof of Theorem 1.2. We follow the proof of Theorem 1.1 in [31], where the authors proved that if \(Ric+Hessf \ge (n-1)H\) then \({\Delta _f}r \le \Delta _H^{n + 4k}r\). We adapt their proof to obtain our result.

Proof of Theorem 1.2

We are inspired by the proof of Theorem 3.1 of [31]. For the first part, by assumption A is positive semi-definite, so for any smooth function u we have

$$\begin{aligned}Trace\left( {A \circ hes{s^2}\left( u \right) } \right) \ge \frac{{{{({\Delta _A}u)}^2}}}{{(TraceA)}},\end{aligned}$$

Since A is bounded, we have

$$\begin{aligned} \frac{1}{{\left( {n - 1} \right) {\delta _n}}} \le \frac{1}{{Trace(A)}}. \end{aligned}$$

So we get the following differential inequality,

$$\begin{aligned} 0 \ge \frac{{{{\left( {{\Delta _A}r} \right) }^2}}}{{\left( {n - 1} \right) {\delta _n}}} + {\partial _r}.({\Delta _A}r) + Ric({\partial _r},A{\partial _r}) - {\partial _r}.{\partial _r}.{f^A}. \end{aligned}$$
(3.1)

Let \(\gamma (t)\) be a minimal geodesic through the point \(x_0\). Then,

$$\begin{aligned}0 \ge \frac{{{{\left( {{\Delta _A}r} \right) }^2}}}{{\left( {n - 1} \right) {\delta _n}}} + ({\Delta _A}r)' + Ric\left( {\gamma '(t),A\gamma '(t)} \right) - {\left( {{f^A}(t)} \right) ^{\prime \prime }}.\end{aligned}$$

On the space form \({M_H^n}\) with constant sectional curvature H, one has (see [13])

$$\begin{aligned}\frac{{{{({\Delta _H}r)}^2}}}{{n - 1}} + ({\Delta _H}r)' + (n - 1)H = 0.\end{aligned}$$

We know that \({\Delta _H}r = \left( {n - 1} \right) \frac{{s{n'_H}(r)}}{{s{n_H}(r)}}\) (see [13]), where

$$\begin{aligned}s{n_H}(r) = \left\{ {\begin{array}{*{20}{c}} {\frac{1}{{\sqrt{H} }}\sin (\sqrt{H} r)\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,H > 0,} \\ {r\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,H = 0,} \\ {\frac{1}{{\sqrt{ - H} }}\sinh (\sqrt{ - H} r)\,\,\,\,\,\,H < 0.} \\ \end{array}} \right. \end{aligned}$$
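
For completeness, note that in all three cases \(s{n''_H} = - H\,s{n_H}\), so \({\Delta _H}r = \left( {n - 1} \right) \frac{{s{{n'}_H}(r)}}{{s{n_H}(r)}}\) indeed satisfies the Riccati equation above:

$$\begin{aligned} ({\Delta _H}r)' = (n - 1)\frac{{s{{n''}_H}s{n_H} - {{(s{{n'}_H})}^2}}}{{sn_H^2}} = - (n - 1)H - \frac{{{{({\Delta _H}r)}^2}}}{{n - 1}}. \end{aligned}$$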

By assumption, \(\left( {n - 1} \right) {\delta _n}H \le Ric({\partial _r},{A\partial _r})\). So,

$$\begin{aligned} {\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) ^\prime } \le - \left( {\frac{{{{({\Delta _A}r)}^2}}}{{\left( {n - 1} \right) \delta _n^2}} - \frac{{{{({\Delta _H}r)}^2}}}{{n - 1}}} \right) + \frac{1}{{{\delta _n}}}{\left( {{f^A}(t)} \right) ^{\prime \prime }}. \end{aligned}$$
(3.2)

Formula (3.2) and computation give that

$$\begin{aligned}&{{\left( {sn_H^2(r)\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) } \right) }^\prime }\\&\quad = 2s{{n'}_H}(r)s{n_H}(r)\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) + sn_H^2(r){{\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) }^\prime } \\&\quad \le \frac{{ 2sn_H^2(r)}}{{( {n - 1} )}}( {{\Delta _H}r} )( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} ) - sn_H^2(r)( {\frac{{{{({\Delta _A}r)}^2}}}{{( {n - 1} )\delta _n^2}} - \frac{{{{({\Delta _H}r)}^2}}}{{n - 1}}} ) \\&\qquad + \frac{{sn_H^2(r)}}{{{\delta _n}}}{{\left( {{f^A}(r)} \right) }^{\prime \prime }} \\&\quad = \frac{{sn_H^2(r)}}{{\left( {n - 1} \right) }}\left( 2{\frac{{\left( {{\Delta _H}r} \right) \left( {{\Delta _A}r} \right) }}{{{\delta _n}}} - 2{{\left( {{\Delta _H}r} \right) }^2} - \frac{{{{({\Delta _A}r)}^2}}}{{\delta _n^2}} + {{\left( {{\Delta _H}r} \right) }^2}} \right) \\&\qquad +\frac{{sn_H^2(r)}}{{{\delta _n}}}{\left( {{f^A}(r)} \right) ^{\prime \prime }} \\&\quad =- \frac{{sn_H^2(r)}}{{\left( {n - 1} \right) }}{{\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) }^2} + \frac{{sn_H^2(r)}}{{{\delta _n}}}{{\left( {{f^A}(r)} \right) }^{\prime \prime }} \\&\quad \le \frac{{sn_H^2(r)}}{{{\delta _n}}}{{\left( {{f^A}(r)} \right) }^{\prime \prime }}. \end{aligned}$$

By Proposition 2.6, \(\mathop {\lim }\nolimits _{r \rightarrow 0} sn_H^2(r)\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) = 0\). So integrating with respect to r yields,

$$\begin{aligned} \frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{\Delta _A}r} \right)\le & {} sn_H^2(r)\left( {{\Delta _H}r} \right) + \frac{1}{{{\delta _n}}}\int _0^r {sn_H^2(t){{\left( {{f^A}(t)} \right) }^{\prime \prime }}dt}\\= & {} sn_H^2(r)\left( {{\Delta _H}r} \right) + \frac{1}{{{\delta _n}}}sn_H^2(r){{\left( {{f^A}(r)} \right) }^\prime }\\&- \frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^\prime }{{\left( {{f^A}(t)} \right) }^\prime }dt} . \end{aligned}$$

By Definition 2.3(c) one has,

$$\begin{aligned}\frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{\Delta _{A,{f^A}}}(r)} \right) \le sn_H^2(r)\left( {{\Delta _H}r} \right) - \frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^\prime }{{\left( {{f^A}(t)} \right) }^\prime }dt} .\end{aligned}$$

A further integration by parts implies,

$$\begin{aligned}\frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{\Delta _{A,{f^A}}}(r)} \right) \le&sn_H^2(r)\left( {{\Delta _H}r} \right) - \frac{1}{{{\delta _n}}}{f^A}(r){\left( {sn_H^2(r)} \right) ^\prime }\\&+ \frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^{\prime \prime }}{f^A}(t)dt.} \end{aligned}$$

When \( H>0 \), by assumption, we have \(r \le \frac{\pi }{{4\sqrt{H} }}\), so \({\left( {sn_H^2(t)} \right) ^{\prime \prime }} \ge 0\), thus

$$\begin{aligned}\frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{\Delta _{A,{f^A}}}(r)} \right) \le sn_H^2(r)\left( {{\Delta _H}r} \right) + \frac{{2K}}{{{\delta _n}}}{\left( {sn_H^2(r)} \right) ^\prime }.\end{aligned}$$

We know,

$$\begin{aligned}{\left( {sn_H^2(r)} \right) ^\prime } = 2{\left( {sn_H^{}(r)} \right) ^\prime }sn_H^{}(r) = \frac{2}{{n - 1}}\left( {{\Delta _H}r} \right) \left( {sn_H^2(r)} \right) ,\end{aligned}$$

so,

$$\begin{aligned}\left( {{\Delta _{A,{f^A}}}(r)} \right) \le {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}\left( {n - 1} \right) }}} \right) \left( {{\Delta _H}r} \right) .\end{aligned}$$

For the second part, from the condition on \(Ri{c_{-Trace(A)}}\left( {{\partial _r},A{\partial _r}} \right) \) we have,

$$\begin{aligned}{\left( {\frac{{{\Delta _A}}}{{{\delta _n}}} - {\Delta _H}r} \right) ^\prime } \le - \left( {\frac{{{{({\Delta _A}r)}^2}}}{{\left( {n - 1} \right) \delta _n^2}} - \frac{{{{({\Delta _H}r)}^2}}}{{n - 1}}} \right) + \frac{1}{{{\delta _n}}}{\left( {{f^A}(t) - Trace(A)} \right) ^{\prime \prime }}.\end{aligned}$$

By a similar computation,

$$\begin{aligned}{\left( {sn_H^2(r)\left( {\frac{{{\Delta _A}r}}{{{\delta _n}}} - {\Delta _H}r} \right) } \right) ^\prime } \le \frac{{sn_H^2(r)}}{{{\delta _n}}}{\left( {{f^A}(t) - Trace(A)} \right) ^{\prime \prime }}.\end{aligned}$$

Thus,

$$\begin{aligned} \frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{\Delta _A}r} \right)\le & {} sn_H^2(r)\left( {{\Delta _H}r} \right) + \frac{1}{{{\delta _n}}}\int _0^r {sn_H^2(t){{\left( {{f^A}(t) - Trace(A)} \right) }^{\prime \prime }}dt} \\ {}= & {} sn_H^2(r)\left( {{\Delta _H}r} \right) + \frac{1}{{{\delta _n}}}sn_H^2(r){{\left( {{f^A}(r) - Trace(A)} \right) }^\prime } \\&-\frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^\prime }{{\left( {{f^A}(t) - Trace(A)} \right) }^\prime }dt}. \end{aligned}$$

Note that A is a Codazzi tensor, so \(divA = \nabla Trace(A)\), and

$$\begin{aligned}&\frac{1}{{{\delta _n}}}sn_H^2(r)\left( {\left( {{\Delta _A}r} \right) + \left\langle {divA,{\partial _r}} \right\rangle - {\partial _r}.{f^A}(r)} \right) \le sn_H^2(r)\left( {{\Delta _H}r} \right) \\&\quad - \frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^\prime }{{\left( {{f^A}(t) - Trace(A)} \right) }^\prime }dt} , \end{aligned}$$

and \({L_A}r = \left\langle {divA,{\partial _r}} \right\rangle + {\Delta _A}r\), thus,

$$\begin{aligned}&\frac{1}{{{\delta _n}}}sn_H^2(r)\left( {{L_A}r - {\partial _r}.{f^A}(r)} \right) \le sn_H^2(r)\left( {{\Delta _H}r} \right) \\&\qquad - \frac{1}{{{\delta _n}}}\int _0^r {{{\left( {sn_H^2(t)} \right) }^\prime }{{\left( {{f^A}(t) - Trace(A)} \right) }^\prime }dt} .\end{aligned}$$

Similarly,

$$\begin{aligned} {L_A}r \le {\delta _n}\left( {1 + \frac{{4\left( {K + K'} \right) }}{{{\delta _n}\left( {n - 1} \right) }}} \right) \left( {{\Delta _H}r} \right) + {\partial _r}.{f^A}(r). \end{aligned}$$
(3.3)

\(\square \)

Remark 3.5

When \( H>0 \), for \(\frac{\pi }{{4\sqrt{H} }} \le r \le \frac{\pi }{{2\sqrt{H} }}\) we have

$$\begin{aligned}\int _0^r {{{\left( {sn_H^2(t)} \right) }^{\prime \prime }}{f^A}(t)dt} \le&K\left( {\int _0^{\frac{\pi }{{4\sqrt{H} }}} {{{\left( {sn_H^2(t)} \right) }^{\prime \prime }}dt} \, - \int _{\frac{\pi }{{4\sqrt{H} }}}^r {{{\left( {sn_H^2(t)} \right) }^{\prime \prime }}dt} \,\,\,} \right) \\&= K\left( {\frac{2}{{\sqrt{H} }}\, - s{n_H}(2r)} \right) .\end{aligned}$$

So, when \(\frac{\pi }{{4\sqrt{H} }} \le r \le \frac{\pi }{{2\sqrt{H} }}\),

$$\begin{aligned}\left( {{\Delta _{A,{f^A}}}(r)} \right) \le {\delta _n}\left( {1 + \frac{1}{{{\delta _n}}}\frac{{4K}}{{\left( {n - 1} \right) \sin \left( {2\sqrt{H} r} \right) }}} \right) \left( {{\Delta _H}r} \right) .\end{aligned}$$

This estimate will be used to prove the extended Myers’ theorem.
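
Indeed, evaluating the above estimate at \(r = \frac{\pi }{{2\sqrt{H} }}\), where \({\Delta _H}r = (n - 1)\sqrt{H} \cot (\sqrt{H} r)\) vanishes, and using \(\cot (\sqrt{H} r)/\sin (2\sqrt{H} r) = 1/(2{\sin ^2}(\sqrt{H} r))\), one obtains

$$\begin{aligned} {\Delta _{A,{f^A}}}r \le {\delta _n}(n - 1)\sqrt{H} \cot \left( {\frac{\pi }{2}} \right) + \frac{{4K\sqrt{H} }}{{2{{\sin }^2}\left( {\frac{\pi }{2}} \right) }} = 2K\sqrt{H} , \end{aligned}$$

which is exactly the bound (3.4) used in the proof of Theorem 1.3.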

Corollary 3.6

If Trace(A) is constant and \(Trace(A)H \le Ric\left( {{\partial _r},A{\partial _r}} \right) \), then

$$\begin{aligned} {\Delta _{A,{f^A}}}r\le & {} \frac{{Trace(A)}}{{n - 1}}\left( {1 + \frac{{4K}}{{Trace(A)}}} \right) {\Delta _H}r,\\ {L_A}r\le & {} \frac{{Trace(A)}}{{n - 1}}\left( {1 + \frac{{4K}}{{Trace(A)}}} \right) \left( {{\Delta _H}r} \right) + {\partial _r}.{f^A}(r). \end{aligned}$$

Now we prove the extended Myers' theorem by using the so-called excess functions; this idea is used in [23, 31, 34]. Adapting their approach, we obtain the compactness result by applying the extended mean curvature comparison Theorem 1.2 for the elliptic differential operator \({{\Delta _{A,{f^A}}}}\) to the excess function.

Proof of Theorem 1.3

(Myers’ theorem) (a) Let pq are two points in M with \(dist\left( {p,q} \right) \ge \frac{\pi }{{\sqrt{H} }}\). Define \(B: = dist\left( {p,q} \right) - \frac{\pi }{{\sqrt{H} }}\), \({r_1}(x): = dist\left( {p,x} \right) \) and \({r_2}(x): = dist\left( {q,x} \right) \). Let \({e_{p,q}}(x)\) be the excess function associated to the points pq. By triangle inequality, we have \({e_{p,q}}(x) \ge 0\) and \({e_{p,q}}\left( {\gamma (t)} \right) = 0\), where \( \gamma \) is the minimal geodesic joining pq. Hence \({\Delta _{A,{f^A}}}e\left( {\gamma (t)} \right) \ge 0\) in the barrier sense. Let \({y_1} = \gamma \left( {\frac{\pi }{{2\sqrt{H} }}} \right) \) and \({y_2} = \gamma \left( {B + \frac{\pi }{{2\sqrt{H} }}} \right) \). So \({r_i}\left( {{y_i}} \right) = \frac{\pi }{{2\sqrt{H} }} , i=1,2\). Remark 3.5 concludes that

$$\begin{aligned} {\Delta _{A,{f^A}}}({r_i})({y_i}) \le 2K\sqrt{H}. \end{aligned}$$
(3.4)

From (3.1) and the assumption on \(Ric\left( {{\partial _r},A{\partial _r}} \right) \) we get, for \(r \ge r_0\) along a minimal geodesic,

$$\begin{aligned} {\Delta _{A,{f^A}}}r \le {\Delta _{A,{f^A}}}{r_0} - (n - 1){\delta _n}H\left( {r - {r_0}} \right) . \end{aligned}$$

Thus,

$$\begin{aligned} {\Delta _{A,{f^A}}}{r_1}({y_2}) \le {\Delta _{A,{f^A}}}{r_1}({y_1}) - B(n - 1){\delta _n}H. \end{aligned}$$
(3.5)

So by (3.4) and (3.5) we have

$$\begin{aligned}0 \le {\Delta _{A,{f^A}}}\left( {{e_{p,q}}} \right) ({y_2}) = {\Delta _{A,{f^A}}}{r_1}({y_2}) + {\Delta _{A,{f^A}}}{r_2}({y_2}) \le 4K\sqrt{H} - B(n - 1){\delta _n}H,\end{aligned}$$

thus \(B \le \frac{{4K}}{{{\delta _n}(n - 1)\sqrt{H} }}\) and

$$\begin{aligned}dist(p,q) \le \frac{\pi }{{\sqrt{H} }} + \frac{{4K}}{{{\delta _n}(n - 1)\sqrt{H} }}.\end{aligned}$$

(b) Let \(\left( {{\bar{M}},\Phi } \right) \) be the universal cover of M and define \({\bar{A}}: = {\Phi ^*}A = {\left( {{\Phi _*}} \right) ^{ - 1}} \circ A \circ {\Phi _*}\). Note that for any unit vector field \(X \in {\mathfrak {X}}(M)\), we have

$$\begin{aligned} \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{{\Phi ^*}X}}{\bar{A}}} \right) }}\left( {{\Phi ^*}{e_i},{\Phi ^*}X} \right) ,{\Phi ^*}{e_i}} \right\rangle }&= \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _X}A} \right) }}\left( {{e_i},X} \right) ,{e_i}} \right\rangle } \circ \Phi \\&\le Hess{f^A}\left( {X,X} \right) . \end{aligned}$$

So by defining \({f^{{\bar{A}}}}: = {f^A} \circ \Phi \), one has

$$\begin{aligned}\sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{{\Phi ^*}X}}{\bar{A}}} \right) }}\left( {{\Phi ^*}{e_i},{\Phi ^*}X} \right) ,{\Phi ^*}{e_i}} \right\rangle } \le Hess{f^{{\bar{A}}}}\left( {{\Phi ^*}X,{\Phi ^*}X} \right) ,\end{aligned}$$

and

$$\begin{aligned}\left| {{f^{{\bar{A}}}}} \right| = \left| {{f^A}} \right| \,\,\,\,\,\,\,\,\,and\,\,\,\,\,\,{\bar{R}}ic\left( {{\Phi ^*}X,{\bar{A}}\left( {{\Phi ^*}X} \right) } \right) = Ric\left( {X,AX} \right) .\end{aligned}$$

Thus, by part (a), the universal cover \( {{\bar{M}}}\) is compact and consequently M has a finite fundamental group. \(\square \)

To generalize the inequalities of the extended mean curvature comparison theorem to all of M, we need three notions of inequalities in weak senses (see [13]). The first one is the inequality in the barrier sense, which was originally defined by Calabi [9] in 1958.

Definition 3.7

(see [13]). Let \(u \in {C^0}(M)\), \( X\in {\mathfrak {X}}(M) \) and let A be a \((1,1)\)-tensor field bounded from below as in Definition 2.2. Then \({\Delta _{A,X}}u \ge v\) in the barrier sense if, for any point \( x_0 \) in M and any \(\varepsilon > 0\), there exist a function \({u_{{x_0},\varepsilon }}\), called a support function, and a neighborhood \({U_{{x_0},\varepsilon }}\) of \(x_0\), such that the following properties are satisfied,

  1. (a)

    \({u_{{x_0},\varepsilon }} \in {C^2}({U_{{x_0},\varepsilon }})\),

  2. (b)

    \({u_{{x_0},\varepsilon }}({x_0}) = u({x_0})\) and \(u(x) \ge {u_{{x_0},\varepsilon }}(x)\) for all \(x \in {U_{{x_0},\varepsilon }}\),

  3. (c)

    \({\Delta _{A,X}}{u_{{x_0},\varepsilon }}({x_0}) \ge v - \varepsilon .\)

Similarly, \({\Delta _{A,X}}u \le v\) in barrier sense, if \({\Delta _{A,X}}\left( { - u} \right) \ge - v\) in the sense just defined.

By [9] we know that, if \(\gamma \) is a minimal geodesic from p to q, then for any \(\varepsilon > 0\) the function \({r_{q,\varepsilon }}(x) = \varepsilon + dist\left( {\gamma (\varepsilon ),x} \right) \) is an upper barrier for the distance function \(r(x) = dist\left( {p,x} \right) \). So we obtain the following inequality in the barrier sense for the distance function. The following lemma is used in Proposition 6.2 for the extension of the quantitative maximal principle of Abresch and Gromoll, and to obtain the same inequality in the distribution sense in Lemma 3.11.

Lemma 3.8

(see [13]). Let \( p \in M \) and \(r(x) = dist\left( {p,x} \right) \). Suppose \({\Delta _{A,X}}(r) \le \alpha (r)\) point-wise on \(M\backslash cut(p)\) for a continuous function \(\alpha \), let \(v \in {C^2}({\mathbb {R}})\) be non-negative and set \(u(x) = v(r(x))\). Then

  1. (a)

    If \(v' \ge 0\), then \({\Delta _{A,X}}(u) \le \left| {\nabla r} \right| _A^2v''(r) + \alpha (r)v'(r)\) in barrier sense on M.

  2. (b)

    If \(v' \le 0\), then \({\Delta _{A,X}}(u) \ge \left| {\nabla r} \right| _A^2v''(r) + \alpha (r)v'(r)\) in barrier sense on M.

The same results hold for \({L_{A,f}}(r)\).

The second notion of inequality in a weak sense is the inequality in the viscosity sense, which was introduced by Crandall and Lions in [12].

Definition 3.9

(see [13]). Let \(h\in {C^0}(M)\). Then \({L_{A,f}}h(p) \ge a\) in the viscosity sense if \({L_{A,f}}\phi (p) \ge a\) whenever \(\phi \in {C^2}\left( U \right) \) and \(\left( {h - \phi } \right) (p) = \mathop {\inf }\nolimits _U \left( {h - \phi } \right) \), where U is a neighborhood of p. Similarly, \({L_{A,f}}h \le a\) is defined.

By Lemma 5.1, it is clear that barrier subsolutions are viscosity subsolutions. The last and very useful notion of inequality is the inequality in the sense of distributions.

Definition 3.10

(see [13]). For continuous functions u, h on the manifold M, \({L_{A,f}}(u) \le h\) in the weak or distribution sense if \(\int _M {u{L_{A,f}}(\phi )} \,dvo{l_g} \le \int _M {\phi h} \,dvo{l_g}\) for each \(0 \le \phi \in Li{p_c}(M)\).

When A is bounded from below as in Definition 2.2, it is known that if u is a viscosity solution of \({L_{A,f}}u \le h\) on M, then it is also a distribution solution and vice versa (see [17, Theorem 3.2.11] or [20]). The following lemma is used to prove the monotonicity of the volume of geodesic balls in Theorems 1.4 and 4.3.

Lemma 3.11

Let \({L_{A,f}}(r) \le \alpha (r)\) point-wise on \(M\backslash cut(p)\) for a continuous function \(\alpha \). Let \(u(x) = v(r(x))\), where \(v \in {C^2}({\mathbb {R}})\) is non-negative with \(v' \ge 0\) and \(v'' = 0\). Then \({L_{A,f}}u \le v'\alpha (r)\) in the distribution sense on M.

Proof

By Lemma 3.8, the inequality is valid in the barrier sense for \({L_{A,f}}(r)\). So it is valid in the viscosity sense, and by [20] or [17] it is valid in the distribution sense. \(\square \)

4 Extended Volume Growth

In this section, we obtain some results on the growth of the extended volume. We define the extended volume as follows.

Definition 4.1

Let M be a Riemannian manifold, A a self-adjoint (1,1)-tensor field on M, \( x_0\in M \) and \( r(x):=dist(x_0,x) \). We define the extended volume of the geodesic ball \( B(x_0,R) \) as \(vol^A(B(x_0,R)): = \int _{B(x_0,R)} {\left\langle {A\nabla r,\nabla r} \right\rangle } dvo{l_g}\).
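
For instance, when \(A = id\) one has \(\left\langle {A\nabla r,\nabla r} \right\rangle = 1\) and \(vo{l^A}\) is the usual Riemannian volume. More generally, if A is bounded as in Definition 2.2, then

$$\begin{aligned} {\delta _1}\,vol(B({x_0},R)) \le vo{l^A}(B({x_0},R)) \le {\delta _n}\,vol(B({x_0},R)). \end{aligned}$$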

We compare this volume with the usual volume of geodesic balls in the model spaces \({{\mathbb {R}}^n}\,,\,{{\mathbb {S}}^n}\) and \({{\mathbb {H}}^n}\). In order to give the proof of Theorem 1.4 we state and prove Theorems 4.2 and 4.3.

Theorem 4.2

Let M be as before, \( x_0\in M \), \( r(x):=dist(x_0,x) \), and let A be a self-adjoint (1,1)-tensor field on M. Assume that \({L_A}r \le \frac{1}{C}\frac{{s{{n'}_H}(r)}}{{s{n_H}(r)}} + a\) point-wise on \(M\backslash cut(x_0)\) and that the following condition holds,

$$\begin{aligned} \frac{1}{{C\left( {m - 1} \right) }} \le \left\langle {\nabla r,A\nabla r} \right\rangle . \end{aligned}$$
(4.1)

Then

$$\begin{aligned}\frac{{vo{l^A}(B(p,R))}}{{vo{l^A}(B(p,r))}} \le {e^{\left( {a/{\delta _1}} \right) R}}\frac{{vol_H^m(R)}}{{vol_H^m(r)}}.\end{aligned}$$

Proof

We use the standard method that appears in the proof of Theorem 2.14 of [24], but the computation is different in our case. By Lemma 3.11, the inequality holds weakly (in the distribution sense) on M. Thus, for every \(0 \le \varphi \in Li{p_c}(M)\), we have

$$\begin{aligned} - \int _M {\left\langle {\nabla \varphi ,A\nabla r} \right\rangle dvo{l_g}} \le \int _M {\left( {\frac{1}{C}\frac{{s{{n'}_H}\left( {r(x)} \right) }}{{s{n_H}\left( {r(x)} \right) }} + a} \right) \varphi \,dvo{l_g}} . \end{aligned}$$
(4.2)

For any \(\varepsilon > 0\), we apply the test function \({\varphi _\varepsilon }(x)\) to the above (weak) inequality,

$$\begin{aligned}{\varphi _\varepsilon }(x) = {\rho _\varepsilon }(r(x))sn_H^{1 - m}(r(x)){e^{ - (a/{\delta _1})r(x)}}\end{aligned}$$

where \({\rho _\varepsilon }(t)\) is the function

$$\begin{aligned}{\rho _\varepsilon }(t) = \left\{ {\begin{array}{*{20}{c}} 0 &{} {} &{} {t \in \left[ {0,r} \right) ,} \\ {\frac{{t - r}}{\varepsilon }} &{} {} &{} {t \in \left[ {r,r + \varepsilon } \right) ,} \\ 1 &{} {} &{} {t \in \left[ {r + \varepsilon ,R - \varepsilon } \right) ,} \\ {\frac{{R - t}}{\varepsilon }} &{} {} &{} {t \in \left[ {R - \varepsilon ,R} \right) ,} \\ 0 &{} {} &{} {t \in \left[ {R, + \infty } \right) .} \\ \end{array}} \right. \end{aligned}$$

By computation we have,

$$\begin{aligned}\nabla {\varphi _\varepsilon } =&\left\{ { - \frac{{{\chi _{R - \varepsilon ,R}}}}{\varepsilon } + \frac{{{\chi _{r,r + \varepsilon }}}}{\varepsilon } - \left( {\left( {m - 1} \right) \frac{{s{{n'}_H}(r(x))}}{{s{n_H}(r(x))}} + (a/{\delta _1})} \right) {\rho _\varepsilon }} \right\} \\&\times {e^{ - (a/{\delta _1})r(x)}}sn_H^{ - m + 1}(r(x))\nabla r,\end{aligned}$$

for a.e. \(x \in M\), where \({\chi _{s,t}}\) is the characteristic function of the set \(B(x_0,t)\backslash B(x_0,s)\). Inserting \({\varphi _\varepsilon }\) into (4.2) and computing, we get,

$$\begin{aligned}&\frac{1}{\varepsilon }\int _{B({x_0},R)\backslash B({x_0},R - \varepsilon )} {sn_H^{ - m + 1}(r(x)){e^{ - (a/{\delta _1})r(x)}}\left\langle {\nabla r,A\nabla r} \right\rangle \,dvo{l_g}} \\&\qquad - \frac{1}{\varepsilon }\int _{B({x_0},r + \varepsilon )\backslash B({x_0},r)} {sn_H^{ - m + 1}(r(x)){e^{ - (a/{\delta _1})r(x)}}\left\langle {\nabla r,A\nabla r} \right\rangle \,dvo{l_g}} \\&\quad \le \int _M {\left( {\frac{1}{C} - \left( {m - 1} \right) \left\langle {\nabla r,A\nabla r} \right\rangle } \right) s{{n'}_H}(r(x))\,sn_H^{ - m}(r(x))\,{e^{ - (a/{\delta _1})r(x)}}{\rho _\varepsilon }\,dvo{l_g}} \\&\qquad + \int _M {\left( {a - \frac{a}{{{\delta _1}}}\left\langle {\nabla r,A\nabla r} \right\rangle } \right) {e^{ - (a/{\delta _1})r(x)}}sn_H^{1 - m}(r(x)){\rho _\varepsilon }\,dvo{l_g}.} \end{aligned}$$

So, noting that \(s{n_H}^\prime (r(x))\,sn_H^{ - m}(r(x)) \ge 0\), if

$$\begin{aligned}\left( {\frac{1}{C} - \left( {m - 1} \right) \left\langle {\nabla r,A\nabla r} \right\rangle } \right) \le 0 \Leftrightarrow \frac{1 }{{C\left( {m - 1} \right) }} \le \left\langle {\nabla r,A\nabla r} \right\rangle , \end{aligned}$$

then

$$\begin{aligned}&\frac{1}{\varepsilon }\int _{B({x_0},R)\backslash B({x_0},R - \varepsilon )} {sn_H^{ - m + 1}(r(x)){e^{ - (a/{\delta _1})r(x)}}\left\langle {\nabla r,A\nabla r} \right\rangle \,dvo{l_g}} \\&\qquad - \frac{1}{\varepsilon }\int _{B({x_0},r + \varepsilon )\backslash B({x_0},r)} {sn_H^{ - m + 1}(r(x)){e^{ - (a/{\delta _1})r(x)}}\left\langle {\nabla r,A\nabla r} \right\rangle \,dvo{l_g}} \le 0. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\), we conclude,

$$\begin{aligned}\frac{{vo{l^A}(\partial B({x_0},R))}}{{{e^{(a/{\delta _1})R}}sn_H^{m - 1}(R)}} - \frac{{vo{l^A}(\partial B({x_0},r))}}{{{e^{(a/{\delta _1})r}}sn_H^{m - 1}(r)}} \le 0.\end{aligned}$$

So the function

$$\begin{aligned}r \mapsto \frac{{vo{l^A}(\partial B(x_0,r))}}{{{e^{\left( {a/{\delta _1}} \right) r}}sn_H^{m - 1}(r)}}\end{aligned}$$

is non-increasing. By using Lemma 3.2 of [38],

$$\begin{aligned}\frac{{\int _{{R_1}}^R {vo{l^A}(\partial B(x_0,t))dt} }}{{\int _{{r_1}}^r {vo{l^A}(\partial B(x_0,t))dt} }} \le \frac{{\int _{{R_1}}^R {{e^{\left( {a/{\delta _1}} \right) t}}sn_H^{m - 1}(t)dt} }}{{\int _{{r_1}}^r {{e^{\left( {a/{\delta _1}} \right) t}}sn_H^{m - 1}(t)} dt}},\end{aligned}$$

for any \(r_1 < r\), \(R_1 < R\) and \(r < R\). In other words,

$$\begin{aligned}\frac{{vo{l^A}(B(x_0,{R_1},R))}}{{vo{l^A}(B(x_0,{r_1},r))}} \le \frac{{vol_H^{m,\left( {a/{\delta _1}} \right) }({R_1},R)}}{{vol_H^{m,\left( {a/{\delta _1}} \right) }({r_1},r)}}.\end{aligned}$$

where \(vol_H^{m,\left( {a/{\delta _1}} \right) }(r,R) = \int _r^R {\int _{{S^{m - 1}}} {{e^{\left( {a/{\delta _1}} \right) t}}sn_H^{m - 1}(t)} d{\theta _{m - 1}}dt} \) is the volume of the annulus \(B(O,R)\backslash B(O,r)\) in the pointed metric measure space \(M_{H,\left( {a/{\delta _1}} \right) }^m = \left( {M_H^m,{g_H},{e^{ - h}}\,dvo{l_{g_H}},O} \right) \) with \(h(x) = - \left( {a/{\delta _1}} \right) dist(x,O)\), where O is a fixed point in the simply connected \( m\)-dimensional space form \( {M_H^m} \) of constant sectional curvature H. So, by the same discussion as in [31], the result follows.

\(\square \)
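The weighted model volume \(vol_H^{m,\left( {a/{\delta _1}} \right) }(r,R)\) appearing at the end of the proof can be evaluated numerically. The sketch below (with arbitrary sample values for \(H\), \(m\) and \(a/{\delta _1}\), not taken from the paper; the helper names are ours) computes it by one-dimensional quadrature; with \(r_1 = R_1 = 0\) the printed ratio is the model quantity that bounds \(vo{l^A}(B(x_0,R))/vo{l^A}(B(x_0,r))\) by the monotonicity established above.

```python
# A numerical sketch (not part of the proof) of the weighted model volume
# vol_H^{m,(a/delta_1)}(r, R); H, m and lam = a/delta_1 below are sample values.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def sn_H(t, H):
    """Solution of sn'' + H sn = 0 with sn(0) = 0, sn'(0) = 1."""
    if H > 0:
        return np.sin(np.sqrt(H) * t) / np.sqrt(H)
    if H < 0:
        return np.sinh(np.sqrt(-H) * t) / np.sqrt(-H)
    return t

def weighted_model_volume(r, R, H, m, lam):
    """vol_H^{m,lam}(r, R) = area(S^{m-1}) * int_r^R e^{lam t} sn_H(t)^{m-1} dt."""
    sphere_area = 2.0 * np.pi ** (m / 2) / gamma(m / 2)
    integral, _ = quad(lambda t: np.exp(lam * t) * sn_H(t, H) ** (m - 1), r, R)
    return sphere_area * integral

# Model ratio bounding vol^A(B(x_0, 2))/vol^A(B(x_0, 1)) for these sample constants:
H, m, lam = -1.0, 4, 0.3
print(weighted_model_volume(0.0, 2.0, H, m, lam) / weighted_model_volume(0.0, 1.0, H, m, lam))
```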

Theorem 4.3

Let M be as before, \( x_0\in M \), \( r(x):=dist(x_0,x) \) and A a self-adjoint (1,1)-tensor field on M. Assume that \({L_A}r \le \frac{1}{C}\frac{{s{{n'}_H}(r)}}{{s{n_H}(r)}} + {\partial _r}.{f^A}(r)\) pointwise on \(M\backslash cut(x_0)\) and that the following condition holds,

$$\begin{aligned} \frac{1}{{C\left( {m - 1} \right) }} \le \left\langle {\nabla r,A\nabla r} \right\rangle . \end{aligned}$$
(4.3)

Then

  1. (a)

    For any \(0<R\), we have,

    $$\begin{aligned}\frac{d}{{dR}}\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) \le&\frac{1}{{{\delta _n}}}\frac{{{c_m}R\,sn_H^{m - 1}(R)}}{{{{\left( {vol_H^mB(R)} \right) }^{1 + 1/p}}}}{\left( {\frac{{vol^A\left( {B(x_0,R)} \right) }}{{vol_H^mB(R)}}} \right) ^{1 - 1/p}}\\&\times {\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}.\end{aligned}$$
  2. (b)

    For any \(0<r< R \), we get

    $$\begin{aligned}&{\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B(x_0,r))}}{{vol_H^mB(r)}}} \right) ^{1/p}}\\&\qquad \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}\int _r^R {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB(t)} \right) }^{1 + 1/p}}}}dt.} \end{aligned}$$
  3. (c)

    For any \(0 < {r_1} \le {r_2} \le {R_1} \le {R_2} \), the following extended volume comparison inequality for annular regions holds,

    $$\begin{aligned}\begin{array}{l} {\left( {\frac{{vol^A(B({x_0},{r_2},{R_2}))}}{{vol_H^mB({r_2},{R_2})}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},{r_1},{R_1}))}}{{vol_H^mB({r_1},{R_1})}}} \right) ^{1/p}} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R_2}}\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \times \left[ {\int _{{R_1}}^{{R_2}} {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB({r_2},t)} \right) }^{1 + 1/p}}}}} dt + \int _{{r_1}}^{{r_2}} {\frac{{{R_1}sn_H^{m - 1}({R_1})}}{{{{\left( {vol_H^mB(t,{R_1})} \right) }^{1 + 1/p}}}}dt} } \right] ,\\ \end{array}\end{aligned}$$

where \({vol_{H}^m (R)}\) is the volume of \(B(o,R)\) in the m-dimensional simply connected complete manifold with constant sectional curvature H and

$$\begin{aligned}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}: = {\left( {\int _{ B({x_0},R)} {\delta _n^{}{{\left( {\left| {\nabla {f^A}} \right| } \right) }^p}} \,dvo{l_g}} \right) ^{1/p}}.\end{aligned}$$

Remark 4.4

If \(r \rightarrow 0\), then the integral

$$\begin{aligned}\int _r^R {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB(t)} \right) }^{1 + 1/p}}}}dt}, \end{aligned}$$

blows up.

Proof

(Proof of Theorem 4.3) The proof is based on the same discussion as in the proof of Theorem 4.2 and is inspired by the proof of Lemma 2.18 of [24].

We know that

$$\begin{aligned} - \int _M {\left\langle {\nabla \varphi ,A\nabla r} \right\rangle dvo{l_g}} \le \frac{1}{C}\int _M {\frac{{s{{n'}_H}\left( {r(x)} \right) }}{{s{n_H}\left( {r(x)} \right) }}\varphi dvo{l_g}} + \int _M {{\partial _r}.{f^A}(r)\varphi dvo{l_g}} . \nonumber \\ \end{aligned}$$
(4.4)

Let \({\varphi _\varepsilon }(x)\) be the radial cut-off function \({\varphi _\varepsilon }(x) = {\rho _\varepsilon }(r(x))sn_H^{1 - m}(r(x))\). Arguing as in the proof of Theorem 4.2, we conclude,

$$\begin{aligned}\begin{array}{*{20}{l}} {\frac{{vo{l^A}(\partial B({x_0},R))}}{{sn_H^{m - 1}(R)}} - \frac{{vo{l^A}(\partial B({x_0},r))}}{{sn_H^{m - 1}(r)}}}&{}{ \le \int _{\left( {B({x_0},R)\backslash B({x_0},r)} \right) } {{\partial _r}.{f^A}(r)sn_H^{ - m + 1}(r(x))dvo{l_g}} }\\ {}&{}{ \le sn_H^{ - m + 1}(r)\int _{\left( {B({x_0},R)\backslash B({x_0},r)} \right) } {\left| {\nabla {f^A}(r)} \right| } dvo{l_g}.} \end{array}\end{aligned}$$

Let \(p > 1\). Using the Hölder inequality, we obtain

$$\begin{aligned}&{sn_H^{m - 1}(r)vol_{}^A(\partial B(x_0,R)) - sn_H^{m - 1}(R)vol^A(\partial B(x_0,r))} \nonumber \\&\quad { \le \frac{1}{{{\delta _n}}}sn_H^{m - 1}(R){{\left( {vol^A\left( {B(x_0,R)} \right) } \right) }^{1 - \left( {1/p} \right) }}{{\left( {\int _{ B(x_0,R)} {{\delta _n}{{\left( {\left| {\nabla {f^A}} \right| } \right) }^p}dvo{l_g}} } \right) }^{1/p}}.}\nonumber \\ \end{aligned}$$
(4.5)

So, we have

$$\begin{aligned}&\frac{d}{{dR}}\left( {\frac{{vo{l^A}(B({x_0},R))}}{{vol_H^mB(R)}}} \right) \\&\quad =\frac{{vol_H^mB(R)vo{l^A}(\partial B({x_0},R)) - vol_H^m\left( {\partial B(R)} \right) vo{l^A}(B({x_0},R))}}{{{{\left( {vol_H^mB(R)} \right) }^2}}} \\&\quad \le {c_m}{{\left( {vol_H^mB(R)} \right) }^{ - 2}}\int _0^R {\frac{1}{{{\delta _n}}}sn_H^{m - 1}(R){{\left( {vo{l^A}\left( {B({x_0},R)} \right) } \right) }^{1 - \left( {1/p} \right) }}dr}\\&\qquad \times {{\left( {\int _{B({x_0},R)} {{\delta _n}{{\left( {\left| {\nabla {f^A}} \right| } \right) }^p}dvo{l_g}} } \right) }^{1/p}} \\&\quad \le \frac{1}{{{\delta _n}}}\frac{{{c_m}Rsn_H^{m - 1}(R)}}{{{{\left( {vol_H^mB(R)} \right) }^{1 + 1/p}}}}{{\left( {\frac{{vo{l^A}\left( {B({x_0},R)} \right) }}{{vol_H^mB(R)}}} \right) }^{1 - 1/p}}{{\left( {\int _{B({x_0},R)} {{\delta _n}{{\left( {\left| {\nabla {f^A}} \right| } \right) }^p}dvo{l_g}} } \right) }^{1/p}} \\&\quad \le \frac{1}{{{\delta _n}}}\frac{{{c_m}Rsn_H^{m - 1}(R)}}{{{{\left( {vol_H^mB(R)} \right) }^{1 + 1/p}}}}{{\left( {\frac{{vo{l^A}\left( {B({x_0},R)} \right) }}{{vol_H^mB(R)}}} \right) }^{1 - 1/p}}{{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| }_{p,R}}. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{d}{{dR}}\left( {{{\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) }^{1/p}}} \right)&= \frac{1}{p}{{\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) }^{ - 1 + 1/p}}\frac{d}{{dR}}\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) \\&\quad \le \frac{{{c_m}}}{{p{\delta _n}}}{{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| }_{p,R}}\frac{R}{{{{\left( {vol_H^mB(R)} \right) }^{1 + 1/p}}}}sn_H^{m - 1}(R). \end{aligned}$$

Consequently,

$$\begin{aligned}&{\left( {\frac{{vol^A(B(x_0,R))}}{{vol_H^mB(R)}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B(x_0,r))}}{{vol_H^mB(r)}}} \right) ^{1/p}}\\&\qquad \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}\int _r^R {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB(t)} \right) }^{1 + 1/p}}}}dt.} \end{aligned}$$

For the volume comparison for annular regions we use the procedure of part (a) repeatedly. First note that

$$\begin{aligned}&\frac{d}{{dR}}\left( {\frac{{vol^A(B({x_0},r,R))}}{{vol_H^mB(r,R)}}} \right) \\&\quad = {c_m}\frac{{vol^A(\partial B({x_0},R))\int _r^R {sn_H^{m - 1}(t)dt} - sn_H^{m - 1}(R)vol^A(B({x_0},r,R))}}{{{{\left( {vol_H^mB(r,R)} \right) }^2}}}\\&\quad ={{\left( {vol_H^mB(r,R)} \right) }^{ - 2}}\int _r^R\big [ {{c_m}sn_H^{m - 1}(t)vol^A(\partial B({x_0},R)) - {c_m}sn_H^{m - 1}(R)vol^A(\partial B({x_0},t))\big ]dt}\\&\quad \le \frac{1}{{{\delta _n}}}\frac{{{c_m}Rsn_H^{m - 1}(R)}}{{{{\left( {vol_H^mB(r,R)} \right) }^{1 + 1/p}}}}{{\left( {\frac{{vol^A\left( {B({x_0},r,R)} \right) }}{{vol_H^mB(r,R)}}} \right) }^{1 - 1/p}}{{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| }_{p,R}}, \end{aligned}$$

where we have used \({vol^A\left( {B({x_0},r,R)} \right) }\) instead of \(vo{l^A}\left( {B({x_0},R)} \right) \) in the inequality (4.5). Similarly,

$$\begin{aligned}\frac{d}{{dR}}{\left( {\frac{{vol^A(B({x_0},r,R))}}{{vol_H^mB(r,R)}}} \right) ^{1/p}} \le \frac{1}{{p{\delta _n}}}\frac{{{c_m}Rsn_H^{m - 1}(R)}}{{{{\left( {vol_H^mB(r,R)} \right) }^{1 + 1/p}}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R}}.\end{aligned}$$

Thus,

$$\begin{aligned}&{\left( {\frac{{vol^A(B({x_0},{r_2},{R_2}))}}{{vol_H^mB({r_2},{R_2})}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},{r_2},{R_1}))}}{{vol_H^mB({r_2},{R_1})}}} \right) ^{1/p}} \\ \nonumber&\,\,\,\,\,\,\,\,\,\, \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R_2}}\int _{{R_1}}^{{R_2}} {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB({r_2},t)} \right) }^{1 + 1/p}}}}} dt. \end{aligned}$$
(4.6)

Similarly,

$$\begin{aligned}&\frac{d}{{dr}}\left( {\frac{{vol^A(B({x_0},r,R))}}{{vol_H^mB(r,R)}}} \right) \\&\quad = {c_m}\frac{{ - vol^A(\partial B({x_0},r))\int _r^R {sn_H^{m - 1}(t)dt} - \left( { - sn_H^{m - 1}(r)} \right) vol^A(B({x_0},r,R))}}{{{{\left( {vol_H^mB(r,R)} \right) }^2}}}\\&\quad ={{\left( {vol_H^mB(r,R)} \right) }^{ - 2}}\int _r^R {{c_m}sn_H^{m - 1}(r)vol^A(\partial B({x_0},t)) - {c_m}sn_H^{m - 1}(t)vol^A(\partial B({x_0},r))dt}\\&\quad \le \frac{1}{{{\delta _n}}}\frac{{{c_m}sn_H^{m - 1}(R){{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| }_{p,R}}}}{{{{\left( {vol_H^mB(r,R)} \right) }^{1 + 1/p}}}}{{\left( {\frac{{vol^A\left( {B({x_0},r,R)} \right) }}{{vol_H^mB(r,R)}}} \right) }^{1 - 1/p}}. \end{aligned}$$

So,

$$\begin{aligned}\frac{d}{{dr}}\left( {{{\left( {\frac{{vol^A(B({x_0},r,R))}}{{vol_H^mB(r,R)}}} \right) }^{1/p}}} \right) \le \frac{{{c_m}Rsn_H^{m - 1}(R)}}{{p{\delta _n}}}\frac{{{{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| }_{p,R}}}}{{{{\left( {vol_H^mB(r,R)} \right) }^{1 + 1/p}}}}.\end{aligned}$$

And

$$\begin{aligned}&{\left( {\frac{{vol^A(B({x_0},{r_2},{R_1}))}}{{vol_H^mB({r_2},{R_1})}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},{r_1},{R_1}))}}{{vol_H^mB({r_1},{R_1})}}} \right) ^{1/p}} \\ \nonumber&\,\,\,\,\,\, \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R_1}}\int _{{r_1}}^{{r_2}} {\frac{{{R_1}sn_H^{m - 1}({R_1})}}{{{{\left( {vol_H^mB(t,{R_1})} \right) }^{1 + 1/p}}}}dt} . \end{aligned}$$
(4.7)

By adding (4.7) and (4.6), we get

$$\begin{aligned}&{\left( {\frac{{vol^A(B({x_0},{r_2},{R_2}))}}{{vol_H^mB({r_2},{R_2})}}} \right) ^{1/p}} - {\left( {\frac{{vol^A(B({x_0},{r_1},{R_1}))}}{{vol_H^mB({r_1},{R_1})}}} \right) ^{1/p}} \\&\,\,\,\,\,\, \le \frac{{{c_m}}}{{p{\delta _n}}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R_2}} \left[ \int _{{R_1}}^{{R_2}} {\frac{{tsn_H^{m - 1}(t)}}{{{{\left( {vol_H^mB({r_2},t)} \right) }^{1 + 1/p}}}}} dt\right. \\&\qquad \left. + \int _{{r_1}}^{{r_2}} {\frac{{{R_1}sn_H^{m - 1}({R_1})}}{{{{\left( {vol_H^mB(t,{R_1})} \right) }^{1 + 1/p}}}}dt} \right] . \end{aligned}$$

\(\square \)

Proof of Theorem 1.4

(Extended volume comparison) The proof is done by using Theorem 1.2, part (b) and taking

$$\begin{aligned}m = C\left( {{\delta _n},n,{\delta _1},K,K',H} \right) = \left[ {\frac{{{\delta _n}(n - 1) + 4(K + K')}}{{{\delta _1}}}} \right] + 2,\end{aligned}$$

in Theorems 4.2 and 4.3. \(\square \)
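For orientation, the integer m produced by this choice can be computed directly; the following small sketch (with sample values for the constants, and the bracket read as the integer part) is only an illustration of the arithmetic.

```python
# A small arithmetic sketch of the constant m used above; the bracket is read
# as the integer part, and the numerical inputs are sample values.
import math

def comparison_dimension(delta_n, n, delta_1, K, K_prime):
    return math.floor((delta_n * (n - 1) + 4 * (K + K_prime)) / delta_1) + 2

# Example: with delta_n = delta_1 = 1 and K = K' = 0 this reduces to m = n + 1.
print(comparison_dimension(1.0, 5, 1.0, 0.0, 0.0))   # -> 6
```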

Now we extend three well-known consequences of the Bishop–Gromov volume comparison theorem to the extended Ricci tensor. First we extend Theorem 3.13 of [38].

Theorem 4.5

Let M be a complete manifold and A a bounded self-adjoint (1,1)-tensor field on M such that the following conditions are satisfied:

  1. (a)

    \(Ri{c_{ - TraceA}}\left( {X,AX} \right) \ge (n - 1){\delta _n}H{\left| X \right| ^2}\)

  2. (b)

    \(\left| {{f^A}} \right| \le K\)

  3. (c)

    \(\left| {\nabla {f^A}} \right| \le a,\)

  4. (d)

    \(diam(M) \le D\)

Then the first Betti number \(b_1\) satisfies the estimate \({b_1} \le C\left( {{\delta _1},{\delta _n},n,K,H{D^2},aD} \right) \).

The second is an extension of Anderson’s theorem [4].

Theorem 4.6

Let M be a complete manifold and A a bounded self-adjoint (1,1)-tensor field on M such that the following conditions are satisfied:

  1. (a)

    \(Ri{c_{ - TraceA}}\left( {X,AX} \right) \ge (n - 1){\delta _n}H{\left| X \right| ^2}\)

  2. (b)

    \(\left| {{f^A}} \right| \le K\)

  3. (c)

    \(\left| {\nabla {f^A}} \right| \le a\),

  4. (d)

    \(diam(M) \le D\), \(vol(M) \ge V\).

Then there are only finitely many isomorphism types of \({\pi _1}(M)\) among the manifolds M satisfying conditions (a)–(d).

The proofs of Theorems 4.5 and 4.6 follow the proofs of Theorems 3.13 and 3.14 of [38], noting that the extended Ricci curvature can also give control on the first Betti number via the extended volume comparison theorem.

The third and last theorem of this section is an extension of Yau’s theorem, which was originally proved by Calabi and Yau in 1976 [37] via analytic methods in the Riemannian case for the Ricci tensor. We adapt [33] for the proof.

Theorem 4.7

(Extension of Yau's theorem). Let M be non-compact, \( x_0 \) be a fixed point and \(Ri{c_{-Trace(A)}}\left( {{\partial _r},A{\partial _r}} \right) \ge 0\). Then for any \(p > n\) and \(R \ge 2\), there is an \(\varepsilon = \varepsilon (m,p,A,R+1)\) such that if

$$\begin{aligned} \mathop {\sup }\limits _{x \in M} \frac{{\delta _n^{}}}{{{\delta _1}vol(B(x,R + 1))}}{\left( {\int _{B(x,R + 1)} {{{\left| {\nabla {f^A}} \right| }^p}dvo{l_g}} } \right) ^{1/p}} < \varepsilon \end{aligned}$$

then

$$\begin{aligned}vol\left( {B({x_0},R)} \right) \ge cR,\end{aligned}$$

where c is a constant.

Proof

Let \( x \in M \) be such that \( dist\left( {{x_0},x} \right) = R \ge 2 \). By the relative volume comparison Theorem 4.3 for annuli, letting \({r_1} = 0\,,\,{r_2} = R - 1\,,\,{R_1} = R\) and \({R_2} = R + 1\), we have,

$$\begin{aligned}&{\left( {\frac{{vo{l^A}\left( {B(x,R - 1,R + 1)} \right) }}{{{{\left( {1+ R} \right) }^m} - {{\left( {R - 1} \right) }^m}}}} \right) ^{1/p}} - {\left( {\frac{{vo{l^A}\left( {B(x,0,R)} \right) }}{{{R^m}}}} \right) ^{1/p}}\\&\qquad \le \frac{{2{c_m}}}{{p{\delta _n}}}{\left( {R + 1} \right) ^{m + 1}}{\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R + 1}}.\end{aligned}$$

So

$$\begin{aligned}\left( {\frac{{vo{l^A}\left( {B(x,R - 1,R + 1)} \right) }}{{{{\left( {R + 1} \right) }^m} - {{\left( {R - 1} \right) }^m}}}} \right) \le&\left( {\frac{{vo{l^A}\left( {B(x,R)} \right) }}{{{R^m}}}} \right) \\&+ C{\left( {R + 1} \right) ^{\left( {m + 1} \right) p}}\left\| {\left( {\delta _n^{1/p}\left( {\left| {\nabla {f^A}} \right| } \right) } \right) } \right\| _{p,R + 1}^p .\end{aligned}$$

By multiplying with \(\frac{{{{\left( {R + 1} \right) }^m} - {{\left( {R - 1} \right) }^m}}}{{vo{l^A}\left( {B(x,R + 1)} \right) }}\), we get

$$\begin{aligned}&\left( {\frac{{vo{l^A}\left( {B(x,R - 1,R + 1)} \right) }}{{vo{l^A}\left( {B(x,R + 1)} \right) }}} \right) \\&\quad \le \frac{D}{R} + D{\left( {R + 1} \right) ^{\left( {m + 1} \right) p}}\mathop {\sup }\limits _{x \in M} \frac{{\delta _n^{}{{\left( {\int _{B(x,R + 1)} {{{\left| {\nabla {f^A}} \right| }^p}dvo{l_g}} } \right) }^{1/p}}}}{{{\delta _1}vo{l^m}(B(x,R + 1))}},\end{aligned}$$

where D is some constant depending on \( m,p,\delta _1 \). We choose \(\varepsilon = \varepsilon (m,p,A,R+1)\) small enough such that

$$\begin{aligned}\frac{{vo{l^A}\left( {B(x,R - 1,R + 1)} \right) }}{{vo{l^A}\left( {B(x,R + 1)} \right) }} \le \frac{{2D}}{R}.\end{aligned}$$

For \(R \ge 2\) we have,

$$\begin{aligned}vo{l^A}\left( {B({x_0},2R + 1)} \right) \ge \frac{{vo{l^A}\left( {B({x_0},1)} \right) }}{{2D}}R.\end{aligned}$$

\(\square \)

5 Cheeger–Gromoll Splitting Theorem

One of the important applications of the mean curvature comparison theorem is the Cheeger–Gromoll splitting theorem. In this section we extend the Cheeger–Gromoll splitting theorem by replacing \(Ric(X,X)\) with \(Ric\left( {X,AX} \right) \). Our approach for the proof is similar to the original one, i.e., we show that the vector field \(\nabla b_\gamma ^ +\) is Killing and \(\left\| {\nabla b_\gamma ^ + } \right\| = 1\), where \(b_\gamma ^ + \) is the Busemann function associated to the ray \( {\gamma _ + } \). Then, by the extended Bochner formula and the restriction on the extended Ricci tensor, we show that \(b_\gamma ^ + \) is a harmonic function. First, one should provide the maximum principle for the operator \({\Delta _{A,X}} = {\Delta _A} - \left\langle {X,\nabla \,\,} \right\rangle \); thus we recall the following lemma.

Lemma 5.1

(see [13]). Let \(f,h \in {C^2}(M)\) and \(p \in M\) and U be a neighborhood of p. If

  1. (a)

    \(f(p) = h(p)\),

  2. (b)

    \(f(x) \ge h(x)\) for all \(x \in U\),

then

  1. (a)

    \(\nabla f(p) = \nabla h(p)\),

  2. (b)

    \(Hessf(p) \ge Hessh(p)\),

  3. (c)

    \({\Delta _{A,X}}f(p) \ge {\Delta _{A,X}}h(p)\).

Proof

Parts (a), (b) are clear. For part (c) it is sufficient to show that \({\Delta _A}f(p) \ge {\Delta _A}h(p)\). We know that \(Hess\left( {f - h} \right) (p) \ge 0\). By assumption, A is positive definite, so \(A=B^2\) for some self-adjoint positive definite tensor B, and

$$\begin{aligned} {\Delta _A}\left( {f - h} \right) (p)= & {} Trace\left( {{B^2} \circ hess\left( {f - h} \right) (p)} \right) \\= & {} Trace\left( {B \circ hess\left( {f - h} \right) (p) \circ B} \right) \\= & {} \sum \nolimits _i {\left\langle {hess\left( {f - h} \right) (p) \circ B{e_i},B{e_i}} \right\rangle } \ge 0. \end{aligned}$$

Thus \({\Delta _{A,X}}f(p) \ge {\Delta _{A,X}}h(p)\). \(\square \)

Now we extend the maximum principle for \({\Delta _A}\).

Theorem 5.2

(Extended maximum principle). Let \(f \in {C^0}(M)\) and \({\Delta _{A,X}}f \ge 0\) in barrier sense, then f is constant in a neighborhood of each local maximum of f. So, if f has a global maximum, then f is constant.

Proof

Let \(p \in M\) be a local maximum of f. If \({\Delta _{A,X}}f (p)> 0\), then we have a contradiction by Lemma 5.1 part (c). So we assume \({\Delta _{A,X}}f (p)\ge 0\). Without loss of generality we may assume that there is a sufficiently small \(r < inj(p)\) such that p is a maximum of the restricted function \(f:\overline{B(p,r)} \rightarrow {\mathbb {R}} \) and there is some point \({x_0} \in \partial B(p,r)\) such that \(f({x_0}) \ne f(p)\). As usual, we define

$$\begin{aligned} V: = \left\{ {x \in \partial B(p,r):f(x) = f(p)} \right\} . \end{aligned}$$

Let U be an open neighborhood with the property that \(V \subseteq U \subseteq \partial B(p,r)\) and \(\phi \) be a function such that,

$$\begin{aligned}\phi (p) = 0\,\,,\,\,\,{\left. \phi \right| _U} < 0\,\,,\,\,\nabla \phi \ne 0.\end{aligned}$$

Then for the function \(h = {e^{\alpha \phi }} - 1\) we have,

$$\begin{aligned}{\Delta _{A,X}}h = \alpha {e^{\alpha \phi }}\left( {\alpha \left\langle {\nabla \phi ,A\nabla \phi } \right\rangle + {\Delta _{A,X}}\phi } \right) \ge \alpha {e^{\alpha \phi }}\left( {\alpha {\delta _1}{{\left| {\nabla \phi } \right| }^2} + {\Delta _{A,X}}\phi } \right) .\end{aligned}$$

So by choosing \(\alpha \) large enough we have,

$$\begin{aligned}{\Delta _{A,X}}h > 0\,\,,\,\,{\left. h \right| _V}\mathrm{{ < }}0\,\,,\,\,h(p) = 0.\end{aligned}$$

Now by defining \(F = f + \delta h\), for small enough \(\delta > 0\), we have \({\Delta _{A,X}}F > 0\) on \(\overline{B(p,r)} \) and F has a maximum point in \(B(p,r)\), which is a contradiction by the first part of the proof. \(\square \)
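The key identity for \(h = {e^{\alpha \phi }} - 1\) used in this proof can be checked symbolically. The sketch below does this in a flat two-dimensional coordinate patch (an assumption made only for the check), with \({\Delta _{A,X}}u = div(A\nabla u) - \left\langle {X,\nabla u} \right\rangle \) written out in coordinates and a symmetric matrix field A.

```python
# A symbolic spot-check, in a flat 2-dimensional coordinate patch (an assumption
# made only for this sketch), of the identity used above:
#   Delta_{A,X}(e^{alpha phi} - 1)
#     = alpha e^{alpha phi} (alpha <grad phi, A grad phi> + Delta_{A,X} phi),
# where Delta_{A,X} u = div(A grad u) - <X, grad u>.
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')
phi = sp.Function('phi')(x, y)
a11, a12, a22 = (sp.Function(s)(x, y) for s in ('a11', 'a12', 'a22'))
X1, X2 = (sp.Function(s)(x, y) for s in ('X1', 'X2'))

A = sp.Matrix([[a11, a12], [a12, a22]])          # symmetric (self-adjoint) field
Xvec = sp.Matrix([X1, X2])

def grad(u):
    return sp.Matrix([sp.diff(u, x), sp.diff(u, y)])

def L_AX(u):
    Agrad = A * grad(u)
    return sp.diff(Agrad[0], x) + sp.diff(Agrad[1], y) - (Xvec.T * grad(u))[0]

h = sp.exp(alpha * phi) - 1
lhs = L_AX(h)
rhs = alpha * sp.exp(alpha * phi) * (alpha * (grad(phi).T * A * grad(phi))[0] + L_AX(phi))
print(sp.simplify(sp.expand(lhs - rhs)))    # expected: 0
```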

The second step in the proof of the Cheeger–Gromoll splitting theorem is the regularity property, which states that when \(\Delta f = 0\) in the barrier sense, then f is smooth. In fact, using this property one can show that, when the Ricci tensor is non-negative and M contains a line, the Busemann functions associated with the line are smooth. As in the original proof, first we recall the regularity property for \({\Delta _{A,X}}\) in the following proposition.

Proposition 5.3

If A is bounded from below ( \({\delta _1} > 0 \) ) and \({\Delta _{A,X}}f = 0\) in the barrier sense, then f is smooth.

Proof

Since A is bounded from below, \({\Delta _{A,X}}\) satisfies the ellipticity conditions of Sections 6.3–6.4 or Theorem 6.17 of [15]; thus f is smooth. \(\square \)

Now we are ready to prove the extension of the Cheeger–Gromoll splitting theorem. To this end, we use the so-called Busemann functions of a line in M and show that the gradient of a Busemann function \({b^ +}\) is Killing and \(\left\| {\nabla {b^ + }} \right\| = 1\). First we recall the definition of a Busemann function.

Definition 5.4

[13, 31, 38]. Let \(\gamma :\left[ {0, + \infty } \right) \rightarrow M\) be a ray. The Busemann function \({b^\gamma }\) associated to \(\gamma \) is defined as \({b^\gamma }(x): = \mathop {\lim }\nolimits _{t \rightarrow \infty } \left( {t - d(x,\gamma (t))} \right) \).

Now, we should prove that \({\Delta _{A,{f^A}}}\left( {{b^\gamma }} \right) \ge 0\) in the barrier sense.

Proposition 5.5

If \(Ric({\partial _r},A{\partial _r}) \ge 0\), then \({\Delta _{A,{f^A}}}\left( {{b^\gamma }} \right) \ge 0\) in the barrier sense.

Proof

For each point \(q \in M\), the functions defined by \({h_t}(x) = t - d(x,{\overline{\gamma }} (t)) + {b^\gamma }(q)\) are lower barrier functions for \({b^\gamma }\) at the point q, where \({\overline{\gamma }}\) is an asymptotic ray to the ray \( \gamma \) at q [13, 31, 38]. So \({h_t}\) is smooth in a neighborhood U of q. Finally, by Theorem 1.2 and the restriction on the extended Ricci tensor,

$$\begin{aligned}{\Delta _{A,{f^A}}}{h_t}(q) = - {\Delta _{A,{f^A}}}\left( {d(q,{\bar{\gamma }} (t))} \right) \ge - {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}\left( {n - 1} \right) }}} \right) \frac{1}{{d(q,{\bar{\gamma }} (t))}}.\end{aligned}$$

Since \(\mathop {\lim }\nolimits _{t \rightarrow \infty } d(q,{\overline{\gamma }} (t)) = \infty \), for each \(\varepsilon > 0\) one can find t such that \({\Delta _{A,{f^A}}}{h_t}(q) \ge - \varepsilon \), and this completes the proof. \(\square \)

Corollary 5.6

If \(Ric({\partial _r},A{\partial _r}) \ge 0\) and \({\gamma _ + }\) and \({\gamma _ - }\) are the two rays derived from the line \(\gamma \), and \({b^ + }\) and \({b^ - }\) denote their Busemann functions, then

  1. (a)

    \(b^ + + b ^ - = 0\),

  2. (b)

    \({\Delta _{A,{f^A}}}{b^ + } = {\Delta _{A,{f^A}}}{b^ - } = 0\) and the functions \(b^ +\) and \( b ^ - \) are smooth.

Proof

(a) By Proposition 5.5 and the restriction on the extended Ricci tensor, we know that \({\Delta _{A,{f^A}}}{b^ + },{\Delta _{A,{f^A}}}{b^ - } \ge 0\). So

$$\begin{aligned}{\Delta _{A,{f^A}}}\left( {{b^ + } + {b^ - }} \right) \ge 0.\end{aligned}$$

By the triangle inequality, \(\left( {{b^ + } + {b^ - }} \right) (\gamma (0)) = 0\) is the maximum value of the subharmonic function \({b^ + } + {b^ - }\), so by Theorem 5.2, \({b^ + } + {b^ - } = 0\). For part (b), by (a) we have \({\Delta _{A,{f^A}}}{b^ + } = - {\Delta _{A,{f^A}}}{b^ - }\), so Proposition 5.5 implies that \({\Delta _{A,{f^A}}}{b^ + } = {\Delta _{A,{f^A}}}{b^ - } = 0\); finally, by Proposition 5.3, \({b^ + },{b^ - }\) are smooth. \(\square \)

Corollary 5.7

If \(Ric({\partial _r},A{\partial _r}) \ge 0\), then \(\left\| {\nabla {b^\gamma }} \right\| = 1\).

Proof

By Corollary 5.6, the Busemann function \({b^ + }\) is smooth, so \(\left\| {\nabla {b^ + }} \right\| = 1.\) For the complete proof see [11, 13, 14, 31, 38]. \(\square \)

Proof of Theorem 1.5

By Corollary 5.7, we have \(\left\| {\nabla {b^ + }} \right\| = 1\), so \({\nabla _{\nabla {b^ + }}}\nabla {b^ + } = 0\). Since A is a Codazzi tensor, \( - \nabla {b^ + }.\nabla {b^ + }.Trace(A) + \left\langle {{\nabla _{\nabla {b^ + }}}div(A),\nabla {b^ + }} \right\rangle = 0\). Theorem 3.1 implies,

$$\begin{aligned}0 = Trace\left( {A \circ hes{s^2}\left( { {b^ + }} \right) } \right)&+ \nabla {b^ + }.({\Delta _A}{b^ + }) - \sum \nolimits _i \left\langle {{T^{\left( {{\nabla _{\nabla {b^ + }}}A} \right) }}(\nabla {b^ + },{e_i}),{e_i}} \right\rangle \\&+ Ric(\nabla {b^ + },A\nabla {b^ + }) \end{aligned}$$

Corollary 5.6 yields \({\Delta _{A,{f^A}}}{b^ + } = 0\), so \({\Delta _A}{b^ + } = \left\langle {\nabla {f^A},\nabla {b^ + }} \right\rangle \) and

$$\begin{aligned}\nabla {b^ + }.({\Delta _A}{b^ + }) = \nabla {b^ + }.\left\langle {\nabla {f^A},\nabla {b^ + }} \right\rangle = Hess{f^A}\left( {\nabla {b^ + },\nabla {b^ + }} \right) .\end{aligned}$$

By Definition 3.2 we know that

$$\begin{aligned}Hess{f^A}\left( {\nabla {b^ + },\nabla {b^ + }} \right) - \sum \nolimits _i {\left\langle {{T^{\left( {{\nabla _{\nabla {b^ + }}}A} \right) }}(\nabla {b^ + },{e_i}),{e_i}} \right\rangle } \ge 0,\end{aligned}$$

so

$$\begin{aligned}0 \ge Trace\left( {A \circ hes{s^2}\left( { {b^ + }} \right) } \right) + Ric(\nabla {b^ + },A\nabla {b^ + }).\end{aligned}$$

The restriction on the extended Ricci tensor yields \(Trace\left( {A \circ hes{s^2}\left( { {b^ + }} \right) } \right) =0\). Since A is positive definite (note that A is invertible), \(hes{s^2}({b^ + }) \equiv 0\), that is, \(Hess\,{b^ + } \equiv 0\). Consequently \({\nabla {b^ + }}\) is a parallel, hence Killing, vector field and its flow consists of isometries. Also \(\left\| {\nabla {b^ + }} \right\| = 1\), so the following map

$$\begin{aligned}\begin{array}{l} \psi :N \times {\mathbb {R}} \rightarrow M \\ \psi (x,t) = Fl_t^{\nabla {b^ + }}(x), \\ \end{array}\end{aligned}$$

splits M isometrically, where \(N = \left\{ {x:{b^ + }(x) = 0} \right\} \). The restriction \(A_N Y := AY - \left\langle {{\partial _t},AY} \right\rangle {\partial _t}\) is also a Codazzi tensor on N. To see this note that,

$$\begin{aligned}\begin{array}{*{20}{l}} {\nabla _X^N\left( {{A_N}Y} \right) }&{}{ = \nabla _X^{}\left( {{A_N}Y} \right) = \nabla _X^{}\left( {AY - \left\langle {{\partial _t},AY} \right\rangle {\partial _t}} \right) }\\ {}&{}{ = \nabla _X^{}\left( {AY} \right) - \left\langle {{\partial _t},\nabla _X^{}\left( {AY} \right) } \right\rangle {\partial _t} = pro{j_{\partial _t^ \bot }}\left( {\nabla _X^{}\left( {AY} \right) } \right) ,} \end{array}\end{aligned}$$

and

$$\begin{aligned}{A_N}\left( {\nabla _X^NY} \right) = {A_N}\left( {\nabla _X^{}Y} \right) = pro{j_{\partial _t^ \bot }}\left( {A\left( {\nabla _X^{}Y} \right) } \right) .\end{aligned}$$

So

$$\begin{aligned}\begin{array}{*{20}{l}} {\left( {\nabla _X^N{A_N}} \right) Y}&{}{ = \nabla _X^N\left( {{A_N}Y} \right) - {A_N}\left( {\nabla _X^NY} \right) = pro{j_{\partial _t^ \bot }}\left( {\nabla _X^{}\left( {AY} \right) - A\left( {\nabla _X^{}Y} \right) } \right) }\\ {}&{}{ = pro{j_{\partial _t^ \bot }}\left( {\left( {\nabla _Y^{}A} \right) X} \right) = \left( {\nabla _Y^N{A_N}} \right) X.} \end{array}\end{aligned}$$

The last part is clear by properties of the Ricci tensor. \(\square \)

Lifting the extended Ricci tensor \(Ric\left( {X,AX} \right) \) to \(Ric\left( {X,{\bar{A}}X} \right) \) on the universal covering space of M, as is done in the proofs of Theorems 1.3 and 1.5, and with a similar argument as in [31] or [38], we obtain the following results.

Theorem 5.8

If M is a compact Riemannian manifold with \(Ric\left( {X,AX} \right) \ge 0\) for any vector field X, then M is finitely covered by \({N^{\dim M - k}} \times {{\mathbb {T}}^k}\), where N is compact and simply connected and \({\mathbb {T}}^k\) is the \(k\)-dimensional flat torus.

Theorem 5.8 has the following topological results.

Corollary 5.9

Let M be compact with \(Ric\left( {X,AX} \right) \ge 0\) for any vector field X, then

  1. (a)

    \(\,{b_1}(M) \le n\).

  2. (b)

    \({\pi _1}(M)\) has a free abelian subgroup of finite index and of rank \(\le n\).

  3. (c)

    If at one point \(Ric\left( {X,AX} \right) >0\) for any non zero vector X, then \({\pi _1}(M)\) is finite.

Similar to Theorem 6.8 of [31], the splitting Theorem 1.5 gives the following extension of Sormani’s theorem [26].

Theorem 5.10

Let M be a complete and non-compact manifold and \(Ric \left( {X,AX} \right) >0\) for any unit vector field \( X \in {\mathfrak {X}}(M) \), then

  1. (a)

    M has only one end.

  2. (b)

    M has the loops to infinity property. In particular, if M is simply connected at infinity then M is simply connected.

6 Excess Functions and Applications

Excess functions are important in the study of the topology of manifolds, so finding upper bounds for them is of interest. For \(p,q \in M\), we recall that the excess function \( {e_{p,q}}(x) \) is defined as

$$\begin{aligned} {e_{p,q}}(x): = d(p,x) + d(q,x) - d(p,q). \end{aligned}$$
(6.1)
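As a toy illustration (not from the paper), in Euclidean space the excess function vanishes exactly on a minimal geodesic from p to q and is positive elsewhere, by the triangle inequality:

```python
# A toy Euclidean illustration (not from the paper) of the excess function (6.1):
# it vanishes exactly on a minimal geodesic (here, the segment) from p to q.
import numpy as np

def excess(p, q, x):
    return np.linalg.norm(p - x) + np.linalg.norm(q - x) - np.linalg.norm(p - q)

p, q = np.array([0.0, 0.0]), np.array([4.0, 0.0])
print(excess(p, q, np.array([2.0, 0.0])))   # on the segment -> 0
print(excess(p, q, np.array([2.0, 1.0])))   # off the segment -> positive
```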

As in the classical case, to estimate the excess function we need the following extension of the Abresch–Gromoll quantitative maximal principle (see [13, 38]). The proof of the following theorem is an adaptation of the ideas of [1] and [10]. First we recall the following definition.

Definition 6.1

[38] The dilation of a function f is denoted by dil(f) and is defined as

$$\begin{aligned}dil(f) = \mathop {\sup }\limits _{x \ne y} \frac{{\left| {f(x) - f(y)} \right| }}{{d(x,y)}}.\end{aligned}$$

Proposition 6.2

(Quantitative maximal principle). Let \( U:B(y,R + \eta ) \rightarrow {\mathbb {R}} \) be a Lipschitz function. For \(H \le 0\), assume that \(\left( {n - 1} \right) {\delta _n}H \le Ric\left( {{\partial _r},A{\partial _r}} \right) \), \(\left| {{f^A}} \right| \le K\) and

  1. (a)

    \(U \ge 0\),

  2. (b)

    \(dil(U) \le a\), \(U({y_0}) = 0\), where \({y_0}\in \overline{B(y,R)} \).

  3. (c)

    \({\Delta _{A,{f^A}}}(U) \le b\) in the barrier sense,

then \(U(y) \le ac +G(c)\) for all \(0< c < R \) where G(r(x)) is the unique function on \(M_H^n\) such that

  1. (1)

    \(G(r) > 0\) for \(0< r < R\),

  2. (2)

    \(G'(r) < 0\) for \(0< r < R\),

  3. (3)

    \(G(R) = 0\),

  4. (4)

    \(\left( {{\delta _1} - {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) } \right) G'' + {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) {\Delta _H}G = b\), where \({\Delta _H}\) is the Laplace operator on the space form \(M_H^n\).

Proof

We construct G explicitly. Since \({\Delta _H} = \frac{{{\partial ^2}}}{{\partial {r^2}}} + {m_H}(r)\frac{\partial }{{\partial r}} + {\tilde{\Delta }} \) and G is radial, it is sufficient to solve the ODE

$$\begin{aligned}\left( {{\delta _1} - {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) } \right) G'' + {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \left( {G'' + {m_H}(r)G'} \right) = b,\end{aligned}$$

or equivalently, to solve

$$\begin{aligned} {\delta _1}G'' + {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) {m_H}(r)G' = b. \end{aligned}$$
(6.2)

For \( H=0 \) we know that \({m_H}(r) = \frac{{n - 1}}{r}\), so by (6.2) we have

$$\begin{aligned}{\delta _1}G'' + {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \frac{{n - 1}}{r}G' = b,\end{aligned}$$

or equivalently

$$\begin{aligned}{\delta _1}G''{r^2} + \left( {n - 1} \right) {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) G'r = b{r^2},\end{aligned}$$

which is an Euler-type ODE. For \(n \ge 3\), the solutions of this ODE are,

$$\begin{aligned}G = \frac{b}{{2C}}{r^2} + {c_1} + {c_2}{r^D},\end{aligned}$$

where

$$\begin{aligned}&C = {\delta _1} + \left( {n - 1} \right) {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \,\,\,\,\,\,and\\&D = 1 - \left( {n - 1} \right) \frac{{{\delta _n}}}{{{\delta _1}}}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) .\end{aligned}$$

Now \(G(R) = 0\) gives

$$\begin{aligned}\frac{b}{{2C}}{R^2} + {c_1} + {c_2}{R^D} = 0.\end{aligned}$$

By assumption \(G'(r) < 0\), so for \(0< r < R\) one should have,

$$\begin{aligned}\frac{b}{C}r + D{c_2}{r^{D - 1}} \le 0.\end{aligned}$$

Thus \({c_2} \ge - \frac{b}{{CD}}{R^{ - D + 2}}\). Hence, for \( H=0 \), one has,

$$\begin{aligned}G(r) = \frac{b}{{2C}}\left( {{r^2} + \left( { - 1 + \frac{2}{D}} \right) {R^2} - \frac{1}{D}{R^{ - D + 2}}{r^D}} \right) .\end{aligned}$$
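One can check symbolically that the family \(G(r) = \frac{b}{{2C}}{r^2} + {c_1} + {c_2}{r^D}\) solves the Euler-type ODE above for arbitrary constants \(c_1, c_2\); the particular choice of the constants made in the text is not re-derived here. A sketch of such a check:

```python
# A symbolic check (sympy) that G(r) = b/(2C) r^2 + c1 + c2 r^D, with
#   C = delta_1 + (n-1) delta_n (1 + 4K/(delta_n (n-1))) and
#   D = 1 - (n-1)(delta_n/delta_1)(1 + 4K/(delta_n (n-1))),
# solves  delta_1 G'' r^2 + (n-1) delta_n (1 + 4K/(delta_n (n-1))) G' r = b r^2
# for arbitrary constants c1, c2.
import sympy as sp

r, b, c1, c2, d1, dn, K, n = sp.symbols('r b c1 c2 delta_1 delta_n K n', positive=True)

B = (n - 1) * dn * (1 + 4 * K / (dn * (n - 1)))   # coefficient of G'/r in (6.2)
C = d1 + B
D = 1 - B / d1

G = b / (2 * C) * r**2 + c1 + c2 * r**D
ode = d1 * sp.diff(G, r, 2) * r**2 + B * sp.diff(G, r) * r - b * r**2
print(sp.simplify(ode))    # expected: 0
```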

For \( H<0 \), we have \({m_H}(r) = \left( {n - 1} \right) \sqrt{ - H} \frac{{\cosh (\sqrt{ - H} r)}}{{\sinh (\sqrt{ - H} r)}}\). Hence by (6.2) we get,

$$\begin{aligned}{\delta _1}G'' + {\delta _n}\left( {n + \frac{{4K}}{{{\delta _n}}} - 1} \right) \sqrt{ - H} \frac{{\cosh (\sqrt{ - H}\, r)}}{{\sinh (\sqrt{ - H}\, r)}}G' = b.\end{aligned}$$

Thus,

$$\begin{aligned}G(r) = \frac{b}{{{\delta _1}}}\int _r^R {\int _r^t {{{\left( {\frac{{\sinh (\sqrt{ - H} t)}}{{\sinh (\sqrt{ - H} s)}}} \right) }^{\frac{{{\delta _n}}}{{{\delta _1}}}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) }}dsdt} } .\end{aligned}$$

Now we return to the proof of the proposition. By the conditions on G and Lemma 3.8,

$$\begin{aligned}{\Delta _{A,{f^A}}}G \ge \left( {{\delta _1} - {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) } \right) G'' + {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) {\Delta _H}G.\end{aligned}$$

Define \(V: = G -U\), so

$$\begin{aligned} {{\Delta _{A,{f^A}}}V}= & {} { {\Delta _{A,{f^A}}}G - {\Delta _{A,{f^A}}}U} \\\ge & {} \left( {{\delta _1} - {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) } \right) G'' \\&+ {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) {\Delta _H}G - {\Delta _{A,{f^A}}}U \ge 0. \end{aligned}$$

Theorem 5.2 implies that the function V on \(A(y,c,R) = \left\{ {x:c< d(y,x) < R} \right\} \) takes its maximum on \(\partial B(y,c) \cup \partial B(y,R)\). But \({\left. V \right| _{\partial B(y,R)}} \le 0\) and \(\,V({y_0}) \ge 0\), so if \({y_0} \in A(y,c,R)\), there exists some \({y_1} \in \partial B(y,c)\) such that \(\,V({y_1}) \ge \,V({y_0}) \ge 0\). Since

$$\begin{aligned}U(y) - U({y_1}) \le a\,d(y,{y_1}) = ac\end{aligned}$$

and

$$\begin{aligned}0 \le V({y_1}) = G({y_1}) - U({y_1}),\end{aligned}$$

one has

$$\begin{aligned}U(y) \le ac + U({y_1}) = ac + \left( {G({y_1}) - V({y_1})} \right) \le ac + G(c).\end{aligned}$$

If \({y_0} \in B(y,c)\) then

$$\begin{aligned}U(y) = U(y) - U({y_0}) \le a{} d(y,{y_0}) \le ac \le ac + G(c).\end{aligned}$$

\(\square \)

Proposition 6.2 gives the following upper estimate for the excess function. To obtain the estimate, we recall the definition of the height function \(h(x): = dist(x,\gamma )\), where \(\gamma \) is any fixed minimal geodesic from p to q.

Theorem 6.3

Let \(Ric\left( {{\partial _r},A{\partial _r}} \right) \ge 0\), \(\left| {{f^A}} \right| \le K\) and \(h(x) \le \min \{ d(p,x),d(q,x)\}\), then

$$\begin{aligned}{e_{p,q}}(x) \le 2\left( {\frac{{{\delta _n}(n - 1) + 4K + {\delta _1}}}{{{\delta _n}(n - 1) + 4K - {\delta _1}}}} \right) {\left( {\frac{1}{2}C{h^{\frac{{{\delta _1} + {\delta _n}(n - 1) + 4K}}{{{\delta _1}}}}}} \right) ^{\frac{{{\delta _1}}}{{{\delta _n}(n - 1) + 4K}}}},\end{aligned}$$

where

$$\begin{aligned}C = \frac{{{\delta _n}(n - 1) + 4K}}{{2(n - 1)\left( {{\delta _1} + {\delta _n}(n - 1) + 4K} \right) }}\left( {\frac{1}{{d(p,x) - h(x)}} + \frac{1}{{d(q,x) - h(x)}}} \right) .\end{aligned}$$

Proof

We follow the original proof (see [38, Theorem 4.15]), so we use the extended Abresch–Gromoll quantitative maximal principle. Note that \(dil({e_{p,q}}) \le 2\). If we choose \( R=h(x) \), then for any \(y \in B(x,R)\) we have

$$\begin{aligned} {{\Delta _A}\left( {{e_{p,q}}(y)} \right) }\le & {} {{\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \left( {\frac{1}{{d(p,y)}} + \frac{1}{{d(q,y)}}} \right) } \\\le & {} {{\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \left( {\frac{1}{{d(p,x) - h(x)}} + \frac{1}{{d(q,x) - h(x)}}} \right) .} \end{aligned}$$

By choosing \( R=h(x) \) and \(b: = {\delta _n}\left( {1 + \frac{{4K}}{{{\delta _n}(n - 1)}}} \right) \left( {\frac{1}{{d(p,x) - h(x)}} + \frac{1}{{d(q,x) - h(x)}}} \right) \), the conditions of Proposition 6.2 are satisfied. So

$$\begin{aligned}{e_{p,q}}(x) \le \mathop {\min }\limits _{0 \le r \le R} \left( {2r + G(r)} \right) .\end{aligned}$$

The function \({2r + G(r)}\) is convex for \(0< r < R\); hence its minimum is attained at the unique point \(r_0\), where \(0< {r_0} < R\) and \(2 + G'({r_0})=0\). Thus we conclude,

$$\begin{aligned} 2r_0^{1 - D} + \frac{b}{{2C}}\left( {2r_0^{2 - D} - {R^{2 - D}}} \right) = 0, \end{aligned}$$
(6.3)

where C, D are defined in Proposition 6.2. By (6.3) we get,

$$\begin{aligned}r_0^{} \le {\left( {\frac{b}{{4C}}{R^{2 - D}}} \right) ^{\frac{1}{{1 - D}}}}.\end{aligned}$$

Consequently,

$$\begin{aligned} {{e_{p,q}}(x)}\le & {} {2{r_0} + G({r_0}) = \left( {1 - \frac{2}{D}} \right) \left[ {2{r_0} + \frac{b}{C}\left( {r_0^2 - {R^2}} \right) } \right] \le 2\left( {1 - \frac{2}{D}} \right) {r_0}} \\\le & {} {2\left( {1 - \frac{2}{D}} \right) {{\left( {\frac{b}{{4C}}{R^{2 - D}}} \right) }^{\frac{1}{{1 - D}}}}.} \end{aligned}$$

\(\square \)

Applying the estimate of the excess function in Theorem 6.3 gives an extension of the theorems of Abresch–Gromoll [1] and Sormani [27] as follows (see [31]).

Theorem 6.4

Let M be a complete non-compact manifold with \(Ric\left( {{\partial _r},A{\partial _r}} \right) \ge 0\). Then,

  1. (a)

    If M has bounded diameter growth and its sectional curvature is bounded below, then it has finite topological type, i.e., it is homeomorphic to the interior of a compact manifold with boundary.

  2. (b)

    If it has sub-linear diameter growth, then its fundamental group is finitely generated.

7 Number of Ends

In this section we give an estimate for the number of ends of a complete Riemannian manifold M under some restrictions on the extended Ricci tensor. Our approach is similar to that of Cai [8]. In fact, Cai in [8] estimated the number of ends of a non-compact manifold whose Ricci curvature is non-negative outside a compact set, in terms of a lower bound of the Ricci curvature on the compact set and the diameter of the set. Recently, Wu applied this method to get an upper estimate for the number of ends of a weighted manifold under similar conditions on the Bakry–Emery Ricci tensor \(Ri{c_f}\) and some conditions on the energy function [32]. First we recall the definition of an end of a manifold.

Definition 7.1

[8]. Let \({\gamma _1},{\gamma _2}\) be two rays starting from a fixed point \(p \in M\). \({\gamma _1},{\gamma _2}\) are co-final if for each \(R > 0\) and any \(t \ge R\), \({\gamma _1}(t)\) and \({\gamma _2}(t)\) are in the same component of \(M\backslash B(p,R)\). Each equivalence class of co-final rays is called an end of M. The end containing the ray \(\gamma \) is denoted by \(\left[ \gamma \right] \).

To get the estimate, we extend the following lemmas to the extended Ricci tensor. The proofs are similar to the corresponding proofs for the Ricci tensor [8] or the weighted Ricci tensor [32], so they are omitted.

Lemma 7.2

Let N be a \(\delta -\)tubular neighborhood of a line \( \gamma \). Suppose that from every point p in N, there are asymptotic rays to \({\gamma ^ \pm }\) such that \(Ri{c_{-Trace(A)}} (X,AX) \ge 0\) on both asymptotic rays. Then through every point in N there is a line \(\alpha \) which, if parameterized properly, satisfies

$$\begin{aligned}b_\gamma ^ + ({\alpha ^ + }(t)) = t\,\,\,\,\,\,\,\,and\,\,\,\,\,\,b_\gamma ^ - ({\alpha ^ - }(t)) = t.\end{aligned}$$

Lemma 7.3

With the same assumptions as in Theorem 1.6, M can not admit a line \(\gamma \) with the following property

$$\begin{aligned}d(\gamma (t),B(x_0,r)) \ge \left| t \right| + 2R\,\,\,\,\,\,\,for\,all\,\,t.\end{aligned}$$

Similar to [8] the following Proposition can be obtained.

Proposition 7.4

With the same assumption as in Theorem 1.6, if \(\left[ {{\gamma _1}} \right] \) and \(\left[ {{\gamma _2}} \right] \) are two different ends of M, then \(d({\gamma _1}(4R),{\gamma _2}(4R)) > 2R\).

Proof of Theorem 1.6

Let \(\left[ {{\gamma _1}} \right] ,\ldots ,\left[ {{\gamma _k}} \right] \) be k distinct ends of M, where \({\gamma _1},\ldots ,{\gamma _k}\) are rays from the fixed point p. Let \(\left\{ {{p_j}} \right\} _{j = 1}^L\) be a maximal set of points on \(\partial B\left( {p,4R} \right) \) such that the balls \(B\left( {{p_j},\frac{R}{2}} \right) \) are disjoint. As mentioned in [8, 32], Proposition 7.4 implies that \(k \le L\).

By considering

$$\begin{aligned} B\left( {p_j},\frac{R}{2}\right) \subseteq B\left( p,\frac{{9R}}{2}\right) \subseteq B\left( {p_j},\frac{{17R}}{2}\right) ,\end{aligned}$$

we get

$$\begin{aligned}N(A,M,R) \le \frac{{vo{l^A}\left( {B({p_j},\frac{{17R}}{2})} \right) }}{{vo{l^A}\left( {B({p_j},\frac{R}{2})} \right) }},\end{aligned}$$

and with the extended volume comparison Theorem 1.4, the following estimate is obtained,

$$\begin{aligned}N(A,M,R) \le \frac{{vol_{{h^A}}^A\left( {B({p_j},\frac{{17R}}{2})} \right) }}{{vol_{{h^A}}^A\left( {B({p_j},\frac{R}{2})} \right) }} \le {e^{17R/2}}\frac{{vol_{ - H}^{m'}B(17R/2)}}{{vol_{ - H}^{m'}B(R/2)}},\end{aligned}$$

where

$$\begin{aligned}\begin{array}{*{20}{c}} {m' = \left[ {\frac{{{\delta _n}(n - 1) + 4({K_1} + {{K'}_1})}}{{{\delta _1}}}} \right] + 2,} \\ {{K_1} = \mathop {\sup }\limits _{x \in B({p_j},17R/2)} \left| {{f^A}\left( x \right) } \right| ,} \\ {{{K'}_1}: = \mathop {\sup }\limits _{x \in B({p_j},17R/2)} \left| {Trace(A)\left( x \right) } \right| .} \\ \end{array}\end{aligned}$$

But for all j, we have \(B({p_j},17R/2) \subseteq B(p,25R/2)\), so the result follows by the following inequality,

$$\begin{aligned} \frac{{\int _0^{\alpha r} {{{\sinh }^{m - 1}}(\beta t)dt} }}{{\int _0^r {{{\sinh }^{m - 1}}(\beta t )dt} }} \le \frac{{2m}}{{m - 1}}{\left( {\beta r} \right) ^{ - m}}\exp \left( {\alpha \left( {m - 1} \right) \beta r} \right) . \end{aligned}$$

\(\square \)
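The final inequality can be spot-checked numerically. The following sketch (with arbitrary sample values of \(m, \beta , r\) and \(\alpha \); a numerical check, not a proof) prints both sides for comparison.

```python
# A numerical spot-check (not a proof) of the final inequality, at a few sample
# values of m, beta, r and alpha; both sides are printed for comparison.
import numpy as np
from scipy.integrate import quad

def lhs(m, beta, r, alpha):
    num, _ = quad(lambda t: np.sinh(beta * t) ** (m - 1), 0.0, alpha * r)
    den, _ = quad(lambda t: np.sinh(beta * t) ** (m - 1), 0.0, r)
    return num / den

def rhs(m, beta, r, alpha):
    return 2.0 * m / (m - 1) * (beta * r) ** (-m) * np.exp(alpha * (m - 1) * beta * r)

for (m, beta, r, alpha) in [(3, 1.0, 0.5, 17.0), (4, 0.5, 1.0, 2.0), (5, 2.0, 0.1, 2.0)]:
    print(m, beta, r, alpha, lhs(m, beta, r, alpha), rhs(m, beta, r, alpha))
```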