1 Introduction and main results

In a series of works [9,10,11], Grigoryan, Lin and Yang systematically proposed and studied several difference equations on graphs. In particular, the Kazdan-Warner equation, the Yamabe equation, and some semilinear equations were discussed on finite or locally finite graphs, and a functional framework was established to solve these problems. Since calculations on graphs largely parallel those on Euclidean space or Riemannian manifolds, such difference equations are still called partial differential equations. Since then, the study of partial differential equations on graphs has received a lot of attention from many researchers; see, for example, [4,5,6,7,8, 13, 19,20,21,22, 29, 30] and the references therein.

In the Euclidean space \({\mathbb {R}}^m\), the following semilinear heat equation

$$\begin{aligned} \partial _t u=\Delta u + u^{1+\alpha }, \quad \quad u(0,x)=a(x) \end{aligned}$$

has been considered by many authors [3, 14, 17, 24, 25]. One of the most celebrated results is due to Fujita [3], who showed that all non-negative solutions of the above equation blow up in a finite time if \(0<\alpha <\frac{2}{m}\), whereas a global solution exists for sufficiently small initial values if \(\alpha >\frac{2}{m}\). The value \(\alpha =\frac{2}{m}\) is called Fujita’s critical exponent. Several authors independently showed that the critical exponent belongs to the blow-up case in \({\mathbb {R}}^m\) [14, 17]. In [18], jointly with Lin, the author extended Fujita’s results to the case of locally finite graphs, and they studied the existence and nonexistence of global solutions for the following semilinear heat equation

$$\begin{aligned} \left\{ \begin{array}{lc} \partial _t u=\Delta u + u^{1+\alpha }, &{}\, {(t,x) \in (0,+\infty )\times V,}\\ u(0,x)=a(x), &{}\, {x \in V,} \end{array} \right. \end{aligned}$$
(1.1)

where \(\alpha \) is a positive parameter and a(x) is bounded, non-negative and non-trivial in V. In particular, under the curvature dimension condition \(CDE'(n,0)\) and uniform polynomial volume growth of degree m, they proved that if \(0<\alpha <\frac{2}{m}\), then all non-negative solutions of (1.1) blow up in a finite time, and that if \(\alpha >\frac{2}{m}\), then (1.1) has a non-negative global solution for any non-negative initial value dominated by a sufficiently small Gaussian. Besides, under the condition of polynomial volume growth of degree m, the author [27] proved that if \(0<\alpha < \frac{2}{m}\), then there is no non-negative global solution of (1.1). Evidently, our results in [18, 27] do not include the case of the critical exponent \(\alpha =\frac{2}{m}\). As far as we know, only limited progress has been made on this problem. In fact, Wu [28] proved the blow-up phenomenon for \(\alpha =\frac{2}{m}\) on finite graphs, but whether \(\alpha =\frac{2}{m}\) belongs to the blow-up case on locally finite graphs remains open. Studying equations on locally finite graphs is much more difficult than on finite graphs: since the vertex set is infinite, the convergence of various series has to be verified, and the diameter of a locally finite graph is generally infinite, which brings further difficulties in dealing with the corresponding problem.

In this paper, we study the blow-up problem for (1.1) on locally finite graphs. We first establish the blow-up phenomenon of (1.1) for \(\alpha =\frac{2}{m}\) on locally finite graphs satisfying uniform polynomial volume growth of degree m. Then we relax the volume growth condition and strengthen the assumption on the initial value of (1.1) to obtain the blow-up result for \(0<\alpha \le \frac{2}{m}\). Our main results are as follows:

Theorem 1.1

Suppose that \(D_\mu ,D_\omega <\infty \) and that G satisfies the curvature condition \(CDE'(n,0)\) and the volume growth condition c.1(m) below. If \(\alpha =\frac{2}{m}\), then all non-negative solutions of (1.1) blow up in a finite time.

Condition c.1(m). There exist \(c>0\) and \(x_0\in V\) with \(a(x_0)>0\) such that the inequality

$$\begin{aligned} c^{-1}r^m\le {\mathcal {V}}(x_0,r) \le cr^m \quad \quad (m>0) \end{aligned}$$

holds for all large enough \(r\ge r_0>0\). The condition c.1(m) is called uniform polynomial volume growth of degree m.

Theorem 1.2

Suppose that \(D_\mu ,D_\omega <\infty \) and that G satisfies the curvature condition \(CDE'(n,0)\) and the volume growth condition c.2(\(m, \zeta , \eta \)) below. If \(0<\alpha \le \frac{2}{m}\) and \(\inf _{x\in V} a(x)=a_0>0\), then all non-negative solutions of (1.1) blow up in a finite time.

Condition c.2(\(m, \zeta , \eta \)). There exist \(c',c''>0\) and \(x_0\in V\) such that the inequality

$$\begin{aligned} c'r^m \log ^{-\zeta } r \le {\mathcal {V}}(x_0,r)\le c''r^m \log ^\eta r \quad \quad (m>0,\,\, \zeta ,\eta \ge 0 ) \end{aligned}$$

holds for all large enough \(r\ge r_0>1\).

The remainder of this paper is organized as follows. In Sect. 2, we introduce some notations and related results on graphs which are essential for proving our main results. In Sect. 3, we give an auxiliary result representing the solution of (1.1) by an integral equation involving the heat kernel. In Sects. 4 and 5, we prove Theorems 1.1 and 1.2, respectively. Finally, in Sect. 6 we discuss a special case of Theorem 1.2 and explain how our results complement earlier work on this topic.

2 Preliminaries

2.1 Weighted graphs

Let \(G=(V,E)\) be a locally finite, connected graph without loops or multiple edges. Here V is the vertex set and E is the edge set that can be viewed as a symmetric subset of \(V\times V\). We write \(y\sim x\) if an edge connects vertices x and y.

We allow the edges on the graph to be weighted. Let \(\omega : V \times V\rightarrow [0,\infty )\) be an edge weight function that satisfies \(\omega _{xy}=\omega _{yx}\) for all \(x,y \in V\) and \(\omega _{xy}>0\) if and only if \(x\sim y\). Moreover, we write \(m(x):=\sum _{y\sim x}\omega _{xy}\) and assume that this weight function satisfies \(\omega _{\min }:=\inf _{(x,y)\in E} \omega _{xy}>0\). Let \(\mu : V\rightarrow (0,\infty )\) be a positive measure on V and \(\mu _{\max }:=\sup _{x\in V} \mu (x)\). Given a weight and a measure, we define

$$\begin{aligned} D_\omega :=\frac{\mu _{\max }}{\omega _{\min }} \end{aligned}$$

and

$$\begin{aligned} D_\mu :=\sup _{x\in V}\frac{m(x)}{\mu (x)}. \end{aligned}$$

For any \(x,y\in V\), the distance \(d(x,y)\) between x and y is the number of edges in a shortest path connecting x and y. The connectedness of G ensures that \(d(x,y)<\infty \) for any two vertices x and y. For a given vertex \(x_0\in V\), let \(B_r=B_r(x_0)\) denote the connected subgraph of G consisting of those vertices of G that are at most distance r from \(x_0\), together with all edges of G that such vertices span. Local finiteness implies that the number of vertices of \(B_r\) is finite. Further, the volume of \(B_r\) is written as \({\mathcal {V}}(x_0,r)\) and defined by \({\mathcal {V}}(x_0,r)=\sum _{y\in B_r} \mu (y)\).
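To make these combinatorial notions concrete, the following short Python sketch (ours, purely illustrative; the toy path graph, the measure \(\mu \) and all identifiers are hypothetical choices) computes the distance \(d(x,y)\) by breadth-first search and the volume \({\mathcal {V}}(x_0,r)\) of a ball.

```python
from collections import deque

# Toy weighted graph: a path on vertices 0..4 with unit edge weights and
# counting measure mu(x) = 1.  All of these choices are illustrative only.
V = [0, 1, 2, 3, 4]
omega = {(x, y): 1.0 for x in V for y in V if abs(x - y) == 1}
mu = {x: 1.0 for x in V}

def neighbors(x):
    return [y for (xx, y) in omega if xx == x]

def dist(x, y):
    """d(x, y): number of edges in a shortest path from x to y (BFS)."""
    seen, queue = {x: 0}, deque([x])
    while queue:
        z = queue.popleft()
        if z == y:
            return seen[z]
        for w in neighbors(z):
            if w not in seen:
                seen[w] = seen[z] + 1
                queue.append(w)
    return float("inf")  # only possible on a disconnected graph

def volume(x0, r):
    """V(x0, r): sum of mu(y) over the ball B_r(x0)."""
    return sum(mu[y] for y in V if dist(x0, y) <= r)

print(dist(0, 3), volume(0, 2))  # 3 and 3.0 on this path
```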

2.2 Laplace operators on graphs

Let C(V) be the set of real-valued functions on V. We denote by

$$\begin{aligned} \ell ^p(V,\mu )=\left\{ f\in C(V):\sum _{x \in V} \mu (x)|f(x)|^p<\infty \right\} , \,\, 1\le p<\infty , \end{aligned}$$

the set of \(\ell ^p\) integrable functions on V with respect to the measure \(\mu \). For \(p=\infty \), let

$$\begin{aligned} \ell ^{\infty }(V,\mu )=\left\{ f\in C(V):\sup _{x\in V}|f(x)|<\infty \right\} \end{aligned}$$

be the set of bounded functions.

We define the \(\mu \)-Laplacian \(\Delta :C(V)\rightarrow C(V)\) by

$$\begin{aligned} \Delta f(x)=\frac{1}{\mu (x)}\sum _{y\sim x}\omega _{xy}\left( f(y)-f(x)\right) , \,\, x\in V. \end{aligned}$$

It is well-known that \(D_\mu <\infty \) is equivalent to the \(\mu \)-Laplacian \(\Delta \) being bounded on \(\ell ^p(V,\mu )\) for all \(p\in [1,\infty ]\) (see [12]).
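As a concrete illustration (ours, on an arbitrarily chosen toy graph), the \(\mu \)-Laplacian can be evaluated directly from the definition above; the sketch below assumes Python and simply computes \(\Delta f(x)\) vertex by vertex.

```python
# mu-Laplacian on a 5-vertex path with unit weights and counting measure
# (the graph and the test function are illustrative choices only).
V = [0, 1, 2, 3, 4]
omega = {(x, y): 1.0 for x in V for y in V if abs(x - y) == 1}
mu = {x: 1.0 for x in V}

def laplacian(f, x):
    return sum(w * (f[y] - f[x]) for (xx, y), w in omega.items() if xx == x) / mu[x]

f = {x: float(x * x) for x in V}     # f(x) = x^2
print([laplacian(f, x) for x in V])  # [1.0, 2.0, 2.0, 2.0, -7.0]
```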

Let \(C(B_r,\partial B_r)=\left\{ f\in C(B_r):f|_{\partial B_r}=0 \right\} \) denote the set of functions on \(B_r\) which vanish on \(\partial B_r\), where \(\partial B_r\) is the boundary of \(B_r\), i.e., \(\partial B_r=\left\{ x\in B_r: \text {there is }y\sim x \text { such that }y\notin B_r \right\} .\) The interior of \(B_r\) is denoted by \(\mathring{B_r}=B_r\setminus \partial B_r\). The Dirichlet Laplacian \(\Delta _r\) is defined as

$$\begin{aligned} \Delta _r f= \left\{ \begin{array}{lcc} \Delta f&{}\quad \text {for}\, x\in \mathring{B_r} \\ 0&{}\quad \text {otherwise} \end{array} \right. \end{aligned}$$

for any \(f\in C(B_r,\partial B_r)\).

2.3 The heat kernel on graphs

Definition 2.1

[26] The heat kernel \(p_r\) of \(\Delta _r\) with Dirichlet boundary conditions is given by

$$\begin{aligned} p_r(t,x,y)=e^{t\Delta _r}\delta _x(y) \,\,\,\,\, \hbox {for } x,y\in B_r \hbox { and } t\ge 0. \end{aligned}$$

The maximum principle implies that the heat kernel is monotone in r, that is, \(p_{r}(t,x,y)\le p_{r+1}(t,x,y)\). Moreover, \(p_r(t,x,y)\) satisfies the following properties (see [26]):

Proposition 2.1

For \(t,s>0\) and \(x,y\in B_r\), we have

  1. (i)

    \(\partial _t p_r(t,x,y)=\Delta _r p_r(t,x,y)\), where \(\Delta _r\) denotes the Dirichlet Laplacian applied in either x or y;

  2. (ii)

    \(p_r(t,x,y)=0\) if \(x\in \partial B_r\) or \(y\in \partial B_r\);

  3. (iii)

    \(p_r(t,x,y)=p_r(t,y,x)\);

  4. (iv)

    \(p_r(t,x,y)\ge 0\);

  5. (v)

    \(\sum _{y\in B_r}\mu (y)p_r(t,x,y)< 1\);

  6. (vi)

    \(\sum _{z\in B_r}\mu (z)p_r(t,x,z)p_r(s,z,y)=p_r(t+s,x,y)\).
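The following numerical sketch (ours; the small ball, the weights, the measure and the time values are arbitrary toy choices, and NumPy/SciPy are assumed) realizes the Dirichlet Laplacian \(\Delta _r\) on the interior of a ball as a matrix, computes \(p_r(t,x,y)\) through a matrix exponential, and checks properties (iii)-(vi) of Proposition 2.1 numerically.

```python
import numpy as np
from scipy.linalg import expm

# Toy ball B_r: a path on vertices 0..6 with unit weights; boundary {0, 6},
# interior {1, ..., 5}; a non-constant measure mu chosen arbitrarily.
n = 7
mu = 1.0 + 0.5 * np.arange(n)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
interior = np.arange(1, n - 1)

# Dirichlet Laplacian on the interior: off-diagonal entries omega_{xy}/mu(x) for
# interior neighbours, diagonal -m(x)/mu(x), where m(x) counts all neighbours of x.
L = (W / mu[:, None] - np.diag(W.sum(axis=1) / mu))[np.ix_(interior, interior)]
mu_int = mu[interior]

def p_r(t):
    """Matrix (p_r(t, x, y)) over interior x, y; entries involving the boundary are 0."""
    return np.diag(1.0 / mu_int) @ expm(t * L).T

t, s = 0.7, 1.3
Pt, Ps, Pts = p_r(t), p_r(s), p_r(t + s)
print(np.allclose(Pt, Pt.T))                        # (iii) symmetry
print((Pt >= 0).all())                              # (iv) non-negativity
print((Pt @ mu_int < 1).all())                      # (v) sub-probability
print(np.allclose(Pt @ np.diag(mu_int) @ Ps, Pts))  # (vi) semigroup identity
```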

The heat kernel for an infinite graph can be constructed by taking an exhaustion of the graph. An exhaustion of G is a sequence \(U_1, U_2, \ldots , U_r, \ldots \) with \(U_r\subset U_{r+1}\) and \(\bigcup _{r\in {\mathbb {N}}}U_r=V\) (see [23, 26]). The connectedness of G implies \(\bigcup _{r\in {\mathbb {N}}}B_r=V\), so we can take \(U_r=B_r\).

Definition 2.2

( [23, 26]) For any \(t>0\), \(x,y\in V\), we define

$$\begin{aligned} p(t,x,y)=\lim _{r\rightarrow \infty }p_r(t,x,y). \end{aligned}$$
(2.1)

Dini’s Theorem implies that the convergence of (2.1) is uniform in t on every compact subset of \([0,\infty )\). Also, it is known that \(p(t,x,y)\) is differentiable in t, satisfies the heat equation \(\partial _t u=\Delta u\), and has the following properties (see [23, 26]):

Proposition 2.2

For \(t,s>0\) and any \(x,y\in V\), we have

  1. (i)

    \(\partial _t p(t,x,y)=\Delta p(t,x,y)\), where \(\Delta \) denotes the \(\mu \)-Laplacian applied in either x or y;

  2. (ii)

    \(p(t,x,y)=p(t,y,x)\);

  3. (iii)

    \(p(t,x,y)> 0\);

  4. (iv)

    \(\sum _{y\in V}\mu (y)p(t,x,y)\le 1\);

  5. (v)

    \(\sum _{z\in V}\mu (z)p(t,x,z)p(s,z,y)=p(t+s,x,y)\);

  6. (vi)

    \(p(t,x,y)\) is independent of the exhaustion used to define it.

Definition 2.3

A graph G is stochastically complete if

$$\begin{aligned} \sum _{y\in V}\mu (y)p(t,x,y)=1 \end{aligned}$$

for any \(t>0\) and \(x\in V\).

The heat kernel generates a bounded solution of the heat equation on G for any bounded initial condition. Precisely, for any bounded function \(u_0\in C(V)\), the function

$$\begin{aligned} u(t,x)=\sum _{y\in V}\mu (y)p(t,x,y)u_0(y) \quad (t>0, \,\, x\in V) \end{aligned}$$

is bounded and differentiable in t, satisfies \(\partial _tu=\Delta u\) and \(\lim _{t\rightarrow 0^+}u(t,x)=u_0(x)\) for any \(x\in V\).

We define

$$\begin{aligned} P_t f(x)=\sum _{y\in V}\mu (y)p(t,x,y)f(y) \end{aligned}$$

for any bounded function \(f\in C(V)\); then \(P_t f(x)\) is a bounded solution of the heat equation, and the series defining \(P_tf(x)\) converges uniformly in \(t\in [0, T]\) for any \(T>0\) (see [18]).

\(P_t\) is called the heat semigroup of the Laplace operator. Furthermore, the different definitions of the heat semigroup coincide when \(\Delta \) is a bounded operator, that is,

$$\begin{aligned} P_tf(x)=e^{t\Delta }f(x)=\sum _{y\in V}\mu (y)p(t,x,y)f(y). \end{aligned}$$

In the following proposition, we transcribe some useful properties of the heat semigroup \(P_t\).

Proposition 2.3

[15] For any bounded function \(f\in C(V)\) and \(t,s>0\), \(x\in V\), we have

  1. (i)

    \(P_tP_sf(x)=P_{t+s}f(x)\);

  2. (ii)

    \(\Delta P_tf(x)=P_t\Delta f(x)\);

  3. (iii)

    \(\lim _{t\rightarrow 0^+}P_t f(x)=P_0f(x)=f(x)\).

2.4 Curvature dimension condition and Gaussian heat kernel estimate

We recall the definitions of two natural bilinear forms associated with the \(\mu \)-Laplacian.

Definition 2.4

[1] The gradient form \(\Gamma \) is defined by

$$\begin{aligned} \begin{aligned} 2\Gamma (f,g)(x)=&(\Delta (fg)-f\Delta (g)-\Delta (f)g)(x)\\ =&\frac{1}{\mu (x)}\sum _{y\sim x}\omega _{xy}(f(y)-f(x))(g(y)-g(x)), \quad \quad f,g\in C(V). \end{aligned} \end{aligned}$$

The iterated gradient form \(\Gamma _2\) is defined by

$$\begin{aligned} 2\Gamma _2(f,g)(x)=(\Delta \Gamma (f,g)-\Gamma (f,\Delta g)-\Gamma (\Delta f,g))(x), \quad \quad f,g\in C(V). \end{aligned}$$

For simplicity, we write \(\Gamma (f)=\Gamma (f,f)\) and \(\Gamma _2(f)=\Gamma _2(f,f).\)
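As a quick illustration (ours, on an arbitrary toy graph), both forms can be evaluated directly from these definitions via the \(\mu \)-Laplacian; for \(f(x)=x\) on a path with unit weights, the sketch below reproduces the expected value \(\Gamma (f)=1\) at interior vertices.

```python
# Gamma and Gamma_2 computed from their definitions on a 5-vertex path with
# unit weights and counting measure (all choices are illustrative).
V = [0, 1, 2, 3, 4]
omega = {(x, y): 1.0 for x in V for y in V if abs(x - y) == 1}
mu = {x: 1.0 for x in V}

def lap(f):
    return {x: sum(w * (f[y] - f[x]) for (xx, y), w in omega.items() if xx == x) / mu[x]
            for x in V}

def gamma(f, g):
    # 2 Gamma(f, g) = Delta(fg) - f Delta(g) - Delta(f) g
    fg = {x: f[x] * g[x] for x in V}
    lf, lg, lfg = lap(f), lap(g), lap(fg)
    return {x: 0.5 * (lfg[x] - f[x] * lg[x] - lf[x] * g[x]) for x in V}

def gamma2(f, g):
    # 2 Gamma_2(f, g) = Delta Gamma(f, g) - Gamma(f, Delta g) - Gamma(Delta f, g)
    lf, lg = lap(f), lap(g)
    lG, g1, g2 = lap(gamma(f, g)), gamma(f, lg), gamma(lf, g)
    return {x: 0.5 * (lG[x] - g1[x] - g2[x]) for x in V}

f = {x: float(x) for x in V}  # f(x) = x
print(gamma(f, f))            # 1 at interior vertices, 1/2 at the two endpoints
print(gamma2(f, f))
```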

Using the bilinear forms above, one can define the following exponential curvature dimension inequality.

Definition 2.5

[2] We say that a graph G satisfies the exponential curvature dimension inequality \(CDE'(x,n,K)\), if for any positive function \(f\in C(V)\), we have

$$\begin{aligned} \Gamma _2(f)(x)-\Gamma \Bigg (f,\frac{\Gamma (f)}{f}\Bigg )(x)\ge \frac{1}{n}f(x)^2(\Delta \log f)(x)^2+K\Gamma (f)(x). \end{aligned}$$

We say that \(CDE'(n,K)\) is satisfied if \(CDE'(x,n,K)\) is satisfied for all \(x\in V\).

With the help of the curvature dimension condition \(CDE'(n,0)\), Horn et al. [15] derived the following Gaussian heat kernel estimates:

Proposition 2.4

Suppose that \(D_\mu , D_\omega <\infty \) and G satisfies \(CDE'(n,0)\). Then there exists a positive constant \(c_1\) such that, for any \(x,y \in V\) and \(t>0\),

$$\begin{aligned} p(t,x,y)\le \frac{c_1}{{\mathcal {V}}(x,\sqrt{t})}. \end{aligned}$$

Furthermore, for any \(t_0>0\), there exist positive constants \(c_2\) and \(c_3\) such that

$$\begin{aligned} p(t,x,y)\ge \frac{c_2}{{\mathcal {V}}(x,\sqrt{t})}\exp \left( -c_3\frac{d^2(x,y)}{t}\right) \end{aligned}$$

for all \(x,y\in V\) and \(t>t_0\), where \(c_1\) depends on \(n,D_\omega ,D_\mu \) and is denoted by \(c_1=c_1(n,D_\omega ,D_\mu )\), \(c_2\) depends on n and is denoted by \(c_2=c_2(n)\), and \(c_3\) depends on \(n,D_\omega ,D_\mu \) and is denoted by \(c_3=c_3(n,D_\omega ,D_\mu )\). In particular, for any \(t_0>0\), we have

$$\begin{aligned} p(t,x,x)\ge \frac{c_2(n)}{{\mathcal {V}}(x,\sqrt{t})} \end{aligned}$$

for all \(x\in V\) and \(t>t_0\), where \(c_2(n)\) is positive.

3 Auxiliary result

In order to prove our main results, we need an auxiliary result, which provides a representation of the solution of (1.1) via an integral equation involving the heat kernel. We first introduce the definition of a solution of Eq. (1.1) on graphs.

Definition 3.1

Let T be a positive number. A non-negative function \(u=u(t,x)\) is said to be a non-negative solution of (1.1) in [0, T] if u is continuous with respect to t and satisfies (1.1) in \([0,T]\times V\).

Let \(T_\infty \) denote the supremum of all T such that for any \(T'<T\) the solution \(u(t,x)\) of (1.1) is bounded in \([0,T']\times V\).

Definition 3.2

The solution \(u(t,x)\) of (1.1) is said to blow up in a finite time provided that \(T_\infty <\infty \). The solution \(u(t,x)\) of (1.1) is global if \(T_\infty =\infty \).

Next, we employ the heat kernel to represent the solution of (1.1).

Theorem 3.1

Suppose that \(D_\mu <\infty \). For any \(0<T<T_{\infty }\), the bounded and non-negative solution u of (1.1) satisfies

$$\begin{aligned} u(t,x)=P_t a(x)+\int _0^t P_{t-s}u(s,x)^{1+\alpha }ds \quad \quad (0<t<T, \; x\in V). \end{aligned}$$
(3.1)

Let us introduce two lemmas before starting the proof of Theorem 3.1.

Lemma 3.1

[26] The following statements are equivalent:

  1. (i)

    The graph G is stochastically incomplete;

  2. (ii)

    There exists a non-zero, bounded function u satisfying, for some \(T>0\),

    $$\begin{aligned} \left\{ \begin{array}{lc} \partial _tu=\Delta u, &{}\, {(t,x) \in (0,T)\times V,}\\ u(0,x)=0, &{}\, {x \in V.} \end{array} \right. \end{aligned}$$
    (3.2)

Lemma 3.2

[16] The graph G is stochastically complete if \(D_\mu <\infty \).

Proof of Theorem 3.1

We consider the following Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{lc} \partial _t v=\Delta v + g(t,x), &{}\, {(t,x) \in (0,T)\times V,}\\ v(0,x)=a(x), &{}\, {x \in V,} \end{array} \right. \end{aligned}$$
(3.3)

where g is bounded and continuous with respect to t in [0, T).

Set

$$\begin{aligned} v_1(t,x)=\int _0^t P_{t-s}g(s,x)ds \quad \quad (0\le t<T, \; x\in V). \end{aligned}$$

Since \(P_{t-s}g(s,x)\) and \(\partial _tP_{t-s}g(s,x)\) are continuous with respect to t and s, differentiating \(v_1(t,x)\) with respect to t gives

$$\begin{aligned} \begin{aligned} \partial _tv_1(t,x)&=\int _0^{t}\partial _t P_{t-s}g(s,x)ds+P_0 g(t,x)\\&=\int _0^{t} \Delta P_{t-s}g(s,x)ds+P_0 g(t,x)\\&=\Delta v_1(t,x)+g(t,x). \end{aligned} \end{aligned}$$

Letting \(v(t,x)=P_t a(x)+v_1(t,x)\), it follows that for any \(t\in (0,T)\) and \(x\in V\),

$$\begin{aligned} \begin{aligned} \partial _t v(t,x)=\partial _tP_t a(x)+\partial _tv_1(t,x)=\Delta P_t a(x)+\Delta v_1(t,x)+g(t,x)=\Delta v(t,x)+g(t,x) \end{aligned} \end{aligned}$$

and \(v(0,x)=a(x)\). Thus, \(v(t,x)\) is a solution of (3.3).

On the other hand, by the assumption \(D_\mu <\infty \) in Theorem 3.1, we deduce from Lemmas 3.1 and 3.2 that Eq. (3.2) admits only the zero bounded solution. Since the difference of two bounded solutions of (3.3) satisfies (3.2), the bounded solution of (3.3) is unique and satisfies

$$\begin{aligned} v(t,x)=P_t a(x)+v_1(t,x)=P_t a(x)+\int _0^t P_{t-s}g(s,x)ds \;\quad (0<t<T, \; x\in V). \end{aligned}$$
(3.4)

For an arbitrary bounded and non-negative solution \(u(t,x)\) of (1.1), we take \(g(t,x)=u(t,x)^{1+\alpha }\) in (3.3). By (3.4), we deduce that \(P_t a(x)+\int _0^t P_{t-s}u(s,x)^{1+\alpha }ds\) is the unique bounded solution of (3.3) for \(g(t,x)=u(t,x)^{1+\alpha }\).

In view of \(\partial _t u=\Delta u+u^{1+\alpha }\) and \(u(0,x)=a(x)\), we see that \(u(t,x)\) is a bounded solution of (3.3) with \(g(t,x)=u(t,x)^{1+\alpha }\). Therefore, we have for any \((t,x)\in (0,T)\times V\),

$$\begin{aligned} \begin{aligned} u(t,x)&=P_t a(x)+\int _0^t P_{t-s}u(s,x)^{1+\alpha }ds. \end{aligned} \end{aligned}$$

This completes the proof of Theorem 3.1. \(\square \)
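The representation (3.1) is easy to test numerically on a finite graph, where \(D_\mu <\infty \) holds trivially. The sketch below (ours; the graph, the initial value, the exponent, the time horizon and the tolerances are arbitrary choices, and NumPy/SciPy are assumed) integrates \(\partial _t u=\Delta u+u^{1+\alpha }\) directly and compares the result with \(P_t a+\int _0^t P_{t-s}u(s,\cdot )^{1+\alpha }\,ds\) evaluated by the trapezoidal rule.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Semilinear heat equation on a 5-vertex path with counting measure.
n, alpha = 5, 0.5
mu = np.ones(n)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = W / mu[:, None] - np.diag(W.sum(axis=1) / mu)  # mu-Laplacian as a matrix

a = np.array([1.0, 0.2, 0.1, 0.3, 0.1])            # bounded, strictly positive initial value
T = 0.5                                            # short horizon, well before any blow-up

sol = solve_ivp(lambda t, u: L @ u + u ** (1 + alpha), (0.0, T), a,
                dense_output=True, rtol=1e-10, atol=1e-12)

def P(t, f):                                       # heat semigroup P_t f = e^{t Delta} f
    return expm(t * L) @ f

t = T
s_grid = np.linspace(0.0, t, 201)
vals = np.array([P(t - s, sol.sol(s) ** (1 + alpha)) for s in s_grid])
ds = s_grid[1] - s_grid[0]
duhamel = P(t, a) + 0.5 * ds * (vals[1:] + vals[:-1]).sum(axis=0)
print(np.max(np.abs(sol.sol(t) - duhamel)))        # small, up to ODE/quadrature error
```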

4 Proof of Theorem 1.1

Since a(x) is non-trivial in V, we may assume \(a(x_0)>0\) with \(x_0\in V\). In what follows we fix the vertex \(x_0\in V\) and let \(B_r=B_r(x_0)\).

We first prove two lemmas below.

Lemma 4.1

Suppose that \(D_\mu <\infty \). If Eq. (1.1) has a non-negative global solution \(u(t,x)\), then there exists a constant \(C'\) such that \(t^{\frac{1}{\alpha }}P_t a(x)\le C'\) for any \(t>0\) and \(x\in V\).

Proof

From Lemma 3.2 it follows that the graph is stochastically complete, that is,

$$\begin{aligned} \sum _{y\in V}\mu (y)p(t,x,y)=1 \quad \quad (t>0, \; x\in V). \end{aligned}$$

Applying Jensen’s inequality, we obtain for any \(t>s>0\) and \(x\in V\),

$$\begin{aligned} \begin{aligned} P_{t-s}(u(s,x)^{1+\alpha })&=\sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha }\\&\ge \left( \sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y) \right) ^{1+\alpha }\\&=\left( P_{t-s}u(s,x)\right) ^{1+\alpha }. \end{aligned} \end{aligned}$$
(4.1)

Note that \(P_ta(x)> 0\), and using Theorem 3.1 along with (4.1), one has

$$\begin{aligned} u(t,x)> \int _0^tP_{t-s}u(s,x)^{1+\alpha }ds\ge \int _0^t \left( P_{t-s}u(s,x)\right) ^{1+\alpha }ds \quad \quad (t>0, \; x\in V). \end{aligned}$$
(4.2)

Again, from Theorem 3.1 and the fact that u is non-negative we get \(u(t,x)\ge P_ta(x)\), which, together with (4.2), yields

$$\begin{aligned} u(t,x)> \int _0^t \left( P_{t-s}P_sa(x)\right) ^{1+\alpha }ds = \int _0^t \left( P_t a(x)\right) ^{1+\alpha }ds =t\left( P_t a(x)\right) ^{1+\alpha } \quad (t>0, \; x\in V). \end{aligned}$$
(4.3)

Further, utilizing (4.3) and Jensen’s inequality gives

$$\begin{aligned} \begin{aligned} P_{t-s}u(s,x)&> sP_{t-s}\left( P_s a(x)\right) ^{1+\alpha }\\&=s\sum _{y\in V}\mu (y)p(t-s,x,y)\left( P_s a(y)\right) ^{1+\alpha }\\&\ge s \left( \sum _{y\in V}\mu (y)p(t-s,x,y)\left( P_s a(y)\right) \right) ^{1+\alpha }\\&=s \left( P_{t-s}P_s a(x)\right) ^{1+\alpha }\\&=s \left( P_t a(x)\right) ^{1+\alpha } \end{aligned} \end{aligned}$$
(4.4)

for \(0<s<t\).

Substituting (4.4) into (4.2), we get

$$\begin{aligned} u(t,x)> \int _0^t s^{1+\alpha }\left( P_t a(x)\right) ^{{(1+\alpha )}^2} ds =\frac{1}{2+\alpha }t^{2+\alpha }\left( P_t a(x)\right) ^{{(1+\alpha )}^2} \quad \quad (t>0, \; x\in V). \end{aligned}$$
(4.5)

It follows from (4.5) and Jensen’s inequality that

$$\begin{aligned} \begin{aligned} P_{t-s}u(s,x)&>\frac{1}{2+\alpha }s^{2+\alpha }P_{t-s}\left( P_s a(x)\right) ^{{(1+\alpha )}^2} \ge \frac{1}{2+\alpha }s^{2+\alpha }\left( P_t a(x)\right) ^{{(1+\alpha )}^2}. \end{aligned} \end{aligned}$$
(4.6)

Substituting (4.6) into (4.2), one finds

$$\begin{aligned} u(t,x)> \frac{t^{1+(1+\alpha )+(1+\alpha )^2}}{(1+(1+\alpha )+(1+\alpha )^2) (1+(1+\alpha ))^{1+\alpha }}\left( P_t a(x)\right) ^{{(1+\alpha )}^3}. \end{aligned}$$

Repeating this procedure and performing the substitution into (4.2) k times, we obtain that for any \(t>0\) and \(x\in V\),

$$\begin{aligned} \begin{aligned}&u(t,x)\\ >&\frac{t^{1+(1+\alpha )+(1+\alpha )^2+\cdots +(1+\alpha )^{k-1}}\left( P_t a(x)\right) ^{{(1+\alpha )}^k}}{\left( 1+(1+\alpha )+\cdots +(1+\alpha )^{k-1}\right) \cdots \left( 1+(1+\alpha )+(1+\alpha )^2\right) ^{{(1+\alpha )}^{k-3}} \left( 1+(1+\alpha )\right) ^{{(1+\alpha )}^{k-2}}}\\ =&\frac{t^{\frac{(1+\alpha )^k-1}{\alpha }}\left( P_t a(x)\right) ^{{(1+\alpha )}^k}}{\left( \frac{(1+\alpha )^k-1}{\alpha } \right) \cdots \left( \frac{(1+\alpha )^3-1}{\alpha } \right) ^{(1+\alpha )^{k-3}} \left( \frac{(1+\alpha )^2-1}{\alpha } \right) ^{(1+\alpha )^{k-2}}}. \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} t^{\frac{(1+\alpha )^k-1}{\alpha (1+\alpha )^k}} P_t a(x)< u(t,x)^{\frac{1}{(1+\alpha )^k}} \prod _{i=2}^{k}\left( \frac{(1+\alpha )^i-1}{\alpha } \right) ^{(1+\alpha )^{-i}}. \end{aligned} \end{aligned}$$

Taking the limit \(k\rightarrow \infty \) on both sides of the above inequality and noting that \(u(t,x)^{\frac{1}{(1+\alpha )^k}}\rightarrow 1\) (since \(0<u(t,x)<\infty \) by (4.3)), we arrive at

$$\begin{aligned} \begin{aligned} t^{\frac{1}{\alpha }} P_t a(x)\le \prod _{i=2}^{\infty }\left( \frac{(1+\alpha )^i-1}{\alpha } \right) ^{(1+\alpha )^{-i}}<\infty . \end{aligned} \end{aligned}$$
(4.7)

Hence the assertion of Lemma  4.1 follows from (4.7). \(\square \)
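The finiteness of the product in (4.7) reflects the fact that \(\log \frac{(1+\alpha )^i-1}{\alpha }\) grows only linearly in i while the exponents \((1+\alpha )^{-i}\) decay geometrically. A short numerical check of this convergence (ours; the values of \(\alpha \) and the truncation level are arbitrary):

```python
import math

# Truncated products prod_{i=2}^{N} (((1+a)^i - 1)/a)^{(1+a)^{-i}}; the tail is
# negligible because the exponents decay geometrically.
def product_bound(a, terms=200):
    log_p = sum((1 + a) ** (-i) * math.log(((1 + a) ** i - 1) / a)
                for i in range(2, terms + 1))
    return math.exp(log_p)

for a in (0.5, 1.0, 2.0):
    print(a, product_bound(a))
```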

Lemma 4.2

Suppose that \(\alpha =\frac{2}{m}\) and \(D_\mu ,D_\omega <\infty \). Let G satisfy \(CDE'(n,0)\) and \({\mathcal {V}}(x_0,t)\le ct^m\) for \(x_0\in V\), \(t\ge r_0\), where \(c, m, r_0\) are positive constants and \(a(x_0)>0\). If Eq. (1.1) has a non-negative global solution \(u(t,x)\), then there exists a constant \(C''\) such that

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)a(y)\le C'' \end{aligned} \end{aligned}$$
(4.8)

for any \(r>\sqrt{\frac{\rho }{c_3}}\), where \(\rho =\max \{t_0, r_0^2\}\).

Proof

Using Proposition 2.4, we have for \(t>t_0>0\),

$$\begin{aligned} \begin{aligned} t^{\frac{m}{2}}P_t a(x_0)&=t^{\frac{m}{2}}\sum _{y\in V}\mu (y)p(t,x_0,y)a(y)\\&\ge t^{\frac{m}{2}}\sum _{y\in V}\mu (y)a(y) \frac{c_2}{{\mathcal {V}}(x_0,\sqrt{t})}\exp \left( -c_3\frac{d^2(x_0,y)}{t}\right) . \end{aligned} \end{aligned}$$

In view of \(\alpha =\frac{2}{m}\) and \({\mathcal {V}}(x_0,t)\le ct^m\), a simple calculation shows that for \(t>\rho =\max \{t_0, r_0^2\}\) and \(r>0\),

$$\begin{aligned} \begin{aligned} t^{\frac{1}{\alpha }}P_t a(x_0)&= t^{\frac{m}{2}}P_t a(x_0)\\&\ge \frac{c_2}{c}\sum _{y\in V}\mu (y)a(y)\exp \left( -c_3\frac{d^2(x_0,y)}{t}\right) \\&\ge \frac{c_2}{c}\exp \left( -\frac{c_3r^2}{t}\right) \sum _{y\in B_r}\mu (y)a(y). \end{aligned} \end{aligned}$$

Taking \(t=c_3r^2\), which satisfies \(t>\rho \) because \(r>\sqrt{\frac{\rho }{c_3}}\), and applying Lemma 4.1, we deduce that for any \(r>\sqrt{\frac{\rho }{c_3}}\),

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)a(y)\le C'', \end{aligned} \end{aligned}$$

where \(C''=e\frac{cC'}{c_2}\). This completes the proof of Lemma 4.2. \(\square \)

Remark 4.1

Note that for any \(t>0\) the non-negative global solution \(u(t,x)\) of (1.1) can be regarded as an initial value for Eq. (1.1), so the assertion of Lemma 4.2 implies that for any \(t>0\) and \(r>\sqrt{\frac{\rho }{c_3}}\),

$$\begin{aligned} \sum _{y\in B_r}\mu (y)u(t,y)\le C''. \end{aligned}$$
(4.9)

Proof of Theorem 1.1

We shall prove Theorem 1.1 by contradiction. Assume that there exists a non-negative global solution \(u(t,x)\) of (1.1).

For any given \(r>0\), we define

$$\begin{aligned} G(t,r)=\sum _{y\in B_r}\mu (y)p(t,x_0,y)u(t,y). \end{aligned}$$

By using the upper estimate of the heat kernel in Proposition 2.4 and the hypothesis of Theorem 1.1, \({\mathcal {V}}(x_0,t)\ge c^{-1}t^m\), one has for \(t\ge r_0^2\),

$$\begin{aligned} \begin{aligned} G(t,r)&\le \sum _{y\in B_r}\mu (y)u(t,y)\frac{c_1}{{\mathcal {V}}(x_0,\sqrt{t})}\\&\le cc_1t^{-\frac{m}{2}}\sum _{y\in B_r}\mu (y)u(t,y). \end{aligned} \end{aligned}$$

Utilizing the claim of Remark 4.1, we derive from the above inequality that for \(t\ge r_0^2\) and \(r>\sqrt{\frac{\rho }{c_3}}\),

$$\begin{aligned} \begin{aligned} t^{\frac{m}{2}}G(t,r)\le cc_1C''. \end{aligned} \end{aligned}$$
(4.10)

On the other hand, since \(u(s,x)\) and a(x) are non-negative, it follows from Theorem 3.1 that

$$\begin{aligned} u(t,x)\ge P_ta(x)=\sum _{y\in V}\mu (y)p(t,x,y)a(y)\ge C_1p(t,x_0,x), \end{aligned}$$
(4.11)

where \(C_1=\mu (x_0)a(x_0)\), and

$$\begin{aligned} \begin{aligned} u(t,x)&> \int _0^t P_{t-s}u(s,x)^{1+\alpha }ds=\int _0^t \sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha } ds. \end{aligned} \end{aligned}$$
(4.12)

Additionally, a direct application of Proposition 2.4 with \({\mathcal {V}}(x_0,t)\le ct^m\) gives

$$\begin{aligned} p(t,x_0,x)\ge \frac{c_2}{{\mathcal {V}}(x_0,\sqrt{t})}\exp \left( -\frac{c_3d^2(x_0,x)}{t}\right) \ge \frac{c_2}{c}t^{-\frac{m}{2}}\exp \left( -\frac{c_3d^2(x_0,x)}{t}\right) \end{aligned}$$
(4.13)

for \(t>\rho \) and \(x\in V\).

Combining (4.12), (4.13) and (4.11), we deduce that for \(x\in V\), \(t\ge c_3\alpha r^2\) and \(r>\sqrt{\frac{\rho }{c_3\alpha }}\),

$$\begin{aligned} \begin{aligned} u(t,x)&> C_1^{1+\alpha }\int _0^t \sum _{y\in V}\mu (y)p(t-s,x,y)p(s,x_0,y)^{1+\alpha }ds\\&> \frac{c_2^\alpha C_1^{1+\alpha }}{c^\alpha }\int _{c_3\alpha r^2}^t s^{-\frac{m\alpha }{2}} \sum _{y\in V}\mu (y)p(t-s,x,y)p(s,x_0,y)\exp \left( -\frac{c_3\alpha d^2(x_0,y)}{s}\right) ds\\&\ge \frac{c_2^\alpha C_1^{1+\alpha }}{c^\alpha }\int _{c_3\alpha r^2}^t s^{-\frac{m\alpha }{2}} \exp \left( -\frac{c_3\alpha r^2}{s}\right) \sum _{y\in B_r}\mu (y)p(t-s,x,y)p(s,x_0,y)ds\\&\ge C_2\int _{c_3\alpha r^2}^t s^{-\frac{m\alpha }{2}} \sum _{y\in B_r}\mu (y)p(t-s,x,y)p(s,x_0,y)ds\\&\ge C_2\int _{c_3\alpha r^2}^t s^{-\frac{m\alpha }{2}} \sum _{y\in B_r}\mu (y)p_r(t-s,x,y)p_r(s,x_0,y)ds, \end{aligned} \end{aligned}$$
(4.14)

where \(C_2=\frac{c_2^\alpha C_1^{1+\alpha }}{ec^\alpha }\); the last inequality follows from \(p(t,x,y)\ge p_r(t,x,y)\), which is a consequence of the monotonicity \(p_{r}(t,x,y)\le p_{r+1}(t,x,y)\) and the fact that \(p(t,x,y)=\lim _{r\rightarrow \infty } p_r(t,x,y)\).

Applying \(m\alpha =2\) and \(\sum _{y\in B_r}\mu (y)p_r(t-s,x,y)p_r(s,x_0,y)=p_r(t,x_0,x)\) to (4.14) gives

$$\begin{aligned} \begin{aligned} u(t,x)&> C_2p_r(t,x_0,x)\int _{c_3\alpha r^2}^t s^{-1}ds\\&=C_2p_r(t,x_0,x)\log \left( \frac{t}{c_3\alpha r^2}\right) \end{aligned} \end{aligned}$$

for \(x\in V\), \(t\ge c_3\alpha r^2\) and \(r>\sqrt{\frac{\rho }{c_3\alpha }}\).

Hence, we deduce that for any \(t\ge c_3\alpha r^2\) and \(r>\sqrt{\frac{\rho }{c_3\alpha }}\),

$$\begin{aligned} \begin{aligned} G(t,r)&\ge \sum _{y\in B_r}\mu (y)p_r(t,x_0,y)u(t,y)\\&> C_2\log \left( \frac{t}{c_3\alpha r^2}\right) \sum _{y\in B_r}\mu (y)p_r(t,x_0,y)p_r(t,x_0,y)\\&=C_2\log \left( \frac{t}{c_3\alpha r^2}\right) p_r(2t,x_0,x_0). \end{aligned} \end{aligned}$$
(4.15)

In addition, by Proposition 2.4 and Definition 2.2, there exists a constant \(R\ge \sqrt{\frac{\rho }{c_3\alpha }}\), which is independent of t, such that for \(r>R\),

$$\begin{aligned} \begin{aligned} p_r(2t,x_0,x_0)\ge \frac{c_2}{{\mathcal {V}}(x_0,\sqrt{2t})} \quad \quad (t>t_0). \end{aligned} \end{aligned}$$
(4.16)

Combining (4.15), (4.16) with \({\mathcal {V}}(x_0,t)\le ct^m\), we obtain

$$\begin{aligned} \begin{aligned} G(t,r)&> \frac{C_2c_2}{c}(2t)^{-\frac{m}{2}}\log \left( \frac{t}{c_3\alpha r^2}\right) , \end{aligned} \end{aligned}$$

which yields

$$\begin{aligned} \begin{aligned} t^{\frac{m}{2}}G(t,r)&> C_3\log \left( \frac{t}{c_3\alpha r^2}\right) \end{aligned} \end{aligned}$$
(4.17)

for any \(t\ge c_3\alpha r^2\) and \(r>R\), where \(C_3=\frac{C_2c_2}{2^{\frac{m}{2}}c}\) and \(R\ge \sqrt{\frac{\rho }{c_3\alpha }}\).

Consequently, from (4.10) and (4.17) we derive that

$$\begin{aligned} \begin{aligned} C_3\log \left( \frac{t}{c_3\alpha r^2}\right)&< cc_1C'' \end{aligned} \end{aligned}$$
(4.18)

for \(t\ge c_3\alpha r^2\) and \(r>\max \left\{ R, \sqrt{\frac{\rho }{c_3}}\right\} \).

Since the left-hand side of (4.18) tends to infinity as \(t\rightarrow \infty \) while the right-hand side is a fixed constant, the reverse inequality of (4.18) holds for sufficiently large t. This leads to a contradiction. Hence, there is no non-negative global solution of Eq. (1.1). The proof of Theorem 1.1 is complete.

5 Proof of Theorem 1.2

We first introduce and prove the following lemma.

Lemma 5.1

Suppose that \(0<\alpha \le \frac{2}{m}\) and \(D_\mu ,D_\omega <\infty \). Let G satisfy \(CDE'(n,0)\) and \({\mathcal {V}}(x_0,t)\le c''t^m \log ^\eta t\) for \(t\ge r_0>1\), where \(c'', m, \eta \) are positive constants and \(a(x_0)>0\). If Eq. (1.1) has a non-negative global solution \(u(t,x)\), then there exists a constant \(C''\) such that

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)u(t,y)\le C''\left( \log \sqrt{c_3r^2}\right) ^\eta \end{aligned} \end{aligned}$$
(5.1)

for any \(t>0\) and \(r>\sqrt{\frac{\rho }{c_3}}\), where \(\rho =\max \{t_0, r_0^2\}\).

Proof

Using Proposition 2.4 with \({\mathcal {V}}(x_0,t)\le c''t^m \log ^\eta t\), we have for \(t>\rho \) and \(r>0\),

$$\begin{aligned} \begin{aligned} t^{\frac{m}{2}}P_t a(x_0)&\ge \frac{c_2}{c''}\exp \left( -\frac{c_3r^2}{t}\right) \left( \log \sqrt{t}\right) ^{-\eta }\sum _{y\in B_r}\mu (y)a(y), \end{aligned} \end{aligned}$$

which, along with \(0<\alpha \le \frac{2}{m}\) (so that \(t^{\frac{m}{2}}\le t^{\frac{1}{\alpha }}\) for \(t>\rho \ge r_0^2>1\)) and the inequality asserted by Lemma 4.1, leads us to

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)a(y)\le \frac{c''C'}{c_2}\exp \left( \frac{c_3r^2}{t}\right) \left( \log \sqrt{t}\right) ^{\eta }. \end{aligned} \end{aligned}$$

Choosing \(t=c_3r^2\) in the above inequality gives

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)a(y)\le C''\left( \log \sqrt{c_3r^2}\right) ^{\eta }, \end{aligned} \end{aligned}$$
(5.2)

where \(C''=e\frac{c''C'}{c_2}\) and \(r>\sqrt{\frac{\rho }{c_3}}\).

Note that for any \(t>0\) the non-negative global solution \(u(t,x)\) of (1.1) can be regarded as an initial value for Eq. (1.1); thus inequality (5.2) implies that

$$\begin{aligned} \begin{aligned} \sum _{y\in B_r}\mu (y)u(t,y)\le C''\left( \log \sqrt{c_3r^2}\right) ^{\eta }. \end{aligned} \end{aligned}$$

\(\square \)

Proof of Theorem 1.2

We shall now prove Theorem 1.2 by contradiction. Assume that \(u(t,x)\) is a non-negative global solution of (1.1).

Let

$$\begin{aligned} G(t,r)=\sum _{y\in B_r}\mu (y)p(t,x_0,y)u(t,y) \quad \quad (t>0,\;r>0). \end{aligned}$$

By using Proposition 2.4, Lemma 5.1 and \({\mathcal {V}}(x_0,t)\ge c't^m\log ^{-\zeta }t\), we obtain that for \(t\ge r_0^2\) and \(r>\sqrt{\frac{\rho }{c_3}}\),

$$\begin{aligned} \begin{aligned} G(t,r)&\le \frac{c_1}{c'}t^{-\frac{m}{2}}\left( \log \sqrt{t}\right) ^{\zeta }\sum _{y\in B_r}\mu (y)u(t,y)\\&\le \frac{c_1C''}{2^\zeta c'}t^{-\frac{m}{2}}\left( \log t\right) ^{\zeta }\left( \log \sqrt{c_3r^2}\right) ^\eta . \end{aligned} \end{aligned}$$
(5.3)

Applying Theorem 3.1 and the above-mentioned inequality \(p(t,x,y)\ge p_r(t,x,y)\), we deduce that for any \(t>0\) and \(r>0\),

$$\begin{aligned} \begin{aligned} G(t,r)&> \sum _{y\in B_r}\mu (y)p(t,x_0,y)\int _0^t P_{t-s}u(s,y)^{1+\alpha } ds\\&= \sum _{y\in B_r}\mu (y)p(t,x_0,y)\int _0^t \sum _{z\in V}\mu (z)p(t-s,y,z)u(s,z)^{1+\alpha } ds\\&\ge \int _0^t \sum _{y\in B_r}\sum _{z\in B_r}\mu (y)\mu (z)p(t,x_0,y)p(t-s,y,z)u(s,z)^{1+\alpha } ds\\&\ge \int _0^t \sum _{y\in B_r}\sum _{z\in B_r}\mu (y)\mu (z)p_r(t,x_0,y)p_r(t-s,y,z)u(s,z)^{1+\alpha } ds\\&=\int _0^t \sum _{z\in B_r}\mu (z)p_r(2t-s,x_0,z)u(s,z)^{1+\alpha } ds\\&\ge \int _0^t \left( \sum _{z\in B_r}\mu (z)p_r(2t-s,x_0,z)u(s,z)\right) ^{1+\alpha } ds, \end{aligned} \end{aligned}$$
(5.4)

where the last inequality follows from the inequality \(\sum _{z\in B_r}\mu (z)p_r(2t-s,x_0,z)< 1\) of Proposition 2.1 and the Jensen-type inequality \(\sum _{i=1}^n\lambda _i\varphi (\tau _i)\ge \varphi (\sum _{i=1}^n\lambda _i\tau _i),\) where \(\varphi \) is convex and satisfies \(\varphi (0)=0\), \(\sum _{i=1}^n\lambda _i<1\), \(\lambda _i\ge 0\), \(i=1,2,\ldots , n\) (see [19]); the latter follows from the ordinary Jensen inequality by assigning the remaining weight \(1-\sum _{i=1}^n\lambda _i\) to the point 0.

Using inequality (4.3) and \(a_{0}=\inf _{x\in V}{a(x)}\), we deduce that

$$\begin{aligned} \begin{aligned} u(s,z)&> s\left( \sum _{x\in V}\mu (x)p(s,z,x)a(x)\right) ^{1+\alpha }\\&\ge s\left( a_0\sum _{x\in V}\mu (x)p(s,z,x)\right) ^{\alpha }\left( \sum _{x\in V}\mu (x)p(s,z,x)a(x)\right) .\\ \end{aligned} \end{aligned}$$

Further, it follows from the stochastic completeness of G that for any \(s>0\), \(z\in V\) and \(r>0\),

$$\begin{aligned} \begin{aligned} u(s,z)&> s a_0^{\alpha }\left( \sum _{x\in V}\mu (x)p(s,z,x)a(x)\right) \\&\ge s a_0^{\alpha }\mu (x_0)a(x_0)p(s,x_0,z) \\&\ge s a_0^{\alpha }\mu (x_0)a(x_0)p_r(s,x_0,z). \end{aligned} \end{aligned}$$
(5.5)

Substituting (5.5) into (5.4), we obtain that for \(t>0\) and \(r>0\),

$$\begin{aligned} \begin{aligned} G(t,r)&> \big (a_0^{\alpha }C_1\big )^{1+\alpha }\int _0^t s^{1+\alpha } \left( \sum _{z\in B_r}\mu (z)p_r(2t-s,x_0,z)p_r(s,x_0,z)\right) ^{1+\alpha } ds\\&= \big (a_0^{\alpha }C_1\big )^{1+\alpha }p_r(2t,x_0,x_0)^{1+\alpha }\int _0^t s^{1+\alpha } ds\\&= \frac{\big (a_0^{\alpha }C_1\big )^{1+\alpha }}{2+\alpha }t^{2+\alpha }p_r(2t,x_0,x_0)^{1+\alpha }, \end{aligned} \end{aligned}$$
(5.6)

where \(C_1=\mu (x_0)a(x_0)\).

By Proposition 2.4 and Definition 2.2, there exists a constant \(R'\ge \sqrt{\frac{\rho }{c_3}}\), which is independent of t, such that for \(r>R'\),

$$\begin{aligned} \begin{aligned} p_r(2t,x_0,x_0)\ge \frac{c_2}{{\mathcal {V}}(x_0,\sqrt{2t})} \quad \quad (t>t_0), \end{aligned} \end{aligned}$$

which, along with \({\mathcal {V}}(x_0,t)\le c''t^m \log ^\eta t\), yields that

$$\begin{aligned} \begin{aligned} p_r(2t,x_0,x_0)\ge \frac{2^\eta c_2}{c''}(2t)^{-\frac{m}{2}}\left( \log 2t\right) ^{-\eta } \quad \quad (t>\rho ). \end{aligned} \end{aligned}$$

Applying the above inequality to (5.6), we obtain that for \(t>\rho \) and \(r>R'\),

$$\begin{aligned} \begin{aligned} G(t,r)&> \frac{2^{\eta (1+\alpha )-\frac{m(1+\alpha )}{2}}}{2+\alpha }\left( \frac{a_0^{\alpha }c_2C_1}{c''}\right) ^{1+\alpha } t^{2+\alpha -\frac{m(1+\alpha )}{2}} \left( \log 2t\right) ^{-\eta (1+\alpha )}. \end{aligned} \end{aligned}$$
(5.7)

Combining (5.3) and (5.7), we get

$$\begin{aligned} \begin{aligned} \frac{2^{\eta (1+\alpha )-\frac{m(1+\alpha )}{2}}}{2+\alpha }\left( \frac{a_0^{\alpha }c_2C_1}{c''}\right) ^{1+\alpha } \frac{t^{2+\alpha -\frac{m\alpha }{2}}}{\left( \log 2t\right) ^{\eta (1+\alpha )}\left( \log t\right) ^{\zeta }} < \frac{c_1C''}{2^\zeta c'}\left( \log \sqrt{c_3r^2}\right) ^\eta . \end{aligned} \end{aligned}$$

Since \(m\alpha \le 2\) and \(t>\rho >1\), we obtain

$$\begin{aligned} \begin{aligned} {\widetilde{C}} \frac{t^{1+\alpha }}{\left( \log 2t\right) ^{\eta (1+\alpha )}\left( \log t\right) ^{\zeta }} < 1, \end{aligned} \end{aligned}$$

which yields

$$\begin{aligned} \begin{aligned} {\widetilde{C}} \left( \frac{t^b}{\log 2t}\right) ^{\eta (1+\alpha )+\zeta } < 1 \end{aligned} \end{aligned}$$
(5.8)

for \(t>\rho \), where \(b=\frac{1+\alpha }{\eta (1+\alpha )+\zeta }>0\), \({\widetilde{C}}=\frac{c'2^{\left( \eta -\frac{1}{\alpha }\right) (1+\alpha )+\zeta }}{(2+\alpha )c_1C''} \left( \frac{a_0^{\alpha }c_2C_1}{c''}\right) ^{1+\alpha }\left( \log \sqrt{c_3r^2}\right) ^{-\eta }\) and \(r>R'\).

Since \(b>0\), the left-hand side of (5.8) tends to infinity as \(t\rightarrow \infty \), so the reverse inequality of (5.8) holds for sufficiently large t. This leads to a contradiction. Hence, Eq. (1.1) has no non-negative global solution. This completes the proof of Theorem 1.2.

6 Concluding remarks

Let us consider a special case of Theorem 1.2. Taking \(\zeta =0\) in Theorem 1.2, we have

Claim 6.1

Suppose that \(\inf _{x\in V}a(x)>0\) and \(D_\mu ,D_\omega <\infty \). Let G satisfy \(CDE'(n,0)\) and \(c'r^m\le {\mathcal {V}}(x_0,r)\le c''r^m \log ^\eta r\) \((\eta >0)\) for \(r\ge r_0>1\) and \(x_0\in V\). If \(0<\alpha \le \frac{2}{m}\), then all non-negative solutions of (1.1) blow up in a finite time.

Recently, we have proved in [18] the following

Claim 6.2

Under the assumption of Claim 6.1, if \(\alpha >\frac{2}{m}\), then there exists a non-negative global solution of (1.1) when the initial value satisfies \(a(x)\le \delta p(\gamma , x_0,x)\) for some small \(\delta >0\) and any fixed \(\gamma \ge r_0^2\).

The results described in Claims 6.1 and 6.2 provide a complete answer to the existence and nonexistence problem of non-negative global solutions for equation (1.1) with \(\inf _{x\in V}a(x)>0\) on locally finite graphs that satisfy \(CDE'(n,0)\) and \(c'r^m\le {\mathcal {V}}(x_0,r)\le c''r^m \log ^\eta r\) \((\eta >0)\) for \(r\ge r_0\).

Moreover, under the condition that \(D_\mu , D_\omega <\infty \) and that the graph satisfies \(CDE'(n,0)\) and uniform polynomial volume growth of degree m, we obtained in [18] the following result:

Claim 6.3

If \(0<\alpha <\frac{2}{m}\), then all non-negative solutions of (1.1) blow up in a finite time and if \(\alpha >\frac{2}{m}\), then there exists a non-negative global solution of (1.1) for a sufficiently small initial value.

As a complement to Claim 6.3, the result stated in Theorem 1.1 shows that all non-negative solutions of (1.1) also blow up in a finite time if \(\alpha =\frac{2}{m}\).