1 Introduction

The existence or nonexistence of global solutions to a simple system

$$\begin{aligned} \left\{ \begin{array}{lc} u_t=\Delta u + u^{1+\alpha } &{}\quad (t>0, x\in {\mathbb {R}}^m),\\ u(0,x)=a(x) &{}\quad (x\in {\mathbb {R}}^m) \end{array} \right. \end{aligned}$$
(1.1)

has been extensively studied since the 1960s. One of the most important results on this problem is due to Fujita [5]. Fujita showed that, if \(0<m\alpha <2\), then there is no non-negative global solution for any non-trivial non-negative initial data. On the other hand, if \(m\alpha >2\), then there exists a global solution for sufficiently small initial data. Fujita’s results do not cover the critical exponent \(\alpha =\frac{2}{m}\); the nonexistence of global solutions for the critical exponent was proved in [10, 12].

Recently, the study of equations on graphs has attracted attention from many researchers in various fields (see [2, 6,7,8, 14, 16] and references therein). Grigoryan et al. [6,7,8] established existence results for Yamabe type equations and some nonlinear elliptic equations on graphs. Solutions of the heat equation and its variants on graphs have also been investigated by many authors because of their wide range of applications, from modelling energy flows through a network to image processing [3, 4]. Chung et al. [2] considered the extinction and positivity of the solutions of the Dirichlet boundary value problem for \(u_t=\Delta u - u^q\) with \(q>0\) on a network.

In [16], Xin et al. studied the blow-up properties of the Dirichlet boundary value problem for \(u_t=\Delta u + u^q\) with \(q>0\) on a finite graph. They showed that if \(q\le 1\), every solution is global, while if \(q> 1\), then under suitable conditions the nontrivial solutions blow up in finite time. In contrast to [16], in this paper we consider sufficient conditions for the existence or nonexistence of global solutions of the Cauchy problem for \(u_t=\Delta u + u^{1+\alpha }\) with \(\alpha >0\) on a finite or locally finite graph.

From another perspective, the problem discussed in this paper can be regarded as a discrete analogue of the problem (1.1), that is,

$$\begin{aligned} \left\{ \begin{array}{lc} u_t=\Delta u + u^{1+\alpha } \quad \text {in }(0,+\infty )\times V,\\ u(0,x)=a(x) \quad \text {in }V. \end{array} \right. \end{aligned}$$
(1.2)

Motivated by [5], we find that the key technical point in proving the existence of global solutions is to estimate the heat kernel. In [1], Bauer et al. obtained a Gaussian upper bound for graphs satisfying \(\textit{CDE}(n,0)\). Based on the results in [1], Horn et al. [11] derived a Gaussian lower bound for graphs satisfying \(\textit{CDE}'(n,0)\). In addition, Lin et al. [13] used a volume growth condition to obtain a weaker on-diagonal lower estimate of the heat kernel on graphs for large time. Using these heat kernel estimates, we can prove the existence and nonexistence of global solutions for problem (1.2) on finite or locally finite graphs.

The results of Fujita [5] reveal that the dimension of the space and the degree of non-linearity of the equation have a combined effect on deciding whether a solution of (1.1) exists globally in Euclidean space. It is worth noting that the main results of this paper exactly show that, for a finite or locally finite graph satisfying \(D_\mu , D_\omega <\infty \), \(\textit{CDE}'(n,0)\) and \(V(x,r)\simeq r^m\), the behaviors of the solutions for problem (1.2) strongly depend on m and \(\alpha \). In particular, for the lattice \({\mathbb {Z}}^m\), we have similar results as Fujita [5] in Euclidean space \({\mathbb {R}}^m\).

The rest of the paper is organized as follows. In Sect. 2, we introduce some concepts, notations and known results which are essential to prove the main results of this paper. In Sect. 3, we formally state our main results. In Sects. 4 and 5, we respectively prove the nonexistence and existence of global solutions for problem (1.2). In Sect. 6, we study the behaviors of the solutions for problem (1.2) under the curvature condition \(\textit{CDE}'\). In Sect. 7, we give an example to explain our conclusions intuitively. Meanwhile, we also provide a numerical experiment to demonstrate the example.

2 Preliminaries

Throughout the paper, we assume that \(G=(V,E)\) is a finite or locally finite connected graph and contains neither loops nor multiple edges, where V denotes the vertex set and E denotes the edge set. We write \(y\sim x\) if y is adjacent to x, or equivalently \(\overline{xy}\in E\). For each vertex x, its degree is defined by

$$\begin{aligned} \deg (x)=\#\{y\in V:y\sim x\}. \end{aligned}$$

Let \(\mu : V\rightarrow (0,\infty )\) be a positive measure on the vertices of G and satisfy \(\mu _0:=\inf _{x\in V}\mu (x)>0\). Let \(\omega :V\times V\rightarrow [0,\infty )\) be an (edge) weight function satisfying \(\omega _{xy}=\omega _{yx}\) and \(\omega _{xy}>0\) if and only if \(x\sim y\). Furthermore, we assume \(\omega _{\min }:=\inf _{\overline{xy}\in E} \omega _{xy}>0\).

Given a weight and a measure, we define

$$\begin{aligned} D_\omega :=\frac{\mu _{\max }}{\omega _{\min }} \end{aligned}$$

and

$$\begin{aligned} D_\mu :=\sup _{x\in V}\frac{m(x)}{\mu (x)}, \end{aligned}$$

where \(\mu _{\max }:=\sup _{x\in V} \mu (x)\) and \(m(x):=\sum _{y\sim x}\omega _{xy}\).

In this paper, all the graphs under consideration are assumed to satisfy

$$\begin{aligned} D_\mu <\infty . \end{aligned}$$

2.1 The Laplacian on graphs

Let C(V) be the set of real functions on V. For any \(1\le p<\infty \), we denote by

$$\begin{aligned} \ell ^p(V,\mu )=\left\{ f\in C(V):\sum _{x \in V} \mu (x)|f(x)|^p<\infty \right\} \end{aligned}$$

the set of \(\ell ^p\) integrable functions on V with respect to the measure \(\mu \). For \(p=\infty \), let

$$\begin{aligned} \ell ^{\infty }(V,\mu )=\left\{ f\in C(V):\sup _{x\in V}|f(x)|<\infty \right\} . \end{aligned}$$

For any function \(f\in C(V)\), the \(\mu \)-Laplacian \(\Delta \) of f is defined by

$$\begin{aligned} \Delta f(x)=\frac{1}{\mu (x)}\sum _{y\sim x}\omega _{xy}(f(y)-f(x)). \end{aligned}$$

It can be checked that \(D_\mu <\infty \) is equivalent to the \(\mu \)-Laplacian \(\Delta \) being bounded on \(\ell ^p(V,\mu )\) for all \(p\in [1,\infty ]\) (see [9]). Two special cases of the \(\mu \)-Laplacian are the case \(\mu \equiv 1\), which gives the standard graph Laplacian, and the case \(\mu (x)=\sum _{y\sim x}\omega _{xy}=m(x)\), which yields the normalized graph Laplacian.
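As a minimal illustration, the \(\mu \)-Laplacian can be evaluated directly from this definition. The following Python sketch uses a dictionary of symmetric edge weights; the representation and the name `mu_laplacian` are ad hoc choices for illustration only.

```python
# Minimal sketch: the mu-Laplacian on a finite weighted graph.
# omega[(x, y)] = omega_xy > 0 for x ~ y (stored symmetrically), mu[x] > 0.

def mu_laplacian(f, x, omega, mu):
    """(Delta f)(x) = (1/mu(x)) * sum_{y ~ x} omega_xy * (f(y) - f(x))."""
    return sum(w * (f[y] - f[x])
               for (z, y), w in omega.items() if z == x) / mu[x]

# Example: a path 0 - 1 - 2 with unit weights and mu = 1 (standard graph Laplacian).
omega = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
mu = {0: 1.0, 1: 1.0, 2: 1.0}
f = {0: 0.0, 1: 1.0, 2: 4.0}
print(mu_laplacian(f, 1, omega, mu))   # (0 - 1) + (4 - 1) = 2.0
```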

The gradient form \(\Gamma \) associated with a \(\mu \)-Laplacian is defined by

$$\begin{aligned} \Gamma (f,g)(x)=\frac{1}{2\mu (x)}\sum _{y\sim x}\omega _{xy}(f(y)-f(x))(g(y)-g(x)). \end{aligned}$$

We write \(\Gamma (f)=\Gamma (f,f)\).

The iterated gradient form \(\Gamma _2\) is defined by

$$\begin{aligned} 2\Gamma _2(f,g)=\Delta \Gamma (f,g)-\Gamma (f,\Delta g)-\Gamma (\Delta f,g). \end{aligned}$$

We write \(\Gamma _2(f)=\Gamma _2(f,f).\)

Besides, the integral of a function \(f\in \ell ^1(V,\mu )\) is defined by

$$\begin{aligned} \int _V fd\mu =\sum _{x\in V}\mu (x)f(x). \end{aligned}$$

The connected graph G can be endowed with its graph distance d(x, y), i.e., the smallest number of edges in a path between the vertices x and y. We then define balls \(B(x,r)=\{y\in V:d(x,y)\le r\}\) for any \(r\ge 0\). The volume of a subset U of V is written as \(V(U)=\sum _{x\in U}\mu (x)\); for convenience, we usually abbreviate \(V\big (B(x,r)\big )\) by V(x, r). In addition, a graph G satisfies a uniform volume growth of positive degree m if, for all \(x\in V\) and \(r> 0\),

$$\begin{aligned} V(x,r)\simeq r^m, \end{aligned}$$

that is, there exists a constant \(c'\ge 1\), such that \(\frac{1}{c'}r^m\le V(x,r) \le c'r^m\).
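For instance, on the lattice \({\mathbb {Z}}^m\) with \(\omega \equiv 1\) and \(\mu \equiv 1\), the ball B(x, r) consists of the lattice points within \(\ell ^1\)-distance r of x, so for every \(r\ge 1\),

$$\begin{aligned} V(x,r)=\#\{y\in {\mathbb {Z}}^m:\Vert y-x\Vert _1\le r\}\simeq r^m, \end{aligned}$$

with constants depending only on m; the same growth of degree m holds for the normalized Laplacian on \({\mathbb {Z}}^m\), where \(\mu (x)=m(x)=2m\) is constant (cf. Sect. 7).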

2.2 The heat kernel on graphs

Consider a function \(u:[0,+\infty )\times V\rightarrow {\mathbb {R}}\), where u(t, x) represents the potential energy at vertex \(x\in V\) and time \(t\in [0,+\infty )\). Assume that energy flows from x to an adjacent vertex y through their common edge. Under the very general assumption that the flow rate from x to y is proportional to (i) the difference of the potential energies at x and y and (ii) the conductivity \(\omega _{xy}\), it is easy to see that u satisfies the equation

$$\begin{aligned} u_t(t,x)=\sum _{y\sim x}\left( u(t,y)-u(t,x)\right) \frac{\omega _{xy}}{\mu (x)} \quad (t\ge 0, \, x\in V), \end{aligned}$$

which is the homogeneous heat equation \(u_t=\Delta u\).

We say that a function \(p:(0,+\infty )\times V \times V\rightarrow {\mathbb {R}}\) is a fundamental solution of the heat equation

$$\begin{aligned} u_t=\Delta u \end{aligned}$$

on G, if for any bounded initial condition \(u_0:V\rightarrow {\mathbb {R}}\), the function

$$\begin{aligned} u(t,x)=\sum _{y\in V}\mu (y)p(t,x,y)u_0(y) \quad (t>0, \, x\in V) \end{aligned}$$

is differentiable in t, satisfies the heat equation, and for any \(x\in V\), \(\lim _{t\rightarrow 0^+}u(t,x)=u_0(x)\) holds.
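On a finite graph the heat kernel can be computed explicitly: writing the \(\mu \)-Laplacian as a matrix L, the solution operator is \(e^{tL}\) (cf. (4.4) below), so \(p(t,x,y)=(e^{tL})_{xy}/\mu (y)\). A minimal Python sketch of this computation, using `scipy.linalg.expm` and the cycle \(\mathrm {C}_4\) as a test graph, might look as follows; the function names are illustrative.

```python
# Sketch: heat kernel p(t, x, y) on a finite weighted graph via e^{tL},
# where (L f)(x) = (1/mu(x)) * sum_{y~x} omega_xy * (f(y) - f(x)).
import numpy as np
from scipy.linalg import expm

def heat_kernel(t, W, mu):
    """W: symmetric weight matrix (W[x, y] = omega_xy), mu: vertex measure."""
    L = W / mu[:, None]                        # off-diagonal: omega_xy / mu(x)
    np.fill_diagonal(L, -W.sum(axis=1) / mu)   # diagonal: -m(x) / mu(x)
    # u(t, x) = sum_y mu(y) p(t, x, y) u0(y) = (e^{tL} u0)(x),
    # hence p(t, x, y) = (e^{tL})_{xy} / mu(y).
    return expm(t * L) / mu[None, :]

# Example: the cycle C_4 with unit weights and mu(x) = m(x) = 2 (normalized Laplacian).
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
mu = W.sum(axis=1)
p = heat_kernel(1.0, W, mu)
print(np.allclose(p, p.T))   # symmetry: p(t, x, y) = p(t, y, x)
print(p.dot(mu))             # sum_y mu(y) p(t, x, y) = 1 on a finite graph
```

The two printed checks illustrate properties (i) and (iii) of Proposition 2.1 below; on a finite graph the inequality in (iii) is in fact an equality.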

For completeness, we recall some important properties of the heat kernel (i.e., the fundamental solution) p(t, x, y) as follows:

Proposition 2.1

(see [11, 15]) For \(t,s>0\) and any \(x,y\in V\), we have

  1. (i)

      \(p(t,x,y)=p(t,y,x)\),

  2. (ii)

      \(p(t,x,y)> 0\),

  3. (iii)

      \(\sum _{y\in V}\mu (y)p(t,x,y)\le 1\),

  4. (iv)

      \(\partial _t p(t,x,y)=\Delta _xp(t,x,y)=\Delta _yp(t,x,y)\),

  5. (v)

      \(\sum _{z\in V}\mu (z)p(t,x,z)p(s,z,y)=p(t+s,x,y)\).

In [1], Bauer et al. introduced two slightly different curvature conditions which are called CDE and \(\textit{CDE}'\). Let us now recall the two definitions.

Definition 2.1

A graph G satisfies the exponential curvature dimension inequality \(\textit{CDE}(x,n,K)\), if for any positive function \(f:V\rightarrow {\mathbb {R}}^+\) such that \(\Delta f(x)<0\), we have

$$\begin{aligned} \Gamma _2(f)(x)-\Gamma \Bigg (f,\frac{\Gamma (f)}{f}\Bigg )(x)\ge \frac{1}{n}(\Delta f)(x)^2+K\Gamma (f)(x). \end{aligned}$$

We say that \(\textit{CDE}(n,K)\) is satisfied if \(\textit{CDE}(x,n,K)\) is satisfied for all \(x\in V\).

Definition 2.2

A graph G satisfies the exponential curvature dimension inequality \(\textit{CDE}'(x,n,K)\), if for any positive function \(f:V\rightarrow {\mathbb {R}}^+\), we have

$$\begin{aligned} \Gamma _2(f)(x)-\Gamma \Bigg (f,\frac{\Gamma (f)}{f}\Bigg )(x)\ge \frac{1}{n}f(x)^2(\Delta \log f)(x)^2+K\Gamma (f)(x). \end{aligned}$$

We say that \(\textit{CDE}'(n,K)\) is satisfied if \(\textit{CDE}'(x,n,K)\) is satisfied for all \(x\in V\).

The relation between \(\textit{CDE}(n,K)\) and \(\textit{CDE}'(n,K)\) is the following:

Remark 2.1

(see [1, 11]) \(\textit{CDE}'(n,K)\) implies \(\textit{CDE}(n,K)\).

Under the curvature condition \(\textit{CDE}(n,0)\), Bauer et al. [1] established a discrete analogue of the Li-Yau inequality in Theorem 4.20 and a Harnack-type inequality in Theorem 5.2. Using these results, Bauer et al. derived a heat kernel estimate on unweighted graphs (see Theorem 7.6 in [1]). According to Remark 5.1 in [1], for the heat kernel estimate on weighted graphs, we shall assume \(D_\omega <\infty \) instead of \(\max _{x\in V}\deg (x)<\infty \). Here, we show the relevant result on weighted graphs as follows:

Proposition 2.2

(see [1]) Suppose G satisfies \(D_\omega <\infty \) and \(\textit{CDE}(n,0)\). Then there exists a positive constant \(C_1\) such that, for any \(x,y \in V\) and \(t>0\),

$$\begin{aligned} p(t,x,y)\le \frac{C_1}{V(x,\sqrt{t})}, \end{aligned}$$
(2.1)

where \(C_1\) depends on \(n,D_\omega ,D_\mu \) and is denoted by \(C_1=C_1(n,D_\omega ,D_\mu )\). Furthermore, for any \(t>1\), there exist positive constants \(C_2\) and \(C_3\) such that

$$\begin{aligned} p(t,x,y)\ge C_2\frac{1}{t^n}\exp {\left( -C_3\frac{d^2(x,y)}{t-1}\right) }, \end{aligned}$$
(2.2)

where \(C_2\) depends on \(D_\omega ,D_\mu \) and is denoted by \(C_2=C_2(D_\omega ,D_\mu )\), and \(C_3\) depends on \(n,D_\omega ,D_\mu \) and is denoted by \(C_3=C_3(n,D_\omega ,D_\mu )\).

Although the upper bound obtained by Bauer et al. [1] is of Gaussian form, the lower bound is not quite Gaussian. Motivated by this, Horn et al. [11] improved some results in [1] and derived a Gaussian-type lower bound by introducing the curvature condition \(\textit{CDE}'(n,0)\) in Theorem 5.1 of [11]. We transcribe it below.

Proposition 2.3

(see [11]) Suppose G satisfies \(D_\omega <\infty \) and \(\textit{CDE}'(n,0)\). Then for any \(t_0>0\), there exist positive constants C and c such that

$$\begin{aligned} p(t,x,y)\ge \frac{C}{V(x,\sqrt{t})}\exp \left( -c\frac{d^2(x,y)}{t}\right) \end{aligned}$$

for all \(x,y\in V\) and \(t>t_0\), where C depends on n and is denoted by \(C=C(n)\), and c depends on \(n,D_\omega ,D_\mu \) and is denoted by \(c=c(n,D_\omega ,D_\mu )\). In particular, for any \(t_0>0\), we have

$$\begin{aligned} p(t,x,x)\ge \frac{C(n)}{V(x,\sqrt{t})} \end{aligned}$$
(2.3)

for all \(x\in V\) and \(t>t_0\), where C(n) is positive.

Without using the curvature condition \(\textit{CDE}'\), Lin et al. [13] utilized only the volume growth condition to obtain an on-diagonal lower estimate of the heat kernel on graphs for large time (see Theorem 3.2 in [13]). In fact, this estimate is enough to prove the nonexistence of global solutions for problem (3.1) stated in Sect. 3. Let us now recall the on-diagonal lower estimate.

Proposition 2.4

(see [13]) Assume that, for all \(x\in V\) and \(r\ge r_0\),

$$\begin{aligned} V(x,r)\le c_0 r^m, \end{aligned}$$

where \(r_0,c_0,m\) are positive constants. Then, for all large enough t,

$$\begin{aligned} p(t,x,x)\ge \frac{1}{4V(x,C_0 t\log t)}, \end{aligned}$$
(2.4)

where \(C_0>2D_\mu e\).

3 Main results

In this paper, we study whether or not there exist global solutions to the initial value problem for the semilinear heat equation

$$\begin{aligned} \left\{ \begin{array}{lc} u_t=\Delta u + u^{1+\alpha } \quad \text {in }(0,+\infty )\times V,\\ u(0,x)=a(x) \quad \text {in }V, \end{array} \right. \end{aligned}$$
(3.1)

where \(\alpha \) is a positive parameter, a(x) is bounded, non-negative and not trivial in V. Without loss of generality, we may assume \(a(\nu )>0\) with \(\nu \in V\). Throughout the present paper we shall only deal with non-negative solutions so that there is no ambiguity in the meaning of \(u^{1+\alpha }\). We shall also fix the vertex \(\nu \).

For convenience, we state the relevant definitions first.

Definition 3.1

Assume that \(T>0\). A non-negative function \(u=u(t,x)\) satisfying (3.1) in \([0,T]\times V\) is called a solution of (3.1) in [0, T], if u is bounded and continuous with respect to t in \([0,T]\times V\). Furthermore, a solution u of (3.1) in \([0,+\infty )\) is a function whose restriction to \([0,T]\times V\) is a solution of (3.1) in [0, T] for any \(T>0\). A solution u of (3.1) in \([0,+\infty )\) is also called a global solution of (3.1) in \([0,+\infty )\).

Definition 3.2

\({\mathcal {F}}[0,+\infty )\) is the set of all non-negative continuous (with respect to t) functions \(u=u(t,x)\) defined in \([0,+\infty )\times V\) satisfying

$$\begin{aligned} 0\le u(t,x) \le Mp(t+\gamma ,\nu ,x) \end{aligned}$$

with some constants \(M>0\) and \(\gamma >0\), where \(\nu \) is the vertex at which the function a is positive. Furthermore, if u is a solution of (3.1) in \([0,+\infty )\) and \(u\in {\mathcal {F}}[0,+\infty )\), then u is called a global solution of (3.1) in \({\mathcal {F}}[0,+\infty )\).

Definition 3.3

\({\mathcal {A}}\) is a set of numbers defined by

$$\begin{aligned} {\mathcal {A}}=\Big \{ \delta :\; 0<\delta<1, \, \delta ^{\frac{\alpha }{4}}\big (1 +\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }<1, \, {\widetilde{C}}\delta ^{\frac{\alpha }{2}}<1, \, (1+\alpha )\big (\delta +\delta ^{1+\frac{\alpha }{2}} (1+\delta ^{\frac{\alpha }{4}})^{1+\alpha }\big )^\alpha {\widetilde{C}}<1 \Big \}, \end{aligned}$$

where

$$\begin{aligned} {\widetilde{C}}=\frac{2\gamma }{m\alpha -2}\left( C_1(n,D_\omega ,D_\mu ) c_1^{-1}\right) ^\alpha \gamma ^{-\frac{m\alpha }{2}} \end{aligned}$$

and the constant \(C_1(n,D_\omega ,D_\mu )\) appearing in the definition of \({\widetilde{C}}\) is the same as the one in Proposition 2.2, while \(c_1\) and m are the constants in the volume growth lower bound \(V(x,r)\ge c_1 r^m\) assumed in Theorem 3.2.

Remark 3.1

The set \({\mathcal {A}}\) defined in Definition 3.3 is nonempty in terms of the constraint conditions of \({\mathcal {A}}\). In fact, by

$$\begin{aligned} \lim _{\delta \rightarrow 0}\delta ^{\frac{\alpha }{4}}\big (1 +\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }=0,\quad \lim _{\delta \rightarrow 0}{\widetilde{C}}\delta ^{\frac{\alpha }{2}}=0, \quad \lim _{\delta \rightarrow 0} (1+\alpha )\big (\delta +\delta ^{1+\frac{\alpha }{2}}(1 +\delta ^{\frac{\alpha }{4}})^{1+\alpha }\big )^\alpha {\widetilde{C}}=0, \end{aligned}$$

we can conclude that there exists a \(\delta _0\,\, (0<\delta _0<1)\) such that the conditions given in \({\mathcal {A}}\) hold for all \(\delta \in (0,\delta _{0})\).
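Since the constraints defining \({\mathcal {A}}\) are explicit, membership can be checked numerically once \({\widetilde{C}}\) is known. The following Python sketch is purely illustrative: the helper names and the numerical values taken for \(\gamma \), \(C_1\) and \(c_1\) are placeholders, since these constants are not given explicitly.

```python
# Sketch: test the four constraints defining the set A in Definition 3.3.

def C_tilde(alpha, m, gamma, C1, c1):
    """tilde{C} = 2*gamma/(m*alpha - 2) * (C1/c1)**alpha * gamma**(-m*alpha/2)."""
    assert m * alpha > 2
    return 2 * gamma / (m * alpha - 2) * (C1 / c1) ** alpha * gamma ** (-m * alpha / 2)

def in_A(delta, alpha, Ct):
    M = delta + delta ** (1 + alpha / 2) * (1 + delta ** (alpha / 4)) ** (1 + alpha)
    return (0 < delta < 1
            and delta ** (alpha / 4) * (1 + delta ** (alpha / 4)) ** (1 + alpha) < 1
            and Ct * delta ** (alpha / 2) < 1
            and (1 + alpha) * M ** alpha * Ct < 1)

Ct = C_tilde(alpha=3, m=1, gamma=1.0, C1=1.0, c1=1.0)   # placeholder constants
print([in_A(d, 3, Ct) for d in (0.5, 0.1, 0.01)])       # e.g. [False, True, True]
```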

Our main results are stated in the following theorems.

Theorem 3.1

Assume that, for all \(x\in V\) and \(r\ge r_0\), the volume growth \(V(x,r)\le c_0 r^m\) holds, where \(r_0,c_0,m\) are positive constants. If \(0<m\alpha <1\), then there is no non-negative global solution of (3.1) in \([0,+\infty )\) for any bounded, non-negative and non-trivial initial value.

Theorem 3.2

Assume that G satisfies \(D_\omega <\infty \), \(\textit{CDE}(n,0)\) and \(V(x,r)\ge c_1 r^m\) for some \(c_1>0,\;m>0\) and all \(r>0,\;x\in V\). Suppose for \(\gamma >0\) and \(\delta \in {\mathcal {A}}\), the initial value satisfies

$$\begin{aligned} 0\le a(x)\le \delta p(\gamma ,\nu ,x) \end{aligned}$$
(3.2)

for all \(x\in V\). If \(m\alpha >2\), then (3.1) has a global solution \(u=u(t,x)\) in \({\mathcal {F}}[0,+\infty ),\) which satisfies \(0\le u(t,x)\le M(\delta )p(t+\gamma ,\nu ,x)\) for any \((t,x)\in [0,+\infty )\times V,\) where \(M(\delta )=\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }.\)

Theorem 3.3

Suppose G satisfies \(D_\omega <\infty \), \(\textit{CDE}'(n,0)\) and \(V(x,r)\simeq r^m\) for some \(m>0\) and all \(r>0,\;x\in V\).

  1. (i)

    If \(0<m\alpha <2\), then there is no non-negative global solution of (3.1) in \([0,+\infty )\) for any bounded, non-negative and non-trivial initial value.

  2. (ii)

    If \(m\alpha >2\), then there exists a global solution of (3.1) in \({\mathcal {F}}[0,+\infty )\) for a sufficiently small initial value.

4 Proof of Theorem 3.1

We first introduce a lemma which will be used in the proof of Theorem 3.1.

Lemma 4.1

Let \(T>0\). If \(u=u(t,x)\) is a non-negative solution of (3.1) in [0, T], then we have

$$\begin{aligned} J_0^{-\alpha }-u(t,\nu )^{-\alpha }\ge \alpha t \quad (0< t\le T), \end{aligned}$$

where

$$\begin{aligned} J_0=J_0(t)=\sum _{x\in V} \mu (x)p(t,\nu ,x)a(x). \end{aligned}$$

Proof

Let \(\varepsilon \) be a positive constant. For any fixed \(t\in (0,T]\), we put

$$\begin{aligned} v_\varepsilon (s,x)=p(t-s+\varepsilon , \nu , x) \quad (0\le s\le t, \; x\in V) \end{aligned}$$

and

$$\begin{aligned} J_\varepsilon (s)=\sum _{x\in V}\mu (x)v_\varepsilon (s,x)u(s,x) \quad (0\le s\le t). \end{aligned}$$

(i) We prove that \(J_\varepsilon \) is positive for all \(s\in [0,t]\).

Since u is a non-negative solution of (3.1), we have \(u(s,\nu )^{1+\alpha }\ge 0\), and hence for all \(0\le s\le t\),

$$\begin{aligned} \frac{\partial u}{\partial s}(s,\nu )-\Delta u(s,\nu )\ge 0. \end{aligned}$$
(4.1)

Note that

$$\begin{aligned} \begin{aligned} \Delta u(s,\nu )&=\frac{1}{\mu (\nu )}\sum _{y\sim \nu } \omega _{\nu y}\big (u(s,y)-u(s,\nu )\big )\\&\ge -\frac{1}{\mu (\nu )}\sum _{y\sim \nu }\omega _{\nu y}u(s,\nu )\\&\ge -D_\mu u(s,\nu ), \end{aligned} \end{aligned}$$

then the inequality (4.1) gives

$$\begin{aligned} \frac{\partial u}{\partial s}(s,\nu )\ge -D_\mu u(s,\nu ), \end{aligned}$$

which, together with \(a(\nu )=u(0,\nu )>0\), yields

$$\begin{aligned} u(s,\nu )\ge u(0,\nu )\exp (-D_\mu s)>0, \quad s\in [0,t]. \end{aligned}$$
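Indeed, this follows from a Grönwall-type argument:

$$\begin{aligned} \frac{d}{ds}\Big (e^{D_\mu s}u(s,\nu )\Big )=e^{D_\mu s}\Big (\frac{\partial u}{\partial s}(s,\nu )+D_\mu u(s,\nu )\Big )\ge 0, \end{aligned}$$

so \(e^{D_\mu s}u(s,\nu )\ge u(0,\nu )>0\) for all \(s\in [0,t]\).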

Hence, for all \(0\le s \le t\), we have

$$\begin{aligned} \sum _{x\in V}u(s,x)>0. \end{aligned}$$

In view of the fact that \(v_\varepsilon (s,x)\) is positive in \([0,t]\times V\), we obtain \(J_\varepsilon (s)>0\) in [0, t].

(ii) We prove that \(J_\varepsilon \) is differentiable with respect to s and satisfies the following equation

$$\begin{aligned} \frac{d}{ds}J_\varepsilon (s)=\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }. \end{aligned}$$

Case 1 We consider the case where G is a finite connected graph.

Since \(\omega _{xy}=\omega _{yx}\), according to the definition of \(\Delta \), for any function \(f,g\in C(V)\), we have

$$\begin{aligned} \sum _{x\in V}\mu (x)\Delta f(x) g(x)=\sum _{x\in V}\mu (x)f(x)\Delta g(x). \end{aligned}$$
(4.2)

From the property of the heat kernel, we know that

$$\begin{aligned} \frac{\partial }{\partial s}v_\varepsilon =-\Delta v_\varepsilon . \end{aligned}$$

Thus

$$\begin{aligned} \begin{aligned} \frac{d}{ds}J_\varepsilon (s)=&\sum _{x\in V} \left( \mu (x)\frac{\partial }{\partial s} v_\varepsilon (s,x)u(s,x)+\mu (x)v_\varepsilon (s,x) \frac{\partial }{\partial s}u(s,x) \right) \\ =&\sum _{x\in V} \left( - \mu (x)\Delta v_\varepsilon (s,x)u(s,x)+\mu (x)v_\varepsilon (s,x) \Big ( \Delta u(s,x)+ u(s,x)^{1+\alpha } \Big ) \right) \\ =&-\sum _{x\in V} \mu (x)\Delta v_\varepsilon (s,x)u(s,x) +\sum _{x\in V} \mu (x)v_\varepsilon (s,x)\Delta u(s,x)\\&+\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha } \\ =&\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }. \end{aligned} \end{aligned}$$
(4.3)

Case 2 We consider the case where G is a locally finite connected graph.

Firstly, we claim that \(J_\varepsilon \) exists if G is locally finite.

Since u is bounded, we can assume that there exists a constant \(B>0\) such that for any \((s,x)\in [0,t]\times V\),

$$\begin{aligned} |u(s,x)|\le B. \end{aligned}$$

Hence, from the property of the heat kernel, we have

$$\begin{aligned} |J_\varepsilon |=\Bigg |\sum _{x\in V}\mu (x)v_\varepsilon (s,x)u(s,x)\Bigg |\le B\sum _{x\in V}\mu (x)v_\varepsilon (s,x) \le B<\infty . \end{aligned}$$

Secondly, we observe that, if G is locally finite, the interchange of summation and differentiation in the first step of (4.3) is allowed because \(J_\varepsilon (s)\) and \(\frac{d}{d s}J_\varepsilon (s)\) both converge uniformly.

Indeed, when \(\Delta \) is a bounded operator, we have

$$\begin{aligned} P_t u(x)=e^{t\Delta }u(x)=\sum _{k=0}^{+\infty }\frac{t^k \Delta ^k}{k!}u(x)=\sum _{y\in V}\mu (y)p(t,x,y)u(y). \end{aligned}$$
(4.4)

Furthermore, we can prove that the summation in (4.4) converges uniformly when u(x) is a bounded function. The details are as follows:

Assume that \(|u(x)|\le B\) in V. Then

$$\begin{aligned} |\Delta u(x)|=\Bigg |\frac{1}{\mu (x)}\sum _{y\sim x}\omega _{xy}\big (u(y)-u(x)\big )\Bigg |\le 2D_\mu B. \end{aligned}$$

By iteration, we obtain for any \(k\in \mathbb {N}\) and \(x\in V\),

$$\begin{aligned} \big |\Delta ^ku(x)\big |\le 2^kD_\mu ^kB. \end{aligned}$$

Thus for any \(t\in (0,T]\) and \(x\in V\),

$$\begin{aligned} \Bigg | \frac{t^k\Delta ^k}{k!}u(x) \Bigg | \le \Bigg | \frac{T^k\Delta ^k}{k!}u(x) \Bigg | \le \frac{T^k}{k!}2^kD_\mu ^kB. \end{aligned}$$

In view of

$$\begin{aligned} \sum _{k=0}^{+\infty }\frac{T^k}{k!}2^kD_\mu ^kB=Be^{2D_\mu T}<\infty , \end{aligned}$$

we deduce that \(\sum _{y\in V}\mu (y)p(t,x,y)u(y)\) converges uniformly in (0, T] when u(x) is bounded in V.

Since u(s, x) and \(u(s,x)^{1+\alpha }\) are both bounded, we obtain that \(J_\varepsilon (s)\) and \(\frac{d}{ds}J_\varepsilon (s)\) converge uniformly in [0, t].

Thirdly, we show that if G is locally finite, the equation

$$\begin{aligned} \sum _{y\in V} \mu (y)\Delta p(t,x,y)u(y)=\sum _{y\in V} \mu (y)p(t,x,y)\Delta u(y) \end{aligned}$$
(4.5)

holds for any bounded function u.

A direct computation yields

$$\begin{aligned} \begin{aligned} \sum _{y\in V} \mu (y)\Delta p(t,x,y)u(y)=&\sum _{y\in V}\sum _{z\in V}\omega _{yz}\big (p(t,x,z)u(y)-p(t,x,y)u(y)\big )\\ =&\sum _{y\in V}\sum _{z\in V}\omega _{yz}p(t,x,z)u(y) -\sum _{y\in V}\sum _{z\in V}\omega _{yz}p(t,x,y)u(y)\\ =&\sum _{z\in V}\sum _{y\in V}\omega _{yz}p(t,x,y)u(z) -\sum _{y\in V}\sum _{z\in V}\omega _{yz}p(t,x,y)u(y)\\ =&\sum _{y\in V}\sum _{z\in V}\omega _{yz}p(t,x,y)u(z) -\sum _{y\in V}\sum _{z\in V}\omega _{yz}p(t,x,y)u(y)\\ =&\sum _{y\in V} \mu (y)p(t,x,y)\Delta u(y). \end{aligned} \end{aligned}$$

Note that the order of summation above can be exchanged, since

$$\begin{aligned} \begin{aligned} \sum _{y\in V}\sum _{z\in V}\big |\omega _{yz}p(t,x,y)u(z)\big |&\le \sum _{y\in V}\mu (y)p(t,x,y)\left( \sum _{z\in V} \frac{\omega _{yz}}{\mu (y)}\left| u(z)\right| \right) \\&\le D_\mu B. \end{aligned} \end{aligned}$$

Finally, we need to show that if G is locally finite, the interchange of sums in the third step of (4.3) holds because the sums are convergent.

Noting that \(|\Delta u(s,x)| \le 2D_\mu B\),   \(|u(s,x)^{1+\alpha }|\le B^{1+\alpha }\) and

$$\begin{aligned} \sum _{x\in V} \mu (x)\Delta v_\varepsilon (s,x)u(s,x)=\sum _{x\in V} \mu (x)v_\varepsilon (s,x)\Delta u(s,x), \end{aligned}$$

we deduce that for any \(s\in [0,t]\), the sums \(\sum _{x\in V} \mu (x)v_\varepsilon (s,x)\Delta u(s,x)\), \(\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }\) and \(\sum _{x\in V} \mu (x)\Delta v_\varepsilon (s,x)u(s,x)\) all converge.

(iii) Since \(v_\varepsilon > 0\) and

$$\begin{aligned} \sum _{x\in V}\mu (x)v_\varepsilon (s,x)\le 1, \end{aligned}$$

applying Jensen’s inequality to the convex function \(x^{1+\alpha }\) \((\alpha >0)\), we obtain

$$\begin{aligned} \frac{\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }}{\sum _{x\in V}\mu (x)v_\varepsilon (s,x)}\ge \left( \frac{\sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)}{\sum _{x\in V}\mu (x)v_\varepsilon (s,x)}\right) ^{1+\alpha }, \end{aligned}$$

that is,

$$\begin{aligned} \begin{aligned}&\left( \sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)\right) ^{1+\alpha }\\&\quad \le \left( \sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }\right) \left( \sum _{x\in V} \mu (x)v_\varepsilon (s,x)\right) ^\alpha \\&\quad \le \sum _{x\in V} \mu (x)v_\varepsilon (s,x)u(s,x)^{1+\alpha }. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \frac{d}{ds}J_\varepsilon \ge J_\varepsilon ^{1+\alpha }. \end{aligned}$$

Using the Mean-value theorem, we have

$$\begin{aligned} J_\varepsilon (0)^{-\alpha }-J_\varepsilon (t)^{-\alpha }\ge \alpha t. \end{aligned}$$
(4.6)
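Indeed, since \(J_\varepsilon \) is positive and differentiable, applying the mean value theorem to \(s\mapsto -\frac{1}{\alpha }J_\varepsilon (s)^{-\alpha }\) on [0, t] gives, for some \(\xi \in (0,t)\),

$$\begin{aligned} \frac{1}{\alpha }\Big (J_\varepsilon (0)^{-\alpha }-J_\varepsilon (t)^{-\alpha }\Big ) =J_\varepsilon (\xi )^{-1-\alpha }\,\frac{d}{ds}J_\varepsilon (\xi )\, t \ge J_\varepsilon (\xi )^{-1-\alpha }J_\varepsilon (\xi )^{1+\alpha }\, t=t, \end{aligned}$$

which is (4.6).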

According to (4.4), we assert that for any bounded function u,

$$\begin{aligned} \lim _{t\rightarrow 0^+}P_t u(x)=\lim _{t\rightarrow 0^+}\sum _{y\in V} \mu (y)p(t,x,y)u(y)=u(x), \end{aligned}$$

from which we get

$$\begin{aligned} J_\varepsilon (t)\rightarrow u(t,\nu ) \quad \quad (\varepsilon \rightarrow 0^+). \end{aligned}$$
(4.7)

Moreover, it is not difficult to find that

$$\begin{aligned} J_\varepsilon (0)\rightarrow J_0 \quad \quad (\varepsilon \rightarrow 0^+). \end{aligned}$$
(4.8)

In fact, if G is a finite connected graph, then (4.8) is obvious. If G is a locally finite connected graph, we can exchange the limit with the summation because \(J_\varepsilon (0)\) converges uniformly. This leads to (4.8).

Applying (4.7) and (4.8) to (4.6), for any \(t\in (0,T]\), we have

$$\begin{aligned} J_0^{-\alpha }-u(t,\nu )^{-\alpha }\ge \alpha t. \end{aligned}$$

This completes the proof of Lemma 4.1. \(\square \)

Proof of Theorem 3.1

With the help of Lemma 4.1 we can now prove Theorem 3.1 by contradiction.

Suppose that there exists a non-negative global solution \(u=u(t,x)\) of (3.1) in \([0,+\infty )\).

By Lemma 4.1, we have for any \(t>0\),

$$\begin{aligned} J_0^{-\alpha }\ge u(t,\nu )^{-\alpha }+\alpha t \ge \alpha t. \end{aligned}$$

From Proposition 2.4 and the given condition \(V(x,r)\le c_0 r^m\;(r\ge r_0)\), we have for all large enough t,

$$\begin{aligned} p(t,\nu ,\nu )\ge \frac{1}{4c_0C_0^m}\left( t\log t\right) ^{-m} \quad \, (C_0>2D_\mu e). \end{aligned}$$

Hence, for all sufficiently large t,

$$\begin{aligned} \begin{aligned} J_0&=\sum _{x\in V}\mu (x)p(t,\nu ,x)a(x)\\&\ge \mu (\nu )a(\nu )p(t,\nu ,\nu )\\&\ge \overline{C}\left( t\log t\right) ^{-m}, \end{aligned} \end{aligned}$$

where \(\overline{C}=\frac{\mu (\nu )a(\nu )}{4c_0C_0^m}>0\) and \(C_0>2D_\mu e\).

Combining \(J_0^{-\alpha }\ge \alpha t\) and \(J_0\ge \overline{C}\left( t\log t\right) ^{-m}\), for all large enough t, we get

$$\begin{aligned} \left( t\log t\right) ^{m\alpha }\ge \alpha \overline{C}^\alpha t. \end{aligned}$$
(4.9)

However, if \(0<m\alpha <1\), the inequality (4.9) fails for all sufficiently large t, which is a contradiction.
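To see this, note that

$$\begin{aligned} \frac{\left( t\log t\right) ^{m\alpha }}{\alpha \overline{C}^\alpha t} =\frac{t^{m\alpha -1}\left( \log t\right) ^{m\alpha }}{\alpha \overline{C}^\alpha } \longrightarrow 0 \quad (t\rightarrow +\infty ), \end{aligned}$$

since \(m\alpha -1<0\).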

This completes the proof of Theorem 3.1. \(\square \)

5 Proof of Theorem 3.2

Before proving Theorem 3.2, we consider the following integral equation (5.1) associated with (3.1) and discuss its solutions u(t, x) in \({\mathcal {F}}(0,+\infty )\), where \({\mathcal {F}}(0,+\infty )\) is defined as in Definition 3.2 with \([0,+\infty )\) replaced by \((0,+\infty )\).

$$\begin{aligned} \left\{ \begin{aligned}&u(t,x)=u_0(t,x)+\int _0^t\sum \limits _{y\in V}\mu (y) p(t-s,x,y)u(s,y)^{1+\alpha } ds&\quad \text {in }(0,+\infty )\times V,\\&u_0(t,x)=\sum \limits _{y\in V}\mu (y)p(t,x,y)a(y)&\quad \text {in }(0,+\infty )\times V, \end{aligned} \right. \end{aligned}$$
(5.1)

where \(\alpha >0\), a(y) is bounded, non-negative and not trivial in V. Moreover, we assume \(0\le a(y)\le \delta p(\gamma ,\nu ,y)\) in V for \(\gamma >0\) and \(\delta \in {\mathcal {A}}\). It should be noted that \(\nu \) is the vertex, as stated previously, at which the function a is positive. Furthermore, the set \({\mathcal {A}}\) satisfies

$$\begin{aligned} {\mathcal {A}}=\Big \{ \delta :\; 0<\delta<1, \, \delta ^{\frac{\alpha }{4}}\big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }<1, \, {\widetilde{C}}\delta ^{\frac{\alpha }{2}}<1, \, (1+\alpha )\big (\delta +\delta ^{1+\frac{\alpha }{2}} (1+\delta ^{\frac{\alpha }{4}})^{1+\alpha }\big )^\alpha {\widetilde{C}}<1 \Big \}, \end{aligned}$$

where \({\widetilde{C}}=\frac{2\gamma }{m\alpha -2}\left( C_1(n,D_\omega ,D_\mu ) c_1^{-1}\right) ^\alpha \gamma ^{-\frac{m\alpha }{2}}>0\).

For any function v(t, x) with \(|v|\in {\mathcal {F}}(0,+\infty )\), we can define its norm

$$\begin{aligned} ||v||=\sup \limits _{t> 0, x\in V}\frac{|v(t,x)|}{\rho (t,x)}, \end{aligned}$$
(5.2)

where \(\rho (t,x)=p(t+\gamma ,\nu ,x)\).

Let

$$\begin{aligned} (\Phi u)(t,x)= \int _0^t\sum \limits _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha } ds. \end{aligned}$$

We first prove some lemmas which are essential to prove Theorem 3.2.

Lemma 5.1

Let G satisfy \(D_\omega <\infty \), \(\textit{CDE}(n,0)\) and \(V(x,r)\ge c_1 r^m\,\,(c_1>0, m>0, r> 0)\) for all \(x\in V\). If \(m\alpha >2\), then

$$\begin{aligned} \Phi \rho \in {\mathcal {F}}(0,+\infty ) \quad \text {and} \quad ||\Phi \rho ||\le {\widetilde{C}}, \end{aligned}$$

where \({\widetilde{C}}=\frac{2\gamma }{m\alpha -2}\left( C_1 c_1^{-1}\right) ^\alpha \gamma ^{-\frac{m\alpha }{2}}>0\).

Proof

For any \((t,x)\in (0,+\infty )\times V\), we have

$$\begin{aligned} \begin{aligned} (\Phi \rho )(t,x)&=\int _0^t\sum _{y\in V}\mu (y)p(t-s,x,y) \rho (s,y)^{1+\alpha } ds\\&=\int _0^t\sum _{y\in V}\mu (y)p(t-s,x,y)p(s+\gamma , \nu ,y)p(s+\gamma ,\nu ,y)^\alpha ds. \end{aligned} \end{aligned}$$

Obviously, \(\Phi \rho \) is non-negative and continuous with respect to t in \((0,+\infty )\times V\).

According to Proposition 2.2, for any \(s\ge 0\), there exists a positive constant \(C_1\) such that

$$\begin{aligned} p(s+\gamma ,\nu ,y)\le \frac{C_1}{V\big (\nu ,\sqrt{s+\gamma }\big )}. \end{aligned}$$

Since

$$\begin{aligned} V\big (\nu ,\sqrt{s+\gamma }\big )\ge c_1(s+\gamma )^{\frac{m}{2}}, \end{aligned}$$

we obtain

$$\begin{aligned} p(s+\gamma ,\nu ,y)\le C_1c_1^{-1}(s+\gamma )^{-\frac{m}{2}}. \end{aligned}$$
(5.3)

Hence,

$$\begin{aligned} \begin{aligned} (\Phi \rho )(t,x)&\le \int _0^t\sum _{y\in V}\mu (y)p(t-s,x,y) p(s+\gamma ,\nu ,y)\left( C_1c_1^{-1}\right) ^\alpha (s+\gamma )^{- \frac{m\alpha }{2}} ds\\&\le \left( C_1 c_1^{-1}\right) ^\alpha \int _0^t(s+\gamma )^{- \frac{m\alpha }{2}} \sum _{y\in V}\mu (y)p(t-s,x,y)p(s+\gamma ,\nu ,y)ds\\&=\left( C_1 c_1^{-1}\right) ^\alpha p(t+\gamma ,\nu ,x) \int _0^t(s+\gamma )^{-\frac{m\alpha }{2}}ds. \end{aligned} \end{aligned}$$

Furthermore,

$$\begin{aligned} \begin{aligned} \int _0^t(s+\gamma )^{-\frac{m\alpha }{2}}ds&\le \int _0^{+\infty } (s+\gamma )^{-\frac{m\alpha }{2}}ds\\&=\left[ \frac{(s+\gamma )^{1-\frac{m\alpha }{2}}}{1-\frac{m\alpha }{2}}\right] _{s=0}^{s=+\infty } =\frac{2\gamma }{m\alpha -2}\gamma ^{-\frac{m\alpha }{2}}. \end{aligned} \end{aligned}$$
(5.4)

Note that the convergence of the improper integral in (5.4) relies on the assumption \(m\alpha >2\).

Thus for any \((t,x)\in (0,+\infty )\times V\),

$$\begin{aligned} (\Phi \rho )(t,x)\le {\widetilde{C}}p(t+\gamma ,\nu ,x), \end{aligned}$$
(5.5)

where \({\widetilde{C}}=\frac{2\gamma }{m\alpha -2}\left( C_1 c_1^{-1}\right) ^\alpha \gamma ^{-\frac{m\alpha }{2}}>0\).

It follows that

$$\begin{aligned} \Phi \rho \in {\mathcal {F}}(0,+\infty )\quad \text {and} \quad ||\Phi \rho ||\le {\widetilde{C}}. \end{aligned}$$

This completes the proof of Lemma 5.1. \(\square \)

Lemma 5.2

Under the conditions of Lemma 5.1, if \(u\in {\mathcal {F}}(0,+\infty )\), then we have

$$\begin{aligned} \Phi u\in {\mathcal {F}}(0,+\infty ) \quad \text {and} \quad ||\Phi u||\le {\widetilde{C}}||u||^{1+\alpha }. \end{aligned}$$

Proof

Since \(u\in {\mathcal {F}}(0,+\infty )\), its norm is well defined and \(u(t,x)\le ||u||\rho (t,x)\) for any \((t,x)\in (0,+\infty )\times V\).

A simple calculation shows that

$$\begin{aligned} \begin{aligned} 0\le (\Phi u)(t,x)&=\int _0^t\sum \limits _{y\in V}\mu (y) p(t-s,x,y)u(s,y)^{1+\alpha } ds\\&\le ||u||^{1+\alpha }\int _0^t\sum \limits _{y\in V}\mu (y) p(t-s,x,y)\rho (s,y)^{1+\alpha } ds\\&= ||u||^{1+\alpha }(\Phi \rho )(t,x). \end{aligned} \end{aligned}$$
(5.6)

Combining (5.6) with (5.5), we get

$$\begin{aligned} \Phi u\in {\mathcal {F}}(0,+\infty ) \quad \text {and} \quad ||\Phi u||\le {\widetilde{C}}||u||^{1+\alpha }. \end{aligned}$$

This completes the proof of Lemma 5.2. \(\square \)

Lemma 5.3

Under the conditions of Lemma 5.1, we suppose that u and v are in \({\mathcal {F}}(0,+\infty )\) and satisfy \(||u||\le M\) and \(||v||\le M\) with a positive number M. Then we have

$$\begin{aligned} ||\Phi u-\Phi v||\le (1+\alpha )M^\alpha {\widetilde{C}}||u-v||. \end{aligned}$$

Proof

Since \(u,v\in {\mathcal {F}}(0,+\infty )\), for any \((t,x)\in (0,+\infty )\times V\), we get

$$\begin{aligned} \big |u(t,x)-v(t,x)\big | \le |u(t,x)|+|v(t,x)| \le 2M \rho (t,x), \end{aligned}$$

which implies \(|u-v|\in {\mathcal {F}}(0,+\infty )\).

By using the elementary inequality

$$\begin{aligned} |p^{1+\alpha }-q^{1+\alpha }|\le (1+\alpha )|p-q|\max \{p^\alpha , q^\alpha \} \quad \,\, (q\ge 0, p\ge 0), \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} \big |u(s,y)^{1+\alpha }-v(s,y)^{1+\alpha }\big |&\le (1+\alpha )|u(s,y)-v(s,y)|\max \{u(s,y)^{\alpha },v(s,y)^{\alpha }\}\\&\le (1+\alpha )M^\alpha \rho (s,y)^\alpha \big |u(s,y)-v(s,y)\big |\\&\le (1+\alpha )M^\alpha \rho (s,y)^\alpha ||u-v||\rho (s,y)\\&=(1+\alpha )M^\alpha \rho (s,y)^{1+\alpha }||u-v||. \end{aligned} \end{aligned}$$

Case 1 When G is a finite connected graph, for any \((t,x)\in (0,+\infty )\times V\), we find that

$$\begin{aligned} \begin{aligned}&\big |\Phi u(t,x)-\Phi v(t,x)\big |\\&\quad =\Bigg |\int _0^t\sum \limits _{y\in V}\mu (y)p(t-s,x,y) \Big (u(s,y)^{1+\alpha }-v(s,y)^{1+\alpha }\Big ) ds\Bigg |\\&\quad \le \int _0^t\sum \limits _{y\in V}\mu (y)p(t-s,x,y) \Big |u(s,y)^{1+\alpha }-v(s,y)^{1+\alpha }\Big | ds\\&\quad \le (1+\alpha )M^\alpha ||u-v||\int _0^t\sum \limits _{y\in V}\mu (y)p(t-s,x,y)\rho (s,y)^{1+\alpha } ds\\&\quad =(1+\alpha )M^\alpha ||u-v||(\Phi \rho )(t,x)\\&\quad \le (1+\alpha )M^\alpha ||u-v||{\widetilde{C}}\rho (t,x), \end{aligned} \end{aligned}$$
(5.7)

thus

$$\begin{aligned} ||\Phi u-\Phi v||\le (1+\alpha )M^\alpha {\widetilde{C}}||u-v||. \end{aligned}$$
(5.8)

Case 2 When G is a locally finite connected graph, we have the same inequality as (5.8) above. In fact, the inequality (5.8) will follow once we show that the first equality in (5.7) holds. The details are as follows:

Since \(u\in {\mathcal {F}}(0,+\infty )\) and \(||u||\le M\), we have

$$\begin{aligned} 0\le u(t,x)\le Mp(t+\gamma ,\nu ,x). \end{aligned}$$

By (5.3), we know that

$$\begin{aligned} p(t+\gamma , \nu ,x)\le C_1c_1^{-1}\gamma ^{-\frac{m}{2}}, \end{aligned}$$

hence for any \((t,x)\in (0,+\infty )\times V\), we deduce that

$$\begin{aligned} 0\le u(t,x)\le B, \end{aligned}$$

where \(B=MC_1c_1^{-1}\gamma ^{-\frac{m}{2}}\).

Similarly, v also satisfies \(0\le v(t,x)\le B\).

Hence, \(\sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha }\) and \(\sum _{y\in V}\mu (y)p(t-s,x,y)v(s,y)^{1+\alpha }\) both are convergent, which shows that

$$\begin{aligned} \begin{aligned}&\sum \limits _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha }-\sum \limits _ {y\in V}\mu (y)p(t-s,x,y)v(s,y)^{1+\alpha }\\&\quad =\sum \limits _{y\in V}\mu (y)p(t-s,x,y)\Big (u(s,y)^{1+\alpha }-v(s,y)^{1+\alpha }\Big ). \end{aligned} \end{aligned}$$

Based on the above discussion, we verify the validity of inequalities (5.7) and (5.8) under the condition that G is locally finite.

The proof of Lemma 5.3 is complete. \(\square \)

Proof of Theorem 3.2

(i) We construct the solution of (5.1) in \({\mathcal {F}}(0,+\infty )\).

Set up the iteration relation

$$\begin{aligned} u_{n+1}=u_0+\Phi u_n \quad \quad (n=0,1,\ldots ) \end{aligned}$$
(5.9)

with \(u_0\) given by (5.1) and \(u_n\in {\mathcal {F}}(0,+\infty )\)  \((n=1,2,\ldots )\).
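To make the construction concrete, the following Python sketch (an illustration of ours, not taken from the paper) implements the iteration (5.9) on a finite graph: the heat semigroup is evaluated as a matrix exponential, as in (4.4), and the time integral in \(\Phi \) is approximated by a left Riemann sum on a uniform grid. The function names and discretization parameters are arbitrary.

```python
# Sketch of the iteration (5.9): u_{n+1} = u_0 + Phi u_n on a finite graph.
import numpy as np
from scipy.linalg import expm

def laplacian_matrix(W, mu):
    # mu-Laplacian as a matrix: L[x, y] = omega_xy / mu(x), L[x, x] = -m(x)/mu(x).
    L = W / mu[:, None]
    np.fill_diagonal(L, -W.sum(axis=1) / mu)
    return L

def picard_iterates(W, mu, a, alpha, T=2.0, nt=100, n_iter=10):
    """Return the time grid and the iterates u_k(t_i, x) of (5.9)."""
    L = laplacian_matrix(W, mu)
    ts = np.linspace(0.0, T, nt + 1)
    dt = ts[1] - ts[0]
    # (P[i] f)(x) = sum_y mu(y) p(t_i, x, y) f(y) = (e^{t_i L} f)(x).
    P = np.array([expm(t * L) for t in ts])
    u0 = P @ a                       # the free solution u_0(t_i, x) of (5.1)
    u = u0.copy()
    iterates = [u0.copy()]
    for _ in range(n_iter):
        new = u0.copy()
        for i in range(1, nt + 1):   # left Riemann sum for (Phi u)(t_i, x)
            new[i] += dt * sum(P[i - j] @ (u[j] ** (1 + alpha)) for j in range(i))
        u = new
        iterates.append(u.copy())
    return ts, np.array(iterates)
```

Feeding in, for example, the cycle \(\mathrm {C}_\mathrm {6}\) of Sect. 7 with small initial data, one can compare successive iterates, which by the contraction estimate (5.11) below converge geometrically in the norm (5.2) when \(\delta \in {\mathcal {A}}\).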

Recall the definition of the set \({\mathcal {A}}\):

$$\begin{aligned} {\mathcal {A}}=\Big \{ \delta :\; 0<\delta<1, \, \delta ^{\frac{\alpha }{4}}\big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }<1, \, {\widetilde{C}}\delta ^{\frac{\alpha }{2}}<1, \, (1+\alpha )\big (\delta +\delta ^{1+\frac{\alpha }{2}} (1+\delta ^{\frac{\alpha }{4}})^{1+\alpha }\big )^\alpha {\widetilde{C}}<1 \Big \}. \end{aligned}$$

On account of the assumption of Theorem 3.2 that the initial value satisfies

$$\begin{aligned} 0\le a(y)\le \delta p(\gamma ,\nu ,y), \end{aligned}$$

where \(\delta \in {\mathcal {A}}\), we have for any \((t,x)\in (0,+\infty )\times V\),

$$\begin{aligned} \begin{aligned} 0\le u_0(t,x)&\le \delta \sum _{y\in V}\mu (y)p(t,x,y)p(\gamma ,\nu ,y)\\&=\delta p(t+\gamma ,\nu ,x), \end{aligned} \end{aligned}$$

which shows \(u_0\in {\mathcal {F}}(0,+\infty )\) and \(||u_0||\le \delta \), where \(\delta \in {\mathcal {A}}\).

According to Lemma 5.2, we obtain the inequalities

$$\begin{aligned} \begin{aligned} ||u_{n+1}||\le ||u_0||+||\Phi u_n||\le ||u_0||+{\widetilde{C}}||u_n||^{1+\alpha }, \end{aligned} \end{aligned}$$

that is,

$$\begin{aligned} ||u_{n+1}|| \le \delta +{\widetilde{C}}||u_n||^{1+\alpha } \quad \quad (n=0,1,\ldots ). \end{aligned}$$
(5.10)

From the recurrence inequality (5.10), we have for any \(\delta \in {\mathcal {A}}\),

$$\begin{aligned} \begin{aligned} ||u_0||&\le \delta ,\\ ||u_1||&\le \delta +{\widetilde{C}}||u_0||^{1+\alpha } \le \delta +\delta ^{1+\alpha }{\widetilde{C}}<\delta +\delta ^{1+\frac{\alpha }{2}},\\ ||u_2||&\le \delta +{\widetilde{C}}||u_1||^{1+\alpha }\\&\le \delta +\big (\delta +\delta ^{1+\frac{\alpha }{2}} \big )^{1+\alpha }{\widetilde{C}}\\&<\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{2}} \big )^{1+\alpha }\\&<\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}} \big )^{1+\alpha },\\ ||u_3||&\le \delta +{\widetilde{C}}||u_2||^{1+\alpha }\\&\le \delta +\big [\delta +\delta ^{1+\frac{\alpha }{2}} \big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }\big ]^{1+\alpha } {\widetilde{C}}\\&<\delta +\delta ^{1+\frac{\alpha }{2}}\big [1+\delta ^{\frac{\alpha }{2}} \big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }\big ]^{1+\alpha }\\&<\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}} \big )^{1+\alpha },\\&\cdots \\ ||u_n||&<\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}} \big )^{1+\alpha },\\&\cdots \end{aligned} \end{aligned}$$

It follows that \(||u_n||< M(\delta )=\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }\; (n=0,1,\ldots )\) with \(\delta \in {\mathcal {A}}\).

Note that

$$\begin{aligned} u_{n+2}-u_{n+1}=\Phi u_{n+1}-\Phi u_{n}. \end{aligned}$$

From Lemma 5.3, we deduce that

$$\begin{aligned} ||u_{n+2}-u_{n+1}||\le \kappa ||u_{n+1}-u_n|| \quad \, (n=0,1,\ldots ), \end{aligned}$$
(5.11)

where \(\kappa =(1+\alpha )M(\delta )^\alpha {\widetilde{C}}\).

Since \(\delta \in {\mathcal {A}}\), we have

$$\begin{aligned} \kappa =(1+\alpha )M(\delta )^\alpha {\widetilde{C}}=(1+\alpha )\big (\delta + \delta ^{1+\frac{\alpha }{2}} (1+\delta ^{\frac{\alpha }{4}})^{1+\alpha }\big )^\alpha {\widetilde{C}}<1. \end{aligned}$$

In view of \(\kappa <1\), the inequality (5.11) implies the convergence of \(\sum _{n=0}^{\infty }||u_{n+1}-u_n||\). Thus \(\{u_n\}\) is a Cauchy sequence in \({\mathcal {F}}(0,+\infty )\), that is, for any \(\epsilon >0\), we may choose a constant \(N(\epsilon )\) such that, for any \(m,n \ge N(\epsilon )\),

$$\begin{aligned} ||u_m-u_n||=\sup _{t> 0, x\in V}\frac{|u_m(t,x)-u_n(t,x)|}{\rho (t,x)}<\epsilon . \end{aligned}$$

Hence, for any \(t\in (0,+\infty )\) and \(x\in V\),

$$\begin{aligned} \left| \frac{u_m(t,x)}{\rho (t,x)}-\frac{u_n(t,x)}{\rho (t,x)} \right| <\epsilon \quad \, \text {for any }m,n \ge N(\epsilon ). \end{aligned}$$
(5.12)

On the other hand, the inequality (5.3) shows that \(\rho (t,x)\) is uniformly bounded with respect to t and x. Thus for any \(t\in (0,+\infty )\) and \(x\in V\), we have

$$\begin{aligned} \left| u_m(t,x)-u_n(t,x)\right| <B'\epsilon \quad \, \text {for any } m,n \ge N(\epsilon ), \end{aligned}$$
(5.13)

where \(B'=C_1c_1^{-1}\gamma ^{-\frac{m}{2}}\).

It follows from (5.13) that, for each fixed \((t,x)\), the sequence \(\{u_n(t,x)\}\) is a Cauchy sequence in \({\mathbb {R}}\). Thus we may define \(u(t,x)=\lim _{n\rightarrow +\infty }u_n(t,x)\). Taking the limit \(m\rightarrow +\infty \) in (5.12), we get for any \(n\ge N(\epsilon )\),

$$\begin{aligned} \left| \frac{u(t,x)}{\rho (t,x)}-\frac{u_n(t,x)}{\rho (t,x)}\right| \le \epsilon , \end{aligned}$$

from which we deduce that \(\frac{u_n}{\rho }\) converges uniformly to \(\frac{u}{\rho }\) in \((0,+\infty )\times V\). So \(u_n\) converges with respect to the norm \(||\cdot ||\).

In addition,

$$\begin{aligned} \sup _{t>0,x\in V}\frac{|u(t,x)|}{\rho (t,x)}\le \sup _{t>0,x\in V}\left( \left| \frac{u(t,x)}{\rho (t,x)}-\frac{u_n(t,x)}{\rho (t,x)} \right| +\left| \frac{u_n(t,x)}{\rho (t,x)}\right| \right) <\epsilon +M(\delta ), \end{aligned}$$

that is, \(u(t,x)\in {\mathcal {F}}(0,+\infty )\). Furthermore, letting \(\epsilon \rightarrow 0\), we have \(||u||\le M(\delta )\).

In conclusion, there exists a function u which satisfies \(0\le u(t,x)\le M(\delta )\rho (t,x)\), such that

$$\begin{aligned} ||u_n-u||\rightarrow 0 \quad \, ( n\rightarrow +\infty ). \end{aligned}$$
(5.14)

Utilizing (5.9) and (5.14) leads us to the assertion that u is a solution of (5.1) in \({\mathcal {F}}(0,+\infty )\) and satisfies \(0\le u(t,x)\le M(\delta )p(t+\gamma ,\nu ,x)\), where \(M(\delta )=\delta +\delta ^{1+\frac{\alpha }{2}}\big (1+\delta ^{\frac{\alpha }{4}}\big )^{1+\alpha }\).

(ii) We prove that the solution u(t, x) of (5.1) constructed above satisfies (3.1).

For any \(T>0\), since \(u\in {\mathcal {F}}(0,+\infty )\), we derive from (5.3) that u is bounded and continuous with respect to t in \((0,T]\times V\).

Taking a small positive number \(\varepsilon \), we put

$$\begin{aligned} (\Phi _\varepsilon u)(t,x)=\int _0^{t-\varepsilon } \sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha } ds, \end{aligned}$$

where \(0<\varepsilon < t\le T\) and \(x\in V\).

Obviously, \(\Phi _\varepsilon u\) tends to \(\Phi u\) in \([\sigma ,T]\times V\) as \(\varepsilon \rightarrow 0^+\), where \(\sigma \) is an arbitrary positive number with \(\sigma >\varepsilon \).

Case 1 If G is a finite connected graph, recalling an important property of the heat kernel:

$$\begin{aligned} p_t(t,x,y)=\Delta _x p(t,x,y)=\Delta _y p(t,x,y), \end{aligned}$$
(5.15)

we have

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial t}(\Phi _\varepsilon u)=&\sum _{y\in V}\mu (y)p(\varepsilon ,x,y)u(t-\varepsilon ,y)^{1+\alpha }\\&+\int _0^{t-\varepsilon } \sum _{y\in V} \mu (y)p_t(t-s,x,y)u(s,y)^{1+\alpha } ds\\ =&\sum _{y\in V}\mu (y)p(\varepsilon ,x,y)u(t-\varepsilon ,y)^{1+\alpha }\\&+\int _0^{t-\varepsilon } \sum _{y\in V} \mu (y)\Delta _x p(t-s,x,y)u(s,y)^{1+\alpha } ds\\ \equiv&I_1+I_2. \end{aligned} \end{aligned}$$
(5.16)

Since \(u^{1+\alpha }\) is bounded and continuous with respect to t in \((0,T]\times V\), it follows immediately from (4.4) that \(I_1\) tends to \(u(t,x)^{1+\alpha }\) in \([\sigma ,T]\times V\) as \(\varepsilon \rightarrow 0^+\). On the other hand, when \(\varepsilon \rightarrow 0^+\), \(I_2\) converges in \([\sigma ,T]\times V\) to a function

$$\begin{aligned} \varphi (t,x)=\int _0^{t} \sum _{y\in V}\mu (y) \Delta _x p(t-s,x,y)u(s,y)^{1+\alpha }ds. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0^+\) in (5.16), we obtain for any \((t,x)\in [\sigma ,T]\times V\),

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial t}(\Phi u)(t,x)&=u(t,x)^{1+\alpha }+\varphi (t,x)\\&=u(t,x)^{1+\alpha }+\Delta _x(\Phi u)(t,x). \end{aligned} \end{aligned}$$
(5.17)

Since \(u=u_0+\Phi u\) and \(\frac{\partial }{\partial t}u_0=\Delta u_0\), for any \((t,x)\in [\sigma ,T]\times V\), we conclude that

$$\begin{aligned} \begin{aligned} u_t&=\Delta u_0+\frac{\partial }{\partial t}\Phi u\\&=\Delta u_0+\Delta (\Phi u)+u^{1+\alpha }\\&=\Delta u+u^{1+\alpha }. \end{aligned} \end{aligned}$$
(5.18)

Because \(\sigma \) is arbitrary, (5.18) is true for all \((t,x)\in (0,T]\times V\).

Furthermore, we can prove that the initial-value condition is satisfied in the sense that

$$\begin{aligned} u(t,x)\rightarrow a(x) \quad \, (t\rightarrow 0^+), \end{aligned}$$

from which we can extend u(t, x) to \(t=0\) and set \(u(0,x)=a(x)\).

By the arbitrariness of T, we deduce that the solution u(t, x) of (5.1) constructed above is the required global solution of (3.1) in \({\mathcal {F}}[0,+\infty )\).

Case 2 If G is a locally finite connected graph, as before, the following assertions need to be verified.

  1. (a)
    $$\begin{aligned} \frac{\partial }{\partial t}\left( \sum _{y\in V}\mu (y)p(t-s,x,y)u(s,y)^{1+\alpha }\right) =\sum _{y\in V}\mu (y)p_t(t-s,x,y)u(s,y)^{1+\alpha }; \end{aligned}$$
  2. (b)
    $$\begin{aligned}&\int _0^{t} \sum _{y\in V}\mu (y) \Delta _x p(t-s,x,y)u(s,y)^{1+\alpha }ds \\&\quad =\Delta \left( \int _0^{t} \sum _{y\in V}\mu (y) p(t-s,x,y)u(s,y)^{1+\alpha }ds\right) . \end{aligned}$$

Noting that u is bounded and \(D_\mu <\infty \), we deduce that \(\Delta u^{1+\alpha }\) is bounded too.

Following (4.5) and (5.15), we find that

$$\begin{aligned} \begin{aligned} \sum _{y\in V}\mu (y)p_t(t-s,x,y)u(s,y)^{1+\alpha }&=\sum _{y\in V}\mu (y)\Delta _yp(t-s,x,y)u(s,y)^{1+\alpha }\\&=\sum _{y\in V}\mu (y)p(t-s,x,y)\Delta u(s,y)^{1+\alpha }, \end{aligned} \end{aligned}$$

thus \(\sum _{y\in V}\mu (y)p_t(t-s,x,y)u(s,y)^{1+\alpha }\) converges uniformly, from which we conclude that the assertion (a) is valid.

Similar to the proof of (4.5), the validity of assertion (b) can be proved due to the absolute convergence of sums.

In view of the assertions (a) and (b), for a locally finite graph we have the same conclusion as for a finite graph.

This completes the proof of Theorem 3.2. \(\square \)

6 Proof of Theorem 3.3

Proof of the assertion (i) of Theorem 3.3

We proceed as in the proof of Theorem 3.1, but instead of using the heat kernel estimate in Proposition 2.4, we use the one in Proposition 2.3.

Actually, by Proposition 2.3, for any \(t_0>0\), we have

$$\begin{aligned} p(t,\nu ,\nu )\ge \frac{C(n)}{V(\nu ,\sqrt{t})} \end{aligned}$$
(6.1)

for all \(t>t_0\). Taking \(t_0=\frac{1}{2}\) and combining (6.1) with the volume growth \(V(\nu ,r)\simeq r^m\; (m>0)\), which gives \(V(\nu ,\sqrt{t})\le c't^{\frac{m}{2}}\), we obtain

$$\begin{aligned} p(t,\nu ,\nu )\ge \frac{C(n)}{c'}t^{-\frac{m}{2}} \end{aligned}$$

for all \(t>\frac{1}{2}\).

Hence, for any \(t>\frac{1}{2}\), we have

$$\begin{aligned} \begin{aligned} J_0&= \sum _{x\in V}\mu (x)p(t,\nu ,x)a(x)\\&\ge \mu (\nu )a(\nu )p(t,\nu ,\nu )\\&\ge \overline{C'}t^{-\frac{m}{2}}, \end{aligned} \end{aligned}$$
(6.2)

where \(\overline{C'}=\frac{C(n)\mu (\nu )a(\nu )}{c'}>0\) and the definition of \(J_0\) is the same as in Lemma 4.1.

Let us now prove the assertion (i) of Theorem 3.3 by contradiction. Suppose that there exists a non-negative global solution of (3.1) in \([0,+\infty )\). Then, by Lemma 4.1, we have

$$\begin{aligned} J_0^{-\alpha }\ge \alpha t \end{aligned}$$
(6.3)

for any \(t>0\).

Combining (6.2) and (6.3), for any \(t>\frac{1}{2}\), we have

$$\begin{aligned} t^{\frac{m\alpha }{2}}\ge \alpha \overline{C'}^\alpha t. \end{aligned}$$
(6.4)

However, if \(0<m\alpha <2\), then \(t^{\frac{m\alpha }{2}-1}\rightarrow 0\) as \(t\rightarrow +\infty \), so the inequality (6.4) fails for all sufficiently large t. This is a contradiction and proves the assertion (i) of Theorem 3.3. \(\square \)

Proof of the assertion (ii) of Theorem 3.3

From Remark 2.1 we conclude that \(\textit{CDE}'(n,0)\) implies \(\textit{CDE}(n,0)\); thus the assertion of Theorem 3.2 implies the assertion (ii) of Theorem 3.3. This proves the assertion (ii) of Theorem 3.3. \(\square \)

7 Example and numerical experiments

In this section, we give an example to illustrate our result asserted in Theorem 3.3.

It is well known that the lattice \({\mathbb {Z}}^m\) admits uniform volume growth of positive degree m. Moreover, Bauer et al. [1] proved that \({\mathbb {Z}}^m\) satisfies \(\textit{CDE}(2m,0)\) and \(\textit{CDE}'(4.53m,0)\) for the normalized graph Laplacian, which, together with Theorem 3.3, enables us to deduce the existence and non-existence of global solutions to problem (3.1) in \({\mathbb {Z}}^m\) with the normalized graph Laplacian, i.e., we have the following result:

Proposition 7.1

Let G be \({\mathbb {Z}}^m\) with \(\mu (x)= m(x)\) and \(D_\omega <\infty \).

  1. (i)

    If \(0<m\alpha <2\), then there is no non-negative global solution of (3.1) in \([0,+\infty )\) for any bounded, non-negative and non-trivial initial value.

  2. (ii)

    If \(m\alpha >2\), then there exists a global solution of (3.1) in \({\mathcal {F}}[0,+\infty )\) for a sufficiently small initial value.

Fig. 1: \(\mathrm {C}_\mathrm {6}\)

For example, we consider the cycle \(\mathrm {C}_\mathrm {6}\) (as shown in Fig. 1), which satisfies \(\textit{CDE}'(4.53,0)\). Then problem (3.1) can be written as

$$\begin{aligned} \left\{ \begin{array}{lc} u_t(t,x_1)=\frac{1}{2}\big (u(t,x_6)+u(t,x_2)\big ) -u(t,x_1)+u(t,x_1)^{1+\alpha },\\ u_t(t,x_2)=\frac{1}{2}\big (u(t,x_1)+u(t,x_3)\big ) -u(t,x_2)+u(t,x_2)^{1+\alpha },\\ u_t(t,x_3)=\frac{1}{2}\big (u(t,x_2)+u(t,x_4)\big ) -u(t,x_3)+u(t,x_3)^{1+\alpha },\\ u_t(t,x_4)=\frac{1}{2}\big (u(t,x_3)+u(t,x_5)\big ) -u(t,x_4)+u(t,x_4)^{1+\alpha },\\ u_t(t,x_5)=\frac{1}{2}\big (u(t,x_4)+u(t,x_6)\big ) -u(t,x_5)+u(t,x_5)^{1+\alpha },\\ u_t(t,x_6)=\frac{1}{2}\big (u(t,x_5)+u(t,x_1)\big ) -u(t,x_6)+u(t,x_6)^{1+\alpha },\\ u(0,x_1)=a(x_1),\\ u(0,x_2)=a(x_2),\\ u(0,x_3)=a(x_3),\\ u(0,x_4)=a(x_4),\\ u(0,x_5)=a(x_5),\\ u(0,x_6)=a(x_6), \end{array} \right. \end{aligned}$$
(7.1)

where we take \(\omega _{xy}=1\) for any \(x\sim y\) and \(\mu (x)=m(x)=2\) for all \(x\in V\).

If we choose \(\alpha =1\), \(a(x_1)=1\), \(a(x_2)=2\), \(a(x_3)=3\), \(a(x_4)=4\), \(a(x_5)=5\), \(a(x_6)=6\), then it is easy to verify that these choices satisfy the condition for non-existence of global solutions to the equations (7.1). The numerical experiment result is shown in Fig. 2.

Fig. 2: Non-existence of global solutions to the equations (7.1)

On the other hand, if we choose \(\alpha =3\), \(a(x_1)=1\times 10^{-4}\), \(a(x_2)=2\times 10^{-4}\), \(a(x_3)=3\times 10^{-4}\), \(a(x_4)=4\times 10^{-4}\), \(a(x_5)=5\times 10^{-4}\), \(a(x_6)=6\times 10^{-4}\), then these choices satisfy the condition for existence of global solutions to the equations (7.1). The numerical experiment result is shown in Fig. 3.

Fig. 3: Existence of global solutions to the equations (7.1)
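The two experiments above can be reproduced with a short script. The following Python sketch (our own illustration, using `scipy.integrate.solve_ivp`) integrates the system (7.1) for a given \(\alpha \) and initial data, and stops as soon as the numerical solution exceeds a large threshold, which serves as a practical blow-up indicator; the threshold, time horizon and tolerances are arbitrary choices.

```python
# Sketch: numerical integration of the system (7.1) on the cycle C_6, where
# u_t(x_i) = (u(x_{i-1}) + u(x_{i+1}))/2 - u(x_i) + u(x_i)^(1+alpha).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u, alpha):
    # Normalized graph Laplacian on C_6 plus the nonlinearity.
    return 0.5 * (np.roll(u, 1) + np.roll(u, -1)) - u + u ** (1 + alpha)

def run(alpha, a, T=20.0, cap=1e6):
    # Terminate once max(u) reaches `cap` (a practical blow-up indicator).
    blow_up = lambda t, u, alpha: cap - np.max(u)
    blow_up.terminal = True
    return solve_ivp(rhs, (0.0, T), a, args=(alpha,), events=blow_up,
                     rtol=1e-8, atol=1e-10)

a_large = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # data of Fig. 2
a_small = 1e-4 * a_large                             # data of Fig. 3
print(run(1, a_large).t[-1])   # stops well before T: blow-up (cf. Fig. 2)
print(run(3, a_small).t[-1])   # reaches T: the solution stays small on [0, T] (cf. Fig. 3)
```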