1 Introduction

We consider the Cauchy problem for semilinear wave equations with scale-invariant damping, mass and general nonlinear memory terms

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u+\frac{\mu _1}{1+t}u_t+\frac{{\mu _2}^2}{(1+t)^{2}}u=g *|u|^p,&t>0,x \in {\mathbb {R}}^n,\\&(u,u_t)(0,x)=(u_0,u_1)(x),&x \in {\mathbb {R}}^n, \\ \end{aligned} \right. \end{aligned}$$
(1.1)

where \(u=u(t,x)\) is the unknown function, t is the time variable, \(x \in {\mathbb {R}}^n\), the parameters satisfy \(\mu _1 >0\), \(\mu _2\geqslant 0\), the exponent \(p>1\), and the convolution nonlinearity with respect to the time variable is defined by

$$\begin{aligned} (g*|u|^p)(t,x)\triangleq \int _{0}^{t} g(t-\eta )|u(\eta ,x)|^p \text {d} \eta , \end{aligned}$$

where the time-dependent memory kernel (also called the relaxation function) satisfies \(g=g(t)>0\).
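For readers who want to experiment numerically, the convolution above can be discretized directly. The following sketch is our own illustration, not part of the paper's argument; the function name and the choice of trapezoidal quadrature on a uniform time grid are assumptions made for the example.

```python
# Sketch (assumption: uniform time grid, trapezoidal rule) of the memory term
#   (g*|u|^p)(t, x) = int_0^t g(t - eta) |u(eta, x)|^p d(eta)
# at a fixed spatial point x, given samples of u in time.

def memory_term(g, u_hist, p, dt):
    """u_hist[k] approximates u(k*dt, x) at a fixed x; returns the
    trapezoidal approximation of (g*|u|^p)(n*dt, x), n = len(u_hist) - 1."""
    n = len(u_hist) - 1
    if n == 0:
        return 0.0
    vals = [g((n - k) * dt) * abs(u_hist[k]) ** p for k in range(n + 1)]
    # trapezoidal weights: 1/2 at the endpoints eta = 0 and eta = n*dt
    return dt * (0.5 * vals[0] + sum(vals[1:n]) + 0.5 * vals[n])
```

For the polynomially decaying kernel \(g(t)=(1+t)^{-\gamma }\) with \(\gamma =1/2\) and a constant history \(u\equiv 1\), the exact value at \(t=1\) is \(2(\sqrt{2}-1)\), which the quadrature reproduces to several digits.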

Equation (1.1) is called scale-invariant since the corresponding linear model is invariant under the \(hyperbolic \, scaling\) (see [18])

$$\begin{aligned} {\widetilde{u}}(t,x)=u(\lambda (1+t)-1,\lambda x),\quad \lambda >0. \end{aligned}$$

In physics, the equation

$$\begin{aligned} m{\dot{V}}(t)=-\int _{-\infty }^{t}k(t-\tau )V(\tau )\text {d}\tau + F_{R}(t) \end{aligned}$$

models the motion of a macroparticle in a solvent, where m is the mass, k is the memory kernel, and \(F_{R}(t)\) is a random force. A stationary solution V(t) can be regarded as a centered Gaussian process, whose autocorrelation function r(t) satisfies the deterministic delay differential equation

$$\begin{aligned} m{\dot{r}}(t)=-\int _{0}^{t}k(t-\tau )r(\tau )\text {d}\tau , \quad r(0)=I, \end{aligned}$$

where the right-hand side is a special memory term; more details can be found in [16].

The equation

$$\begin{aligned} \begin{aligned} u_{tt}-\Delta u=\frac{1}{\Gamma (1-\gamma )} \int _0^t (t-s)^{-\gamma } |u(s)|^p \text {d}s, t>0,x \in {\mathbb {R}}^n, \end{aligned} \end{aligned}$$
(1.2)

where \(0<\gamma <1,\ p>1\), \(\Gamma \) is the Euler–Gamma function, can be regarded as an approximation of the classical semilinear wave equation

$$\begin{aligned} u_{tt}-\Delta u=|u|^p, \end{aligned}$$
(1.3)

as it is easy to see that the limit

$$\begin{aligned} \lim _{\gamma \rightarrow 1} \frac{1}{\Gamma (1-\gamma )} s^{-\gamma }_+ = \delta (s) \end{aligned}$$

holds in the distribution sense. Equation (1.3) admits a critical index dividing small-data global existence from blowup; this threshold has been studied extensively and is known as the Strauss conjecture. That is, if \(1<p\leqslant p_S(n)\), the solution with nonnegative initial data blows up in finite time, whereas if \(p>p_S(n)\), there exists a global solution for small initial data. The critical index \(p=p_S(n)\) is the Strauss exponent, the positive root of the quadratic equation

$$\begin{aligned} (n-1)r^2 - (n+1) r -2 =0. \end{aligned}$$

See [7,8,9, 11, 12, 15, 20,21,22, 24,25,26] for more details.
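As a concrete illustration (ours, not the paper's), the Strauss exponent can be computed from the quadratic above by the usual root formula; for instance \(p_S(3)=1+\sqrt{2}\). The function name is our own choice, and we assume \(n\geqslant 2\) so that the equation is genuinely quadratic.

```python
# Sketch: p_S(n) as the positive root of (n-1) r^2 - (n+1) r - 2 = 0,
# for n >= 2 (for n = 1 the equation degenerates).
import math

def strauss_exponent(n):
    a, b, c = n - 1, -(n + 1), -2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```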

Let us review some known results on semilinear wave equations. Wakasugi [23] studied the following scale-invariant damped wave equation

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u+\frac{\mu _1}{1+t}u_t=|u|^p,&t>0,x \in {\mathbb {R}}^n,\\&(u,u_t)(0,x)=(u_0,u_1)(x),&x \in {\mathbb {R}}^n, \\ \end{aligned} \right. \end{aligned}$$
(1.4)

where \(p>1\), \(\mu _1>0\). A blowup result was established under the condition that either \(1<p\leqslant p_F(n)\) and \(\mu _1>1\), or \(1<p\leqslant p_F(n+\mu _1-1)\) and \(0<\mu _1\leqslant 1\), where \(p_F(n)=1+\frac{2}{n}\) is the Fujita exponent. In addition, the well-posedness and asymptotic behavior of solutions to (1.4) have been extensively studied in [4, 10, 14].

If Eq. (1.4) involves a mass term, it becomes

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u+\frac{\mu _1}{1+t}u_t+\frac{{\mu _2}^2}{(1+t)^{2}}u=|u|^p,&t>0,x \in {\mathbb {R}}^n,\\&(u,u_t)(0,x)=(u_0,u_1)(x),&x \in {\mathbb {R}}^n, \\ \end{aligned} \right. \end{aligned}$$
(1.5)

where \(p>1\), \(\mu _1\), \(\mu _2\) are nonnegative constants. Set

$$\begin{aligned} \delta =(\mu _1-1)^2-4\mu _2^2, \end{aligned}$$
(1.6)

which is useful to describe the interplay between damping and mass. On the one hand, for \(\delta \geqslant (n+1)^2\), the damping term is predominant and the shifted Fujita exponent \(p_F\big (n+\frac{\mu _1-1-\sqrt{\delta }}{2}\big )\) was shown to be the critical exponent [5]; in this case the equation behaves like a parabolic equation. On the other hand, if \(\delta \geqslant 0\) and \(1<p<p_S(n+\mu _1)\), or \(0\leqslant \delta <n^2\) and \(p=p_S(n+\mu _1)\), then the energy solution to (1.5) blows up in finite time, and an upper bound for the lifespan was established in [19]; thus, Eq. (1.5) seems to be wave-like, at least as far as blowup is concerned.
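To make the parabolic-like regime concrete, here is a small sketch (our own illustration; the function names are not from the paper) computing \(\delta \) from (1.6) and the shifted Fujita exponent, using \(p_F(k)=1+\frac 2k\).

```python
# Sketch: delta from (1.6) and the shifted Fujita exponent
#   p_F(n + (mu1 - 1 - sqrt(delta))/2),
# which is critical in the parabolic-like regime delta >= (n+1)^2 [5].
import math

def delta(mu1, mu2):
    return (mu1 - 1) ** 2 - 4 * mu2 ** 2

def shifted_fujita(n, mu1, mu2):
    d = delta(mu1, mu2)
    assert d >= (n + 1) ** 2, "critical only in the parabolic-like regime"
    return 1 + 2 / (n + (mu1 - 1 - math.sqrt(d)) / 2)
```

For pure damping (\(\mu _2=0\), \(\mu _1\geqslant n+2\)) we have \(\sqrt{\delta }=\mu _1-1\), the shift vanishes, and one recovers the classical Fujita exponent \(p_F(n)\).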

Let us turn to semilinear wave equations and systems with nonlinear memory terms. The simplest case is the undamped equation

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u=g *|u|^p,&t>0,x \in {\mathbb {R}}^n,\\&(u,u_t)(0,x)=(u_0,u_1)(x),&x \in {\mathbb {R}}^n. \\ \end{aligned} \right. \end{aligned}$$
(1.7)

For the special memory term as in (1.2), Chen and Palmieri [2] found a generalized exponent \(p_{0,S}(n,\gamma )\), which is the positive root of the following equation

$$\begin{aligned} (n-1)r^2-(n+3-2\gamma )r-2=0. \end{aligned}$$

Moreover, they proved that in the subcritical case (i.e., \(1 < p< p_{0,S}(n,\gamma )\) for \(n\geqslant 2\) and \(p > 1\) for \(n=1\)) and in the critical case (i.e., \(p=p_{0,S}(n,\gamma )\) and \(n\geqslant 2\)), the energy solution to (1.7) blows up in finite time. Blowup conditions for the simplest weakly coupled system with general nonlinear memory terms

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u=g_1 *|v|^p, \ t>0,\\&v_{tt}-\Delta v=g_2 *|u|^q, \ t>0,\\&(u,u_t,v,v_t)(0,x)=(u_0,u_1,v_0,v_1)(x), \end{aligned} \right. \end{aligned}$$
(1.8)

with \(p,q>1\) were obtained by Chen in [1]. He showed that if the kernels \(g_k~(k=1,2)\) of the general memory terms satisfy two different growth conditions ([1], Theorem 2.2), then the energy solution (u, v) to Eq. (1.8) must blow up in finite time. It should be pointed out that the technique of constructing G(t) in [1] enables us to deduce the first lower bound estimate for U(t); see (4.22) in this paper.
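As a quick sanity check (ours, not from [2]) on the exponent \(p_{0,S}(n,\gamma )\) above: solving its defining quadratic numerically shows that it exceeds the Strauss exponent for \(0<\gamma <1\) and decreases to \(p_S(n)\) as \(\gamma \rightarrow 1\), consistent with the limit (1.2)\(\rightarrow \)(1.3).

```python
# Sketch: p_{0,S}(n, gamma) as the positive root of
#   (n-1) r^2 - (n+3-2*gamma) r - 2 = 0,  n >= 2, 0 < gamma < 1.
import math

def generalized_strauss(n, gamma):
    a, b, c = n - 1, -(n + 3 - 2 * gamma), -2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```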

To our knowledge, there are so far few results on blowup for semilinear wave equations with scale-invariant damping, mass and general nonlinear memory terms; this is the problem we study in this paper.

In order to investigate blowup, a local (in time) existence result (Theorem 2.1) is required as a prerequisite. We will employ Banach's fixed point theorem to prove Theorem 2.1. In this context, Palmieri's decay estimates for the solution to the corresponding linear homogeneous equations are necessary to construct the working space (see Lemmas 3.2, 3.3). However, the sign of the interplay quantity \(\delta \) varies with the parameters \(\mu _1\) and \(\mu _2\); thus, we have to distinguish cases according to the value of \(\delta \).

An efficient way to study blowup for semilinear wave equations is the test function method combined with Kato-type differential inequalities; see [21, 24]. However, due to the effect of the memory terms, we cannot obtain sharp lower estimates on the blowup functional (cf. (2.7) in [24]). Inspired by [2] and [24], we will employ the iteration argument together with test functions (see Chen and Reissig [3], Palmieri and Tu [19] and references therein) to study blowup. As Eq. (1.1) is more complicated than (1.7), the test function used in [1] does not work here. Instead, we select

$$\begin{aligned} \Phi :=\lambda (t)\varphi (x) \end{aligned}$$

as the test function, where

$$\begin{aligned} \varphi := {\left\{ \begin{array}{ll} \int _{{\mathbb {S}}^{n-1}}e^{x\cdot \omega }\text {d} \omega , &{}n\geqslant 2,\\ e^{x}+e^{-x},&{} n=1 \end{array}\right. } \end{aligned}$$

where \( {\mathbb {S}}^{n-1}\) is the \((n-1)\)-dimensional unit sphere, and

$$\begin{aligned} \lambda (t):=(1+t)^{\frac{\mu _1+1}{2}}K_{\frac{\sqrt{\delta }}{2}}(t+1) \end{aligned}$$

where \(K_{\frac{\sqrt{\delta }}{2}}(t+1)\) is the modified Bessel function of the second kind (\(t\geqslant 0\)). Since the memory terms have a general structure, the classical Kato-type lemma ([13]) does not work well for our model; however, thanks to Chen's idea of constructing the function G(t) ([1]), the iteration procedure can still be carried out.

This paper is organized as follows. In Sect. 2, we state our main results, including the local well-posedness (Theorem 2.1) and the blowup result (Theorem 2.2). The local well-posedness result is proved in Sect. 3, after some preliminaries for the corresponding linear homogeneous equation are introduced. Finally, Theorem 2.2 is proved in Sect. 4 by combining the test function method with the iteration argument.

\(\textbf{Notation}\). \(f\lesssim g\) means that there exists a constant \(C>0\) such that \(f\leqslant Cg\); \(f\gtrsim g\) means that there exists a constant \(C>0\) such that \(f\geqslant Cg\); and \(f\sim g\) means \(g\lesssim f \lesssim g \). \({\mathbb {N}}_0={\mathbb {N}}\cup \{0\}\) denotes the set of nonnegative integers. All function spaces are over \({\mathbb {R}}^n\), and \({\mathbb {R}}^n\) is dropped from the notation when there is no ambiguity.

2 Main Results

In this section, we will give the main results of the paper. The local well-posedness result to (1.1) is

Theorem 2.1

(Local existence) Suppose that \((u_0,u_1) \in H^1 \times L^2\) are compactly supported in a ball \(B_R(0)=\{x:|x|\leqslant R\}\) for some radius \(R>0\). Suppose further that

$$\begin{aligned} 1<p\leqslant \frac{n}{n-2},~n\geqslant 3;~1<p<\infty ,~n=1,~2, \end{aligned}$$
(2.1)

\(\mu _1 >0\), \(\mu _2\geqslant 0\) such that

$$\begin{aligned} \delta =(\mu _1-1)^2-4\mu _2^2>0 \end{aligned}$$

and the nonnegative relaxation function \(g\in L^1_{loc}([0,\infty ))\). Then, there exist a maximal existence time \(T_m \in (0,\infty ]\) and a unique (mild) solution

$$\begin{aligned} u \in C([0,T_m),H^1)\cap C^1([0,T_m),L^2) \end{aligned}$$

to (1.1) satisfying \(\textrm{supp}\, u(t,\cdot )\subset B_{R+t}\) for all \(t \in [0,T_m).\)

Our blowup result is concerned with the so-called energy solution, which is defined by

Definition 2.1

Let \((u_0,u_1) \in H^1 \times L^2\). We say that u is an energy solution to (1.1) on [0, T) if

$$\begin{aligned} u\in C([0,T),H^1)\cap C^1([0,T),L^2)\quad \text {with}\quad g*|u|^p \in L_{loc}^1([0,T)\times {\mathbb {R}}^n) \end{aligned}$$
(2.2)

satisfies \(u(0,\cdot )=u_0\in H^1(R^n)\) and the integral relation

$$\begin{aligned} \begin{aligned}&\int _{R^n}u_t(t,x)\phi (t,x)\text {d}x-\int _{R^n}u_t(0,x)\phi (0,x)\text {d}x\\&\qquad +\int _0^t\int _{R^n}(-u_s(s,x)\phi _s(s,x)+\nabla u(s,x)\nabla \phi (s,x))\text {d}x \text {d}s\\&\qquad +\int _0^t\int _{R^n}\bigg (\frac{\mu _1}{1+s}u_s(s,x)+\frac{{\mu _2}^2}{(1+s)^{2}}u(s,x)\bigg )\phi (s,x)\text {d}x \text {d}s\\&\quad =\int _0^t\int _{R^n}(g*|u|^p)(s,x)\phi (s,x)\text {d}x \text {d}s \end{aligned} \end{aligned}$$
(2.3)

for any test function \(\phi \in C_0^{\infty }([0,\infty )\times R^n)\) and any \(t\in (0,T)\).

The main result of this paper concerns blowup and is stated as

Theorem 2.2

(Blowup) Let p satisfy (2.1), i.e., \(1<p<\infty \) for \(n=1,~2\) and \(1<p\leqslant \frac{n}{n-2}\) for \(n\geqslant 3\), and let \(\mu _1 >0\), \(\mu _2\geqslant 0\) be such that \(\delta > 0\). Suppose that the nonnegative relaxation function satisfies \(g\in L^1_{loc}([0,\infty ))\cap C^1([0,\infty ))\). Suppose further that

$$\begin{aligned} 1<p<p_S(n+\mu _1) \end{aligned}$$
(2.4)

and \((u_0,u_1) \in H^1(R^n) \times L^2(R^n)\) are nonnegative, compactly supported in the ball \(B_R(0)\), with \(u_1\) not identically zero, and satisfy

$$\begin{aligned} u_0(x)\geqslant 0\quad \text {and}\quad u_1(x)+\frac{\mu _1-1-\sqrt{\delta }}{2}u_0(x)\geqslant 0. \end{aligned}$$
(2.5)

Let u be the local (in time) energy solution to (1.1) on [0, T) given by Theorem 2.1, where T is the lifespan. Then,

$$\begin{aligned} supp~u \subset \{(t,x)\in [0,T)\times R^n ~|~|x|\leqslant t+R\} \end{aligned}$$

and the solution u blows up in finite time.

Example 2.1

Let us consider Eq. (1.1) with nonlinear memory terms of polynomial decay, i.e., \(g(t)=(1+t)^{-\gamma }\), namely

$$\begin{aligned} \left\{ \begin{aligned}&u_{tt}-\Delta u+\frac{\mu _1}{1+t}u_t+\frac{{\mu _2}^2}{(1+t)^{2}}u=\int _{0}^{t} \frac{|u(\eta ,x)|^p}{(1+t-\eta )^{\gamma }} \text {d} \eta ,&t>0,x \in {\mathbb {R}}^n,\\&(u,u_t)(0,x)=(u_0,u_1)(x),&x \in {\mathbb {R}}^n. \end{aligned} \right. \qquad \end{aligned}$$
(2.6)

Under the assumptions in Theorem 2.2, we know that the solution u to (2.6) blows up in finite time.

3 Proof of Theorem 2.1

3.1 Preliminaries

The solution of the linear homogeneous equation

$$\begin{aligned} \left\{ \begin{aligned}&w_{tt}-\Delta w+\frac{\mu _1}{1+t}w_t+\frac{{\mu _2}^2}{(1+t)^{2}}w=0,&t>0,x \in {\mathbb {R}}^n,\\&(w,w_t)(s,x)=(w_0,w_1)(x),&x \in {\mathbb {R}}^n, \\ \end{aligned} \right. \end{aligned}$$
(3.1)

is given by

$$\begin{aligned} w(x,t)=E_0(t,s,x)*_{(x)} w_0(x)+E_1(t,s,x)*_{(x)} w_1(x), \end{aligned}$$

where \(\mu _1 >0\), \(\mu _2\geqslant 0\), \(E_0(t,s,x)\) and \(E_1(t,s,x)\) are the distributional solutions with data \((w_0,~w_1)=(\delta _0,0)\) and \((0,\delta _0)\), respectively, \(\delta _0\) is the Dirac distribution in the x variable and \(*_{(x)}\) denotes the convolution with respect to the x variable. In addition, we require the initial data to belong to the classical energy space \(H^1\) with additional \(L^1\) regularity, namely

$$\begin{aligned} (w_0,w_1) \in (H^1 \cap L^1)\times (L^2\cap L^1). \end{aligned}$$

For any \(k \in [0,1]\), denote

$$\begin{aligned} D^k({\mathbb {R}}^n):=(H^k\cap L^1)\times (L^2\cap L^1),\ D({\mathbb {R}}^n):= D^1 ({\mathbb {R}}^n). \end{aligned}$$

The following decay results are useful for proving Theorem 2.1; one may consult [17] for more details. If the data are taken at \(s=0\), one has

Lemma 3.1

( [17]) Let \(\mu _1 >0\), \(\mu _2\geqslant 0\) such that \(\delta =(\mu _1-1)^2-4\mu _2^2>0\). Let us consider \((w_0,w_1) \in D(R^n)\). Then for all \(k \in [0,1]\), the energy solution w to (3.1) satisfies

$$\begin{aligned} \Vert w\Vert _{{\dot{H}}^k}\lesssim \Vert (w_0,w_1)\Vert _{D^k}\times \cdots \end{aligned}$$
(3.2)

where \(\ell (t)=1+(\ln (1+t))^{\frac{1}{2}}\). Moreover, \(||w_t(t,\cdot )||_{L^2(R^n)}\) and \(||\nabla w(t,\cdot )||_{L^2(R^n)}\) satisfy the same estimates as (3.2) after taking \(k=1\).

If the data are taken at initial time \(s\geqslant 0\) and the first datum is zero, we have

Lemma 3.2

( [17])  Let \(\mu _1 >0\), \(\mu _2\geqslant 0\) such that \(\delta =(\mu _1-1)^2-4\mu _2^2>0\). Let us assume \(w_0=0\) and \(w_1\in L^2\cap L^1\). Then, for \(t\geqslant s\) and \(k \in [0,1]\), the energy solution w to (3.1) satisfies

$$\begin{aligned} \Vert w\Vert _{{\dot{H}}^k}\lesssim \big (\Vert w_1\Vert _{L^1} +(1+s)^{\frac{n}{2}} \Vert w_1\Vert _{L^2} \big ) (1+s)^{\frac{1+\mu _1}{2}-\frac{\sqrt{\delta }}{2}}\times \cdots \end{aligned}$$
(3.3)

where \({\tilde{\ell }}(t,s)=1+(\ln (\frac{1+t}{1+s}))^{\frac{1}{2}}\). Moreover, \(||w_t(t,\cdot )||_{L^2(R^n)}\) and \(||\nabla w(t,\cdot )||_{L^2(R^n)}\) satisfy the same estimates as (3.3) after taking \(k=1\).

Lemma 3.3

(Gagliardo–Nirenberg Inequality, [6]) Let \(p,q,r~(1\leqslant p,q,r\leqslant \infty )\) and \(\sigma \in [0,1] \) satisfy

$$\begin{aligned} \frac{1}{p}=\sigma (\frac{1}{r}-\frac{1}{n})+(1-\sigma )\frac{1}{q} \end{aligned}$$
(3.4)

except for \(p=\infty \) or \(r=n\) when \(n\geqslant 2.\) Then for some constant \(C=C(p,q,r,n)>0,\) the inequality

$$\begin{aligned} ||h||_{L^p}\leqslant C ||h||_{L^q}^{1-\sigma }||\nabla h||_{L^r}^{\sigma } \end{aligned}$$
(3.5)

holds for any \(h \in C_0^1\).
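The interpolation parameter can be read off from (3.4). The helper below is our own illustration (not from [6]); it solves (3.4) for \(\sigma \) in exact rational arithmetic and checks admissibility.

```python
# Sketch (our helper): solve the balance condition (3.4),
#   1/p = sigma*(1/r - 1/n) + (1 - sigma)*(1/q),
# for sigma, and check 0 <= sigma <= 1.
from fractions import Fraction as F

def gn_sigma(p, q, r, n):
    one = F(1, 1)
    sigma = (one / q - one / p) / (one / q - one / r + one / n)
    assert 0 <= sigma <= 1, "inadmissible exponents"
    return sigma
```

For the choice used later in (3.18), estimating \(\Vert w\Vert _{L^{2r}}\) with \(q=2\) and the gradient in \(L^2\), this gives \(\sigma =\frac{n}{2}\big (1-\frac{1}{r}\big )\), matching the exponents there; \(\sigma \) reaches 1 exactly at \(r=\frac{n}{n-2}\) for \(n\geqslant 3\).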

We now prove Theorem 2.1. Define

$$\begin{aligned} \begin{aligned} X(T):=\bigg \{u&\in C([0,T],H^1(R^n))\cap C^1([0,T],L^2(R^n)) \\&\text {such that}~~~\textrm{supp}~u(t,\cdot ) \subset B_{R+t},~\forall ~t \in [0,T]\bigg \} \end{aligned} \end{aligned}$$
(3.6)

with

$$\begin{aligned} ||u||_{X(T)}\triangleq \underset{t\in [0,T]}{\max }M[u](t), \end{aligned}$$

where

$$\begin{aligned} M[u](t)\triangleq ||u||_{L^2(R^n)}+||u_t||_{L^2(R^n)}+ ||\nabla u||_{L^2(R^n)}. \end{aligned}$$

It is easy to see that \(\big (X(T),||\cdot ||_{X(T)}\big )\) is a Banach space. For \(T,K>0\), define

$$\begin{aligned} \begin{aligned} X(T,K):=\bigg \{u\in C([0,T],H^1)\cap C^1([0,T],L^2)~\bigg |~||u||_{X(T)}\leqslant K\bigg \}. \end{aligned} \end{aligned}$$
(3.7)

By Duhamel’s principle, let us define

$$\begin{aligned} N:~u \in X(T)\mapsto Nu=u^l+u^n, \end{aligned}$$

where

$$\begin{aligned}{} & {} u^l=E_0(t,0,x)*_{(x)} u_0(x)+E_1(t,0,x)*_{(x)} u_1(x), \\{} & {} u^n=\int _0^t E_1(t-\tau ,0,x)*_{(x)} (g *|u|^p)(\tau ,x) \text {d} \tau \end{aligned}$$

and \(Nu=u^l+u^n\) is the unique solution to

$$\begin{aligned} \left\{ \begin{aligned}&w_{tt}-\Delta w+\frac{\mu _1}{1+t}w_{t}+\frac{{\mu _2}^2}{(1+t)^{2}}w=g*|u|^p,&t>0,x \in {\mathbb {R}}^n,\\&w(0,x)=u_0(x), w_{t}(0,x)=u_1(x),&x \in {\mathbb {R}}^n. \\ \end{aligned} \right. \end{aligned}$$
(3.8)

Our goal is to prove that, for suitable choices of \(T,\ K\), the mapping N is contractive from X(T, K) into itself, so that the local (in time) solution of (1.1) is the fixed point of N. For this purpose, it suffices to prove

$$\begin{aligned}{} & {} \Vert Nu\Vert _{X(T)}\leqslant C_{0,T}+C_{T}\Vert u\Vert _{X(T)}^p, \end{aligned}$$
(3.9)
$$\begin{aligned}{} & {} \begin{aligned} \Vert Nu-N{\overline{u}}\Vert _{X(T)}\leqslant&C_{T}^{'} \Vert u-{\overline{u}}\Vert _{X(T)} \big (\Vert u\Vert _{X(T)}^{p-1}+\Vert {\overline{u}}\Vert _{X(T)}^{p-1}\big ), \end{aligned} \end{aligned}$$
(3.10)

where \(C_{0,T}\) depends on the norms of the initial data \(u_0,u_1\) and remains bounded as \(T\rightarrow 0\), while \(C_{T},\, C_{T}^{'}\rightarrow 0\) as \(T\rightarrow 0.\) In fact, by (3.9), for sufficiently large K, we can choose T small enough such that both terms on the right-hand side of (3.9) are less than \(\frac{K}{2}\), so N maps X(T, K) into itself. That N is contractive for appropriately small T follows easily from (3.10), since \(C_T'\rightarrow 0\) as \(T\rightarrow 0\). Thus, the local (in time) existence and uniqueness of the solution in X(T) are guaranteed by Banach's fixed point theorem.

Since \(Nu = u^l + u^n\), in order to prove (3.9) and (3.10) it suffices to estimate \(u^l\) and \(u^n\), respectively.

3.2 Proof of (3.9)

Taking \(k=0,1\) in Lemma 3.1 for \(w=u^l\), we have

$$\begin{aligned} \Vert u^l(t,\cdot )\Vert _{L^2(R^n)}\leqslant C \Vert (u_0,u_1)\Vert _{D^0(R^n)}\times \cdots \end{aligned}$$
(3.11)
$$\begin{aligned} \Vert u^l_t(t,\cdot )\Vert _{L^2(R^n)},\Vert \nabla u^l(t,\cdot )\Vert _{L^2(R^n)}\leqslant C\Vert (u_0,u_1)\Vert _{D^1(R^n)}\times \cdots \end{aligned}$$
(3.12)

where \(\ell (t)=1+(\ln (1+t))^{\frac{1}{2}}\).

The remainder of the proof is divided into five cases according to the value of \(\delta \); see Fig. 1 at the end of the paper.

Fig. 1  The values of \(\delta \)

\(\mathbf {Case~1}\)   \(\frac{1+\sqrt{\delta }-n}{2}>1>0\), namely \(\delta>(n+1)^2>(n-1)^2\).

By (3.11a) and (3.12a), we get

$$\begin{aligned}{} & {} \Vert u^l\Vert _{L^2}\leqslant C (1+t)^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}\Vert (u_0,u_1)\Vert _{D^0}~, \end{aligned}$$
(3.13)
$$\begin{aligned}{} & {} \begin{aligned} \Vert u_t^l\Vert _{L^2},\Vert \nabla u^l\Vert _{L^2}&\leqslant C (1+t)^{-1-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} \Vert (u_0,u_1)\Vert _{D} \\&\leqslant C (1+t)^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}\Vert (u_0,u_1)\Vert _{D},\\ \end{aligned} \end{aligned}$$
(3.14)

then

$$\begin{aligned} \begin{aligned} M[u^l](t)&\leqslant C(1+t)^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} (\Vert (u_0,u_1)\Vert _{D^0}+\Vert (u_0,u_1)\Vert _{D}). \end{aligned} \end{aligned}$$
(3.15)

Thus,

$$\begin{aligned} \begin{aligned} \Vert u^l\Vert _{X(T)}&\leqslant C(1+T)^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} (\Vert (u_0,u_1)\Vert _{D^0}+\Vert (u_0,u_1)\Vert _{D})\leqslant C_{0,T}, \end{aligned} \end{aligned}$$
(3.16)

where \(C_{0,T}\) depends on the norms of the initial data and remains bounded as \(T\rightarrow 0\). Applying the same argument, we can deal with

Case 2: \(0<\frac{1+\sqrt{\delta }-n}{2}<1\), i.e., \((n-1)^2<\delta <(n+1)^2\) (by (3.11a) and (3.12c));

Case 3: \(\frac{1+\sqrt{\delta }-n}{2}<0<1\), i.e., \(\delta<(n-1)^2 <(n+1)^2\) (by (3.11c) and (3.12c));

Case 4: \(0=\frac{1+\sqrt{\delta }-n}{2}<1\), i.e., \((n-1)^2=\delta <(n+1)^2\) (by (3.11b) and (3.12c));

Case 5: \(0<\frac{1+\sqrt{\delta }-n}{2}=1\), i.e., \((n-1)^2<\delta =(n+1)^2\) (by (3.11a) and (3.12b)).
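For bookkeeping, the case distinction above can be expressed as a tiny helper. This is our own illustration, not the paper's; the boundary comparisons rely on exactly representable square roots (such as \(\sqrt{16}=4\)), so it is meant for illustrative inputs only.

```python
# Sketch: which of Cases 1-5 a pair (delta, n) falls into, via
# s = (1 + sqrt(delta) - n)/2 compared with 0 and 1, i.e. delta compared
# with (n-1)^2 and (n+1)^2. Assumes delta > 0.
import math

def classify_case(delta, n):
    assert delta > 0
    s = (1 + math.sqrt(delta) - n) / 2
    if s > 1:
        return 1      # delta > (n+1)^2
    if s == 1:
        return 5      # delta = (n+1)^2
    if 0 < s < 1:
        return 2      # (n-1)^2 < delta < (n+1)^2
    if s == 0:
        return 4      # delta = (n-1)^2
    return 3          # delta < (n-1)^2
```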

Hence,

$$\begin{aligned} \begin{aligned} \Vert u^l\Vert _{X(T)}\leqslant C_{0,T}. \end{aligned} \end{aligned}$$
(3.17)

Next, we estimate \(u^n\). Using the classical Gagliardo–Nirenberg inequality (Lemma 3.3), one gets

$$\begin{aligned} \begin{aligned} \Vert w(\eta ,\cdot )\Vert ^r_{L^{2r}(R^n)}\leqslant C\Vert w(\eta ,\cdot )\Vert _{L^2(R^n)}^{(1-\frac{n}{2}(1-\frac{1}{r}))r} \Vert \nabla w(\eta ,\cdot )\Vert _{L^2(R^n)}^{\frac{n}{2}(1-\frac{1}{r})r}\leqslant C(M[w](\eta ))^r \end{aligned}\nonumber \\ \end{aligned}$$
(3.18)

for all \(\eta \in [0,T]\), where \(r>1\) if \(n=1,2\) and \(1<r\leqslant \frac{n}{n-2}\) if \(n\geqslant 3\).

Taking \(s=0\) and \(k=0,1\) in Lemma 3.2, respectively, we obtain

$$\begin{aligned} \Vert w(t,\cdot )\Vert _{L^2(R^n)}\leqslant C (\Vert w_1\Vert _{L^1(R^n)}+\Vert w_1\Vert _{L^2(R^n)})\times \cdots \end{aligned}$$
(3.19)

and

$$\begin{aligned} \Vert w_t(t,\cdot )\Vert _{L^2(R^n)},\Vert \nabla w(t,\cdot )\Vert _{L^2(R^n)}\leqslant C \left( \Vert w_1\Vert _{L^1(R^n)}+\Vert w_1\Vert _{L^2(R^n)}\right) \times \cdots \end{aligned}$$
(3.20)

\(\mathbf {Case~1}\) \(\frac{1+\sqrt{\delta }-n}{2}>1>0\), namely \(\delta>(n+1)^2>(n-1)^2\).

By Hölder's inequality and the compact support of u, we have

$$\begin{aligned} \Vert g *|u|^p\Vert _{L^1}\leqslant (R+\tau )^{\frac{n}{2}} ||g *|u|^p||_{L^2}. \end{aligned}$$

Applying the Gagliardo–Nirenberg inequality (G-N Ineq for short) yields

$$\begin{aligned} \Vert |u(\eta ,\cdot )|^p\Vert _{L^2}&=\Vert u(\eta ,\cdot )\Vert ^p_{L^{2p}}\overset{G-N\ \text {Ineq}}{\leqslant } C\Vert u(\eta ,\cdot )\Vert _{L^2(R^n)}^{(1-\frac{n}{2}(1-\frac{1}{p}))p} \Vert \nabla u(\eta ,\cdot )\Vert _{L^2(R^n)}^{\frac{n}{2}(1-\frac{1}{p})p}\nonumber \\&\leqslant C(M[u](\eta ))^p \leqslant C\Vert u\Vert _{X(T)}^p, \end{aligned}$$
(3.21)

where \(1<p\leqslant \frac{n}{n-2}\) for \(n\geqslant 3\) and \(1<p<\infty \) for \(n=1,~2\). In view of (3.19a), by Duhamel's principle, we get

$$\begin{aligned} \begin{aligned} \Vert u^n\Vert _{L^2}&= \Vert \int _0^t E_1(t-\tau ,0,x)*_{(x)} (g *|u|^p)(\tau ,x) \text {d} \tau \Vert _{L^2}\\&\leqslant C\int _0^t (1+t-\tau )^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} (\Vert g *|u|^p\Vert _{L^1}+\Vert g *|u|^p\Vert _{L^2}) \text {d}\tau \\&\leqslant C\int _0^t (1+t-\tau )^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} (1+(R+\tau )^{\frac{n}{2}})\Vert g *|u|^p\Vert _{L^2} \text {d}\tau \\&\leqslant C\int _0^t (1+t-\tau )^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}(R+\tau )^{\frac{n}{2}} \\&\quad \quad \times \int _{0}^{\tau }g(\tau -\eta )\Vert |u(\eta ,\cdot )|^p\Vert _{L^2} \text {d}\eta \text {d}\tau \\&\leqslant C\int _0^t (1+t)^{\frac{1+\sqrt{\delta }+n}{2}} \Vert u\Vert _{X(T)}^p \text {d}\tau \leqslant CT(1+T)^{\frac{1+\sqrt{\delta }+n}{2}}\Vert u\Vert _{X(T)}^p, \end{aligned} \end{aligned}$$
(3.22)

where we used

$$\begin{aligned} \bigg | \int _{0}^{\tau } g(\tau -\eta ) \text {d}\eta \bigg |=\int _{0}^{\tau } g(\eta )\text {d}\eta<\int _{0}^{T} g(\eta )\text {d}\eta <\infty , \end{aligned}$$
(3.23)

due to \(g(t) \in L_{loc}^1([0,\infty ))\).

Denote \(\nabla ^j\partial _t^m u^n=\nabla ^j\partial _t^m\int _0^t E_1(t-\tau ,0,x)*_{(x)}(g *|u|^p)(\tau ,x) \text {d} \tau \), where \(j+m=1\) and \(j,~m\in {\mathbb {N}}_0\). It is easy to see that for \(j=0,~m=1\), \(\nabla ^j\partial _t^m u^n=u_t^n\), and for \(j=1,~m=0\), \(\nabla ^j\partial _t^m u^n=\nabla u^n\).

By (3.20a), (3.21) and (3.23), one has

$$\begin{aligned} \begin{aligned}&\Vert \nabla ^j\partial _t^m u^n\Vert _{L^2(R^n)}=\Vert \nabla ^j\partial _t^m\int _0^t E_1(t-\tau ,0,x)*_{(x)} (g *|u|^p)(\tau ,x) \text {d} \tau \Vert _{L^2}\\&\quad \leqslant C \int _0^t(1+t-\tau )^{-1-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}(1+(R+\tau )^{\frac{n}{2}})\Vert g *|u|^p\Vert _{L^2} \text {d}\tau \\&\quad \leqslant C \int _0^t(1+t-\tau )^{-1-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}\\&\qquad \cdot (R+\tau )^{\frac{n}{2}} \int _{0}^{\tau } g(\tau -\eta ) \Vert |u(\eta ,\cdot )|^p\Vert _{L^2} \text {d}\eta \, \text {d}\tau \\&\quad \leqslant C t (1+t)^{\frac{-1+\sqrt{\delta }}{2}}\Vert u\Vert ^{p}_{X(T)} \leqslant CT(1+T)^{\frac{-1+\sqrt{\delta }}{2}}\Vert u\Vert ^{p}_{X(T)}, \end{aligned} \end{aligned}$$
(3.24)

where

$$\begin{aligned} \begin{aligned} (1+t-\tau )^{-1-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}}<(1+t-\tau )^{\frac{1+\sqrt{\delta }-n}{2}-1}<(1+t)^{\frac{1+\sqrt{\delta }-n}{2}-1}=(1+t)^{\frac{-1-n+\sqrt{\delta }}{2}} \end{aligned} \end{aligned}$$

has been used, since \(\delta >(n+1)^2\) in Case 1.

Combining (3.22) and (3.24), we derive

$$\begin{aligned} \begin{aligned} \Vert u^n\Vert _{X(T)}&\leqslant CT(1+T)^{\frac{1+\sqrt{\delta }+n}{2}}\Vert u\Vert ^{p}_{X(T)}.\\ \end{aligned} \end{aligned}$$
(3.25)

Moreover, we can use the same argument to deal with \(u^n\) in the other Cases 2–5. Hence,

$$\begin{aligned} \begin{aligned} \Vert u^n\Vert _{X(T)} \leqslant C_{T}\Vert u\Vert ^{p}_{X(T)}, \end{aligned} \end{aligned}$$
(3.26)

where \(C_{T}\rightarrow 0\) as \(T\rightarrow 0.\)

The desired inequality (3.9) can be derived from (3.17) and (3.26).

3.3 Proof of (3.10)

Suppose that \(u,\ {\overline{u}}\in X(T)\); then

$$\begin{aligned} \Vert Nu-N{\overline{u}}\Vert _{X(T)}=\bigg | \bigg |\int _0^t E_1(t-\tau ,0,x)*_{(x)} I_p(\tau ,x) \text {d}\tau \bigg | \bigg |_{X(T)}, \end{aligned}$$
(3.27)

where

$$\begin{aligned} I_p(\tau ,x)=g*(|u|^p-|{\overline{u}}|^p)(\tau ,x)=\int _{0}^{\tau } g(\tau -\eta )(|u(\eta ,x)|^p-|{\overline{u}}(\eta ,x)|^p) \text {d}\eta . \nonumber \\ \end{aligned}$$
(3.28)

Note

$$\begin{aligned} |~|u|^p-|{\overline{u}}|^p|\leqslant C|u-{\overline{u}}|\big (|u|^{p-1}+|{\overline{u}}|^{p-1}\big ), \end{aligned}$$

then

$$\begin{aligned} \begin{aligned}&\Vert ~|u|^p-|{\overline{u}}|^p\Vert _{L^2}\\&\quad \leqslant C\Vert u-{\overline{u}}\Vert _{L^{2p}}\cdot \Vert ~|u|^{p-1}+|{\overline{u}}|^{p-1}\Vert _{L^{2p'}}\\&\quad \leqslant C\Vert u-{\overline{u}}\Vert _{L^{2p}}\cdot \big (\Vert ~|u|^{p-1}\Vert _{L^{2p'}}+\Vert ~|{\overline{u}}|^{p-1}\Vert _{L^{2p'}}\big )\\&\quad \leqslant C M[u-{\overline{u}}](t)\big ((M[u])^{p-1}+(M[{\overline{u}}])^{p-1}\big ), \end{aligned} \end{aligned}$$
(3.29)

where \(p'\) is the conjugate index of p and we used the fact

$$\begin{aligned}{} & {} \Vert u-{\overline{u}}\Vert _{L^{2p}}\overset{G-N\ \text {Ineq}}{\leqslant } M[u-{\overline{u}}](\eta ), \\{} & {} \Vert ~|u|^{p-1}\Vert _{L^{2p'}}=\Vert u\Vert _{L^{2p}}^{p-1}\overset{G-N \ \text {Ineq}}{\leqslant }C\big (M[u](\eta )\big )^{p-1} \end{aligned}$$

and

$$\begin{aligned} \Vert ~|{\overline{u}}|^{p-1}\Vert _{L^{2p'}}=\Vert {\overline{u}}\Vert _{L^{2p}}^{p-1}\overset{G-N \ \text {Ineq}}{\leqslant }C\big (M[{\overline{u}}](\eta )\big )^{p-1}. \end{aligned}$$

These estimates hold for all \(\eta \in [0,T]\), with \(p>1\) if \(n=1,2\) and \(1<p\leqslant \frac{n}{n-2}\) if \(n\geqslant 3\). The term \(I_p\) defined in (3.28) can then be estimated as

$$\begin{aligned} \Vert I_p(\tau ,\cdot )\Vert _{L^2}\leqslant CM[u-{\overline{u}}]\big ((M[u])^{p-1}+(M[{\overline{u}}])^{p-1}\big ). \end{aligned}$$

Therefore, from (3.27) it follows that

$$\begin{aligned} \begin{aligned} \Vert Nu-N{\overline{u}}\Vert _{L^2}\leqslant&\int _0^t \Vert E_1(t-\tau ,0,x)*_{(x)} I_p(\tau ,x)\Vert _{L^2} \text {d}\tau \\ \leqslant&\int _0^t (1+t-\tau )^{-\frac{n+\mu _1}{2}+\frac{1+\sqrt{\delta }}{2}} \Vert I_p(\tau ,x)\Vert _{L^2} \text {d}\tau \\ \leqslant&C_{T}^{'}\Vert u-{\overline{u}}\Vert _{X(T)} \bigg (\Vert u\Vert _{X(T)}^{p-1}+\Vert {\overline{u}}\Vert _{X(T)}^{p-1}\bigg ), \end{aligned} \end{aligned}$$
(3.30)

where \(C_{T}^{'}\rightarrow 0\) as \(T\rightarrow 0.\) \(\Vert \nabla ^j\partial _t^m(Nu-N{\overline{u}})\Vert _{L^2}\) can be estimated in the same way. Thus, we complete the proof of (3.10).

4 Proof of Theorem 2.2

4.1 Preliminaries for Test Function

Before starting with the construction of the test function, we recall the modified Bessel function of the second kind of order \(\zeta \),

$$\begin{aligned} K_{\zeta }(t)=\int _0^{\infty }\exp (-t\cosh z)\cosh (\zeta z)\,\text {d}z,\quad \zeta \in {\mathbb {R}}, \end{aligned}$$

which is a solution of the equation

$$\begin{aligned} \left( t^2 \frac{\text {d}^2}{\text {d}t^2}+t\frac{\text {d}}{\text {d}t}-(t^2+\zeta ^2) \right) K_{\zeta }(t)=0,~t>0. \end{aligned}$$

The following are some useful properties of \(K_{\zeta }(t)\), where \(\zeta \) is a real parameter. More details can be found in [19].

  • The limiting behavior

    $$\begin{aligned} K_{\zeta }(t)=\sqrt{\frac{\pi }{2t}}e^{-t}\big [1+O(t^{-1})\big ]~~~\textrm{as}~ t\rightarrow \infty . \end{aligned}$$
    (4.1)
  • The derivative identity

    $$\begin{aligned} \frac{\text {d}}{\text {d}t}K_{\zeta }(t)=-K_{\zeta +1}(t)+\frac{\zeta }{t}K_{\zeta }(t). \end{aligned}$$
    (4.2)
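Both properties are easy to probe numerically. The sketch below is our own check, not part of the paper: it evaluates \(K_\zeta \) by applying Simpson's rule to the integral representation above, then tests the derivative identity (4.2) against a difference quotient and the asymptotics (4.1) at a large argument.

```python
# Numerical check of (4.1) and (4.2). K_zeta(t) is computed by composite
# Simpson quadrature of int_0^zmax exp(-t cosh z) cosh(zeta z) dz; the
# integrand is negligible beyond zmax = 10 for t >= 1.
import math

def bessel_k(zeta, t, zmax=10.0, m=2000):
    h = zmax / m
    f = lambda z: math.exp(-t * math.cosh(z)) * math.cosh(zeta * z)
    s = f(0.0) + f(zmax)
    for k in range(1, m):                 # m even: Simpson weights 4, 2, 4, ...
        s += (4.0 if k % 2 else 2.0) * f(k * h)
    return s * h / 3.0

zeta, t, h = 0.7, 2.0, 1e-5
# derivative identity (4.2): K'_zeta(t) = -K_{zeta+1}(t) + (zeta/t) K_zeta(t)
lhs = (bessel_k(zeta, t + h) - bessel_k(zeta, t - h)) / (2.0 * h)
rhs = -bessel_k(zeta + 1.0, t) + (zeta / t) * bessel_k(zeta, t)
# asymptotics (4.1): K_zeta(T) ~ sqrt(pi/(2T)) e^{-T} as T -> infinity
T = 50.0
ratio = bessel_k(zeta, T) / (math.sqrt(math.pi / (2.0 * T)) * math.exp(-T))
```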

Set

$$\begin{aligned} \lambda (t):=(1+t)^{\frac{\mu _1+1}{2}}K_{\frac{\sqrt{\delta }}{2}}(t+1),\quad t\geqslant 0, \end{aligned}$$

which satisfies

$$\begin{aligned} \left( \frac{\text {d}^2}{\text {d}t^2}-\frac{\mu _1}{1+t}\frac{\text {d}}{\text {d}t}+\frac{\mu _1+\mu _2^2}{(1+t)^2}-1 \right) \lambda (t)=0,~t>0. \end{aligned}$$
Following Yordanov and Zhang [24], we introduce the function

$$\begin{aligned} \varphi (x):= {\left\{ \begin{array}{ll} \int _{{\mathbb {S}}^{n-1}}e^{x\cdot \omega }\text {d} \omega ,&{}n\geqslant 2,\\ e^{x}+e^{-x},&{}n=1, \end{array}\right. } \end{aligned}$$
(4.3)

where \( {\mathbb {S}}^{n-1}\) is the \((n-1)\)-dimensional unit sphere. The function \(\varphi \) satisfies

$$\begin{aligned} \Delta \varphi (x)=\varphi (x),\quad x \in {\mathbb {R}}^n \end{aligned}$$

and

$$\begin{aligned} \varphi (x)\sim C_n|x|^{-\frac{n-1}{2}}e^{|x|}\quad \text {as}~|x|\rightarrow \infty . \end{aligned}$$
The function

$$\begin{aligned} \Phi (x,t):=\lambda (t)\varphi (x) \end{aligned}$$
(4.4)

constitutes a solution to

$$\begin{aligned} \Phi _{tt}-\Delta \Phi +\partial _t\bigg (\frac{\mu _1}{1+t}\Phi \bigg )+\frac{{\mu _2}^2}{(1+t)^{2}}\Phi =0, \quad t>0,~x \in {\mathbb {R}}^n. \end{aligned}$$
(4.5)

We choose \(\Phi (x,t)\) as the test function to prove Lemma 4.1. Indeed, since the nonlinear term in (1.1) is nonnegative, applying the same method as in the proof of ([19], Lemma 2.1), one has

Lemma 4.1

Assume that \(u_0,u_1 \) are nonnegative and compactly supported in the ball \(B_R(0)\) and satisfy

$$\begin{aligned} u_0(x)\geqslant 0\quad \text {and}\quad u_1(x)+\frac{\mu _1-1-\sqrt{\delta }}{2}u_0(x)\geqslant 0, \end{aligned}$$
(4.6)

then the local solution u to (1.1) fulfills

$$\begin{aligned} supp~u \subset \{(t,x)\in [0,T)\times {\mathbb {R}}^n ~|~|x|\leqslant t+R\} \end{aligned}$$

and there exists a large \(T_0\), independent of \(u_0,u_1\), such that for any \(t>T_0\) and \(p>1\) the following estimate holds:

$$\begin{aligned} \int _{{\mathbb {R}}^n}|u(t,x)|^p \text {d}x\geqslant C_1(1+t)^{n-1-\frac{n+\mu _1-1}{2}p}. \end{aligned}$$
(4.7)

4.2 Iteration Argument

In order to investigate blowup, we track the evolution of the time-dependent functional

$$\begin{aligned} U(t)\triangleq \int _{{\mathbb {R}}^n}u(t,x)\text {d}x \end{aligned}$$
(4.8)

associated with the solution. We will show that this functional becomes infinite as the variable t approaches a finite time; therefore, the solution u must blow up in finite time. We divide the remainder of the proof into four steps.

Step 1. Deriving the iteration frame. Choosing a test function \(\phi \) which satisfies

$$\begin{aligned} \phi \equiv 1 ~\text {on}~\big \{~(s,x)\in [0,t]\times {\mathbb {R}}^n:|x|\leqslant s+R\big \} \end{aligned}$$

in (2.3), we get

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^n}u_t(t,x)\text {d}x-\int _{{\mathbb {R}}^n}u_t(0,x)\text {d}x\\&\qquad +\int _0^t\int _{{\mathbb {R}}^n}\bigg (\frac{\mu _1}{1+s}u_s(s,x)+\frac{{\mu _2}^2}{(1+s)^{2}}u(s,x)\bigg )\text {d}x \text {d}s\\&\quad =\int _0^t\int _{{\mathbb {R}}^n}\int _0^s g(s-\tau )|u(\tau ,x)|^p\text {d}\tau \text {d}x\text {d}s, \end{aligned} \end{aligned}$$
(4.9)

i.e.,

$$\begin{aligned} \begin{aligned}&U^{'}(t)- U^{'}(0)+\int _0^t \frac{\mu _1}{1+s}U^{'}(s)\text {d}s+\int _0^{t}\frac{{\mu _2}^2}{(1+s)^{2}}U(s)\text {d}s\\&\quad =\int _0^t\int _{{\mathbb {R}}^n}\int _0^s g(s-\tau )|u(\tau ,x)|^p\text {d}\tau \text {d}x\text {d}s. \end{aligned} \end{aligned}$$
(4.10)

Differentiating with respect to t gives

$$\begin{aligned} \begin{aligned} U^{''}(t)+\frac{\mu _1}{1+t}U^{'}(t)+\frac{{\mu _2}^2}{(1+t)^{2}}U(t)=\int _{{\mathbb {R}}^n}\int _0^t g(t-\tau )|u(\tau ,x)|^p\text {d}\tau \text {d}x. \end{aligned} \end{aligned}$$
(4.11)

The quadratic equation

$$\begin{aligned} r^2-(\mu _1-1)r+\mu _2^2=0 \end{aligned}$$

has two distinct real roots

$$\begin{aligned} r_{1,2}=\frac{\mu _{1}-1\mp \sqrt{\delta }}{2}, \end{aligned}$$
(4.12)

since \(\delta =(\mu _1-1)^2-4\mu _2^2> 0\). Clearly,

$$\begin{aligned} \left. \begin{aligned}&\quad \mu _{1}>1~\Rightarrow & {} \quad r_{1,2}>0\\&0\leqslant \mu _{1}<1\Rightarrow & {} -1<r_{1,2}<0\\&\quad \mu _{1}=1\Rightarrow & {} \quad r_{1,2}=0 \\ \end{aligned} \right\} \Rightarrow r_{1,2}+1>0. \end{aligned}$$
(4.13)
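As a quick numerical sanity check (sample parameter values assumed, not taken from the paper), the roots (4.12) indeed solve the quadratic and satisfy the sign condition \(r_{1,2}+1>0\) whenever \(\delta >0\):

```python
import math

# Illustrative check (sample parameters assumed): the roots
# r_{1,2} = ((mu1 - 1) -/+ sqrt(delta)) / 2 of r**2 - (mu1 - 1)*r + mu2**2 = 0
# satisfy r_{1,2} + 1 > 0 whenever delta = (mu1 - 1)**2 - 4*mu2**2 > 0.
for mu1, mu2 in [(3.0, 0.5), (0.5, 0.1), (2.0, 0.3)]:
    delta = (mu1 - 1.0) ** 2 - 4.0 * mu2 ** 2
    assert delta > 0.0
    r1 = (mu1 - 1.0 - math.sqrt(delta)) / 2.0
    r2 = (mu1 - 1.0 + math.sqrt(delta)) / 2.0
    for r in (r1, r2):
        assert abs(r * r - (mu1 - 1.0) * r + mu2 ** 2) < 1e-12  # solves quadratic
        assert r + 1.0 > 0.0                                    # sign condition (4.13)
print("roots and the sign condition r_{1,2} + 1 > 0 verified")
```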

Rewrite (4.11) as

$$\begin{aligned}{} & {} \bigg (U^{'}(t)+\frac{r_1}{1+t}U(t)\bigg )^{'}+\frac{{r_2}+1}{(1+t)}\bigg (U^{'}(t)+\frac{r_1}{1+t}U(t)\bigg ) \nonumber \\{} & {} \quad =\int _{{\mathbb {R}}^n}\int _0^t g(t-\tau )|u(\tau ,x)|^p\text {d}\tau \text {d}x. \end{aligned}$$
(4.14)

Multiplying by \((1+t)^{r_2+1}\), integrating over [0, t] and using (2.5) yield

$$\begin{aligned}{} & {} U^{'}(t)+\frac{r_1}{1+t}U(t)\nonumber \\{} & {} \quad \geqslant (1+t)^{-r_2-1}\int _0^t(1+s)^{r_2+1}\int _{{\mathbb {R}}^n}\int _0^s g(s-\tau )|u(\tau ,x)|^p\text {d}\tau \text {d}x\text {d}s. \end{aligned}$$
(4.15)

Multiplying again by \((1+t)^{r_1}\) and integrating over \([0, t]\) give

$$\begin{aligned}{} & {} (1+t)^{r_1}U(t)-U(0)\nonumber \\{} & {} \quad \geqslant \int _0^t(1+\tau )^{r_1-r_2-1}\int _0^{\tau }(1+s)^{r_2+1}\int _{{\mathbb {R}}^n}\int _0^s g(s-\eta )|u(\eta ,x)|^p \text {d}\eta \text {d}x \text {d}s \text {d}\tau , \nonumber \\ \end{aligned}$$
(4.16)

where \(U(0)=\int _{{\mathbb {R}}^n}u(0,x)\text {d}x=\int _{{\mathbb {R}}^n}u_0(x)\text {d}x\) is nonnegative due to \(u_0(x)\geqslant 0\); thus

$$\begin{aligned} U(t)\geqslant & {} (1+t)^{-r_1} \nonumber \\{} & {} \cdot \int _0^t(1+\tau )^{r_1-r_2-1}\int _0^{\tau }(1+s)^{r_2+1}\int _0^s g(s-\eta )\int _{{\mathbb {R}}^n} |u(\eta ,x)|^p \text {d}x \text {d}\eta \text {d}s \text {d}\tau .\nonumber \\ \end{aligned}$$
(4.17)

Noting that \(r_1-r_2-1<0\) and substituting the Hölder-type estimate

$$\begin{aligned} \int _{{\mathbb {R}}^n}|u(\eta ,x)|^p \text {d}x\geqslant |U(\eta )|^p \left( \int _{B_{R+\eta }}\text {d}x ~\right) ^{-(p-1)}=C_0(\eta +R)^{-n(p-1)}|U(\eta )|^p\nonumber \\ \end{aligned}$$
(4.18)

with \(C_0=|B_1|^{-(p-1)}\) into (4.17) gives

$$\begin{aligned} U(t)\geqslant C_0(1+t)^{-r_2-1}\int _0^t\int _0^{\tau }(1+s)^{r_2+1-n(p-1)}\int _0^s g(s-\eta )|U(\eta )|^p \text {d}\eta \text {d}s \text {d}\tau . \qquad \end{aligned}$$
(4.19)

This is the iteration frame to be used in the sequel.

Step 2. Deducing the first lower bound estimate for U(t). For \(g\in C^1([0,\infty ))\), we introduce

$$\begin{aligned} G(t):=\int _0^t g(\eta )\text {d}\eta , \end{aligned}$$
(4.20)

which has the following properties

  • \(G^{'}(t)=g(t)>0,G(0)=0.\)

  • G(t) is strictly increasing and \(G(t)\geqslant 0\).

  • \(\int _0^tg(t-\eta )\eta ^{\alpha }\text {d}\eta = \int _0^tg(\eta )(t-\eta )^{\alpha }\text {d}\eta \)

    $$\begin{aligned} \begin{aligned} =&\big [G (\eta )(t-\eta )^{\alpha }\big ]\big |_{\eta =0}^{\eta =t}+\alpha \int _0^t G (\eta )(t-\eta )^{\alpha -1}\text {d} \eta \\ \geqslant&{\left\{ \begin{array}{ll} G (t_0),&{} \alpha =0,\\ \alpha \int _{t_0}^{t} G (\eta )(t-\eta )^{\alpha -1}\text {d} \eta , &{}\alpha >0,\\ \end{array}\right. } \geqslant G (t_0)(t-t_0)^{\alpha } \end{aligned} \end{aligned}$$
    (4.21)

    for all \(t\geqslant t_0>0\), \(\alpha \geqslant 0\).
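The key lower bound (4.21) can be sanity-checked numerically for a concrete kernel; the choice \(g(t)=e^{-t}\) below is an assumption for illustration only, not taken from the paper.

```python
import math

# Numerical check of (4.21) for the sample kernel g(t) = exp(-t) (assumed):
#   int_0^t g(t - eta) * eta**alpha d(eta) >= G(t0) * (t - t0)**alpha
# for t >= t0 > 0 and alpha >= 0, where G(t) = int_0^t g = 1 - exp(-t).
def g(t):
    return math.exp(-t)

def G(t):
    return 1.0 - math.exp(-t)

def lhs(t, alpha, n=20000):
    # Midpoint-rule approximation of the convolution integral.
    h = t / n
    return sum(g(t - (k + 0.5) * h) * ((k + 0.5) * h) ** alpha
               for k in range(n)) * h

t0 = 1.0
for t in [1.5, 3.0, 6.0]:
    for alpha in [0.0, 1.0, 2.5]:
        assert lhs(t, alpha) >= G(t0) * (t - t0) ** alpha
print("inequality (4.21) holds for the sample kernel")
```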

From Lemma 4.1, (4.17) and (4.21), for \(t\geqslant t_0\), it follows that

$$\begin{aligned}&U(t)\geqslant C_1(1+t)^{-r_1}\nonumber \\&\qquad \times \int _0^t(1+\tau )^{r_1-r_2-1}\int _0^{\tau }(1+s)^{r_2+1}\int _0^s g(s-\eta )(1+\eta )^{n-1-\frac{n+\mu _1-1}{2}p} \text {d}\eta \text {d}s \text {d}\tau \nonumber \\&\quad \geqslant C_1 (1+t)^{-r_2-1-\frac{n+\mu _1-1}{2}p} \int _0^t \int _0^{\tau }(1+s)^{r_2+1} \int _0^s g(s-\eta )\eta ^{n-1} \text {d}\eta \text {d}s \text {d}\tau \nonumber \\&\quad \geqslant C_1 (1+t)^{-r_2-1-\frac{n+\mu _1-1}{2}p} G(t_0)\int _{t_0}^t \int _{t_0}^{\tau }(s-t_0)^{r_2+1}(s-t_0)^{n-1}\text {d}s \text {d}\tau \nonumber \\&\quad \geqslant \frac{C_1G(t_0)}{(n+r_2+2)(n+r_2+1)} (1+t)^{-r_2-1-\frac{n+\mu _1-1}{2}p} (t-t_0)^{r_2+n+2}. \end{aligned}$$
(4.22)
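The last step of (4.22) uses the elementary double-integral identity \(\int _{t_0}^t\int _{t_0}^{\tau }(s-t_0)^{\beta }\,\text {d}s\,\text {d}\tau =(t-t_0)^{\beta +2}/((\beta +1)(\beta +2))\) with \(\beta =r_2+n\); the snippet below verifies it for sample (assumed) values of \(\beta ,t_0,t\).

```python
# Numerical check (sample exponent assumed) of the double-integral identity
# behind the last step of (4.22):
#   int_{t0}^{t} int_{t0}^{tau} (s - t0)**beta ds dtau
#       = (t - t0)**(beta + 2) / ((beta + 1) * (beta + 2)).
def double_integral(beta, t0, t, n=2000):
    h = (t - t0) / n
    total = 0.0
    for i in range(n):
        tau = t0 + (i + 0.5) * h                        # midpoint in tau
        inner = (tau - t0) ** (beta + 1) / (beta + 1)   # exact inner integral
        total += inner * h
    return total

beta, t0, t = 2.3, 1.0, 4.0
exact = (t - t0) ** (beta + 2) / ((beta + 1) * (beta + 2))
assert abs(double_integral(beta, t0, t) - exact) < 1e-3 * exact
print("double-integral identity verified")
```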

Step 3. Formulating the iteration procedure. Assume inductively that

$$\begin{aligned} U(t)\geqslant Q_j (1+t)^{-\theta _{j}} (t-L_{j}t_0)^{\sigma _j},~\text {for}~t\geqslant L_{j}t_0, \end{aligned}$$
(4.23)

where \(\{Q_j\}_{j\geqslant 1}, \{\theta _{j}\}_{j\geqslant 1}, \{\sigma _j\}_{j\geqslant 1}\) are sequences of nonnegative real numbers to be determined inductively. When \(j=1\),

$$\begin{aligned} \begin{aligned}&Q_1=\frac{C_1G(t_0)}{(n+r_2+2)(n+r_2+1)},\\&\theta _{1}=r_2+1+\frac{n+\mu _1-1}{2}p,\\&\sigma _1=r_2+n+2. \end{aligned} \end{aligned}$$
(4.24)

Inspired by Chen [1], we construct \(\{L_j\}_{j\geqslant 1}\) as follows

$$\begin{aligned} \begin{aligned}&L_j:=\prod _{k=1}^{j}l_k~\text {for}~\text {any}~j\geqslant 1,\\&l_k:=1+p^{-(k-1)}>1~\text {for}~\text {any}~k\geqslant 1, \end{aligned} \end{aligned}$$
(4.25)

where \(L_j\) is monotonically increasing in j and \(\prod _{k=1}^{\infty }l_k\) is convergent by the ratio test together with the fact that \(\lim \limits _{k\rightarrow \infty }(\ln l_{k+1})/(\ln l_k)=p^{-1} < 1\); hence \(L\triangleq \prod _{k=1}^{\infty }l_k>1\) and \(L_j \in [1,L]\) for all \(j\geqslant 1\).
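The convergence of the infinite product can be illustrated numerically; the value \(p=2\) below is a sample assumption, not a parameter from the paper.

```python
# Numerical check (sample p assumed; not part of the proof): the partial
# products L_j = prod_{k=1}^{j} l_k with l_k = 1 + p**(-(k-1)) increase
# and converge to a finite limit L > 1.
p = 2.0
partial = 1.0
values = []
for k in range(1, 40):
    partial *= 1.0 + p ** (-(k - 1))
    values.append(partial)

assert all(a < b for a, b in zip(values, values[1:]))   # monotone increasing
assert abs(values[-1] - values[-2]) < 1e-10             # numerically Cauchy
assert values[-1] > 1.0
print("L is approximately", values[-1])
```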

Substituting (4.23) into (4.19), one has

$$\begin{aligned} \begin{aligned} U(t)&\geqslant C_0(1+t)^{-r_2-1}\int _0^t \int _0^{\tau }(1+s)^{r_2+1-n(p-1)}\\&\quad \int _0^s g(s-\eta )\big |Q_{j} (1+\eta )^{-\theta _{j}} (\eta -L_{j}t_0)^{\sigma _{j}}\big |^p \text {d}\eta \text {d}s \text {d}\tau \\&\geqslant C_0Q_{j}^p(1+t)^{-r_2-1-n(p-1)-\theta _{j}p}\\&\quad \times \int _{L_{j+1} t_0}^t \int _{L_{j+1} t_0}^{\tau } (s-L_{j+1}t_0)^{r_2+1} \int _{L_j t_0}^s g(s-\eta )(\eta -L_{j}t_0)^{\sigma _{j}p}\text {d}\eta \text {d}s \text {d}\tau . \end{aligned} \end{aligned}$$
(4.26)

Making a change of variable and integrating by parts yield

$$\begin{aligned} \begin{aligned} \int _{L_j t_0}^s g(s-\eta )&(\eta -L_{j}t_0)^{\sigma _{j}p}\text {d}\eta =\int _{0}^{s-L_jt_0}g(\eta )(s-L_jt_0-\eta )^{\sigma _{j}p}\text {d}\eta \\ =&\sigma _{j}p \int _{0}^{s-L_jt_0}G(\eta )(s-L_jt_0-\eta )^{\sigma _{j}p-1}\text {d}\eta \\ \geqslant&\sigma _{j}p\int _{L_jt_0(l_{j+1}-1)}^{s-L_jt_0}G(\eta )(s-L_jt_0-\eta )^{\sigma _{j}p-1}\text {d}\eta \\ \geqslant&G\big (L_jt_0(l_{j+1}-1)\big )\cdot (s-L_jt_0 l_{j+1})^{\sigma _{j}p} \end{aligned} \end{aligned}$$
(4.27)

for any \(s\geqslant L_{j+1}t_0\), where we used the fact

$$\begin{aligned} s\geqslant L_{j+1}t_0\Rightarrow s-L_{j}t_0\geqslant L_jt_0(l_{j+1}-1)=L_jt_0p^{-j}>0. \end{aligned}$$

In view of \(g \in C^1([0, \infty ))\) and \(L_j \in [1,L]\) for all \(j\geqslant 1\), there exists a positive integer \(j_0\) such that if \(j\geqslant j_0\), it holds that

$$\begin{aligned} 0<L_jt_0(l_{j+1}-1)=L_jt_0p^{-j}\ll 1. \end{aligned}$$

Thus,

$$\begin{aligned} \begin{aligned}&G(L_jt_0(l_{j+1}-1))=G(0)+g(\xi )L_jt_0(l_{j+1}-1)\\&\quad = G(0)+g(0)L_jt_0p^{-j}+g^{'}(\zeta )O\bigg (~\big (~p^{-j}L_jt_0~\big )^2\bigg )\\&\quad = p^{-2j}L_jt_0\bigg (p^{j} g(0)+g^{'}(\zeta )O(1) \bigg )\geqslant C_3p^{-2j},~~~ \end{aligned} \end{aligned}$$
(4.28)

where \(\xi \in (0,L_jt_0p^{-j})\) and \(\zeta \in (0,~\xi ) \), for any \(j\geqslant \max \{j_0,j_1\}\); here \(j_1\) is chosen to guarantee that \(p^{j} g(0)+g^{'}(\zeta )O(1)\geqslant C>0\), which is possible since \(g(0)>0\). Summing up (4.27) and (4.28), we conclude that

$$\begin{aligned} \begin{aligned}&U(t)\geqslant C_0C_3 Q_{j}^p(1+t)^{-r_2-1-n(p-1)-\theta _{j}p}\\&\quad \quad \quad \int _{L_{j+1} t_0}^t \int _{L_{j+1} t_0}^{\tau } (s-L_{j+1}t_0)^{r_2+1+\sigma _{j}p} p^{-2j}\text {d}s\text {d}\tau \\&\quad \geqslant \frac{C_0C_3p^{-2j}Q_{j}^p}{(r_2+3+\sigma _{j}p)(r_2+2+\sigma _{j}p)} (1+t)^{-r_2-1-n(p-1)-\theta _{j}p}(t-L_{j+1}t_0)^{r_2+3+\sigma _{j}p} \end{aligned} \end{aligned}$$
(4.29)

for any \(t\geqslant L_{j+1}t_0\) and \(j\geqslant \max \{j_0,j_1\}\). Therefore, the desired iteration estimate (4.23) holds at step \(j+1\) with

$$\begin{aligned} \begin{aligned}&Q_{j+1}:=\frac{C_0C_3p^{-2j}Q_j^p}{(r_2+3+\sigma _{j}p)(r_2+2+\sigma _{j}p)}~,\\&\theta _{j+1}:=r_2+1+n(p-1)+\theta _jp~,\\&\sigma _{j+1}:=r_2+3 + p\sigma _j. \end{aligned} \end{aligned}$$
(4.30)

Step 4. Achieving blowup. For any \(j\geqslant \max \{j_0,j_1\}\), from (4.30) one has

$$\begin{aligned}{} & {} \begin{aligned} \theta _{j}=\left( \frac{A_1}{p-1}+\theta _1\right) p^{j-1}-\frac{A_1}{p-1},\quad A_1=r_2+1+n(p-1), \end{aligned} \end{aligned}$$
(4.31)
$$\begin{aligned}{} & {} \begin{aligned} \sigma _{j}=&\left( \frac{B_1}{p-1}+\sigma _1\right) p^{j-1}-\frac{B_1}{p-1} <\left( \frac{B_1}{p-1}+\sigma _1\right) p^{j-1}\triangleq B_{1}^{'}p^{j-1},\\&\quad B_1=r_2+3 \end{aligned} \end{aligned}$$
(4.32)

and

$$\begin{aligned} \begin{aligned} Q_{j} \geqslant \frac{C_0C_3p^{-2(j-1)}}{B_{1}^{'2}p^{2(j-1)}}Q_{j-1}^p \triangleq E_2p^{-4(j-1)}Q_{j-1}^{p}. \end{aligned} \end{aligned}$$
(4.33)
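The closed forms (4.31)-(4.32) follow by solving the linear recursions in (4.30); the snippet below checks this numerically with sample (assumed) values of \(p, A_1, B_1, \theta _1, \sigma _1\).

```python
# Numerical check (sample values assumed): the closed forms (4.31)-(4.32)
# solve the linear recursions from (4.30),
#     theta_{j+1} = A1 + p * theta_j,   sigma_{j+1} = B1 + p * sigma_j.
p, A1, B1 = 1.5, 2.0, 3.5
theta1, sigma1 = 1.2, 0.7

def theta_closed(j):
    return (A1 / (p - 1.0) + theta1) * p ** (j - 1) - A1 / (p - 1.0)

def sigma_closed(j):
    return (B1 / (p - 1.0) + sigma1) * p ** (j - 1) - B1 / (p - 1.0)

theta, sigma = theta1, sigma1
for j in range(1, 20):
    assert abs(theta - theta_closed(j)) < 1e-8
    assert abs(sigma - sigma_closed(j)) < 1e-8
    theta = A1 + p * theta   # recursion step from (4.30)
    sigma = B1 + p * sigma
print("closed forms (4.31)-(4.32) verified against the recursion (4.30)")
```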

Then for any number \(j\geqslant \max \{j_0,j_1\}\), it holds that

$$\begin{aligned} \begin{aligned} \ln Q_j&\geqslant p^{j-1}\ln Q_1-4\ln p\bigg (\sum _{k=0}^{j-2}(j-k-1)p^k\bigg )+\ln E_2 \bigg (\sum _{k=0}^{j-2}p^k\bigg )\\&=p^{j-1}\ln Q_1-\frac{4}{p-1} \bigg (\frac{p^j-1}{p-1}-j\bigg )\ln p +\bigg (\frac{p^{j-1}-1}{p-1}\bigg )\ln E_2\\&=p^{j-1}\bigg (\ln Q_1-\frac{4p}{(p-1)^2}\ln p+\frac{1}{p-1}\ln E_2 \bigg )\\&\qquad +\bigg (\frac{4j\ln p}{p-1}+\frac{4\ln p}{(p-1)^2}-\frac{\ln E_2}{p-1}\bigg ). \end{aligned} \end{aligned}$$
(4.34)
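The second line of (4.34) rests on the summation identity \(\sum _{k=0}^{j-2}(j-k-1)p^k=\frac{1}{p-1}\big (\frac{p^j-1}{p-1}-j\big )\); a quick numerical confirmation (with a sample, assumed value of p) is:

```python
# Numerical check (sample p assumed) of the summation identity used in (4.34):
#     sum_{k=0}^{j-2} (j - k - 1) * p**k
#         = ((p**j - 1) / (p - 1) - j) / (p - 1).
p = 1.7
for j in range(2, 15):
    lhs_sum = sum((j - k - 1) * p ** k for k in range(j - 1))
    rhs = ((p ** j - 1.0) / (p - 1.0) - j) / (p - 1.0)
    assert abs(lhs_sum - rhs) < 1e-9 * max(1.0, lhs_sum)
print("summation identity verified")
```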

If we take \(j_2\) to be the minimal integer such that

$$\begin{aligned} \frac{4j\ln p}{p-1}+\frac{4\ln p}{(p-1)^2}-\frac{\ln E_2}{p-1}>0, \end{aligned}$$

then for any \(j\geqslant \max \{j_0,j_1,j_2\}\), one has

$$\begin{aligned} \begin{aligned} \ln Q_j\geqslant p^{j-1}\bigg (\ln Q_1-\frac{4p\ln p}{(p-1)^2}+\frac{1}{p-1}\ln E_2 \bigg )\triangleq p^{j-1}\ln E_3, \end{aligned} \end{aligned}$$
(4.35)

where \(E_3= Q_1 p^{-\frac{4p}{(p-1)^2}} E_2^{\frac{1}{p-1}}\) is a suitable positive constant independent of j.

It is easy to obtain that, for \(t\geqslant \max \{1,2Lt_0\}\),

$$\begin{aligned} \ln (1+t)\leqslant \ln (2t)~,~\ln (t-Lt_0)\geqslant \ln (\frac{t}{2}). \end{aligned}$$
(4.36)

Then, for any number \(j\geqslant \max \{j_0,j_1,j_2\}\), noting (4.31), (4.32), (4.35) and \(L_j \in [1,L]\), it follows from (4.23) that

$$\begin{aligned} \begin{aligned} U(t)&\geqslant \exp \bigg (p^{j-1}\ln E_3\bigg )\\&\quad \times (1+t)^{-\big (\frac{A_1}{p-1}+\theta _1\big )p^{j-1}+\frac{A_1}{p-1}}(t-L_jt_0)^{\big (\frac{B_1}{p-1}+\sigma _1\big )p^{j-1}-\frac{B_1}{p-1}}\\&\geqslant \exp \bigg (p^{j-1}\ln \left( E_3(1+t)^{-\big (\frac{A_1}{p-1}+\theta _1\big )}(t-Lt_0)^{\big (\frac{B_1}{p-1}+\sigma _1\big )}\right) \bigg )\\&\quad \quad \times (1+t)^{\frac{A_1}{p-1}} (t-t_0)^{-\frac{B_1}{p-1}}\\&\triangleq \exp \bigg (p^{j-1}J(t)\bigg )(1+t)^{\frac{A_1}{p-1}} (t-t_0)^{-\frac{B_1}{p-1}}, \end{aligned} \end{aligned}$$
(4.37)

where

$$\begin{aligned} \begin{aligned} J(t)&= \ln \left( E_3(1+t)^{-\big (\frac{A_1}{p-1}+\theta _1\big )}(t-Lt_0)^{\big (\frac{B_1}{p-1}+\sigma _1\big )}\right) \\&\geqslant \ln \bigg (E_3(2t)^{-\big (\frac{A_1}{p-1}+\theta _1\big )}\left( \frac{t}{2}\right) ^{\big (\frac{B_1}{p-1}+\sigma _1\big )}\bigg )\\&=\ln \left( E_32^{-\big (\frac{A_1}{p-1}+\theta _1\big )-\big (\frac{B_1}{p-1}+\sigma _1\big )} t^{-\big (\frac{A_1}{p-1}+\theta _1\big )+\big (\frac{B_1}{p-1}+\sigma _1\big )}\right) \\&=\ln \left( E_32^{-\frac{2r_2+4}{p-1}-\big (\frac{n+\mu _1-1}{2}p+2n+2r_2+3\big )} t^{-\frac{1}{p-1}\big (\frac{n+\mu _1-1}{2}p^2-\frac{n+\mu _1+1}{2}p-1\big )}\right) . \end{aligned} \end{aligned}$$
(4.38)

The condition (2.4) guarantees that the power of t in J(t) is positive. So, there exists a \(t_1\) such that

$$\begin{aligned} t_1^{-\frac{1}{p-1}\big (\frac{n+\mu _1-1}{2}p^2-\frac{n+\mu _1+1}{2}p-1\big )}>\bigg (E_3 2^{-\frac{2r_2+4}{p-1}-(\frac{n+\mu _1-1}{2}p+2n+2r_2+3)}\bigg )^{-1}. \end{aligned}$$
(4.39)

If we take \(t\geqslant \max ~\{1,2Lt_0,t_1\}\), then \(J(t)>0\) and the lower bound of U(t) in (4.37) tends to infinity as \(j\rightarrow \infty \), which completes the proof of Theorem 2.2.