1 Introduction

It is known that stochastic partial differential equations (SPDEs) are appropriate mathematical models for many multiscale systems with uncertain and fluctuating influences. In the past few decades, the theoretical study of SPDEs has attracted many researchers and has produced fruitful results. Many interesting problems, such as well-posedness, blow-up, stability, random attractors and ergodicity, have been investigated for various kinds of SPDEs. We refer the readers to [2, 4, 15, 20,21,22, 24, 25] for details.

In [15], Shiga investigated the following SPDE

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \frac{\partial u(t, x)}{\partial t}= \Delta u(t, x)+b\left( u(t, x)\right) +a\left( u(t, x)\right) {\dot{W}}(t,x), \quad t\ge 0, x\in {{\mathbb {R}}},\\ u(0, x)= f(x), \end{array} \right. \end{aligned} \end{aligned}$$
(1.1)

where \(\Delta = \frac{\partial ^{2}}{\partial x^{2}}\), \({\dot{W}}(t,x)\) is a space–time white noise, and \(a(u), b(u): {\mathbb {R}}\rightarrow {\mathbb {R}}\) are continuous functions. Such equations arise in many fields, such as population biology, quantum field theory, statistical physics and neurophysiology; see [9, 18]. Writing SCP for the property that compactness of the support of a solution propagates with the passage of time, Shiga showed that the SCP property and strong positivity are two contrasting properties of solutions to (1.1). Moreover, Shiga showed that if a(u) and b(u) satisfy the global Lipschitz condition, or if they are merely continuous and satisfy the linear growth condition, then (1.1) has a continuous solution and, respectively, pathwise uniqueness or uniqueness in law holds in an appropriate space.

Xie [25] generalized the results of Shiga [15] and considered the Cauchy problem of the following stochastic heat equation

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \frac{\partial u(t, x)}{\partial t}= \frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+b\left( t,x,u(t, x)\right) +\sigma \left( t, x, u(t, x)\right) \frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \quad \ t\ge 0, x\in {\mathbb {R}},\\ u(0, x)= u_{0}(x), \end{array} \right. \end{aligned}\nonumber \\ \end{aligned}$$
(1.2)

where \(\{W(t,x), t\ge 0, x\in {\mathbb {R}}\}\) is a two-sided Brownian sheet and the coefficients \(b, \sigma : [0, \infty )\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) are continuous, nonlinear functions. By means of successive approximations, Xie [25] established the existence of mild solutions to (1.2) under conditions weaker than the Lipschitz condition.

The above results rest on the assumption that the future state of the system is independent of the past states and is determined solely by the present. In a more realistic model, however, the rate of change of the state variable depends not only on the current state but also on the historical, and even future, states of the system. Stochastic functional differential equations (SFDEs) provide a mathematical formulation for such models and have been extensively studied in the literature [1, 5, 7, 8, 10, 11, 14, 16, 17, 26,27,28]. In particular, the existence and uniqueness of mild solutions to stochastic partial functional differential equations (SPFDEs) have received a great deal of attention. Under the global Lipschitz condition and the linear growth condition, Taniguchi [17] and Luo [12] studied the existence and uniqueness of mild solutions to SPFDEs by the well-known Banach fixed-point theorem and by a strong approximating system, respectively. Govindan [6] investigated the existence, uniqueness and almost sure exponential stability of solutions to stochastic neutral partial functional differential equations under the global Lipschitz condition and the linear growth condition via the stochastic convolution.

However, the nonlinear terms of some systems may fail to obey the global Lipschitz condition or the linear growth condition, or even the local Lipschitz condition. Therefore, in this paper we extend the works of [15, 25] and investigate the following SPFDE

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \frac{\partial u(t, x)}{\partial t}= \frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+b\left( t, x, u_{t}(x)\right) +\sigma \left( t, x, u_{t}(x)\right) \frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \quad t_{0}\le t\le T, x\in {\mathbb {R}},\\ u_{t_{0}}=\xi =\{\xi (\theta , x):-\tau \le \theta \le 0, x\in {\mathbb {R}}\}, \end{array} \right. \end{aligned}\nonumber \\ \end{aligned}$$
(1.3)

where \(u_{t}(x)=:\{u(t+\theta , x), -\tau \le \theta \le 0, x\in {\mathbb {R}}\}\) is regarded as an \({\mathcal {F}}_{t}\)-measurable \({\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\)-valued stochastic process, \(u_{t_{0}}\in {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\) is an \({\mathcal {F}}_{t_{0}}\)-measurable random variable, and \(\tau \) is a fixed positive constant. The coefficients \(b, \sigma : [t_{0}, T]\times {\mathbb {R}}\times {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\rightarrow {\mathbb {R}}\) are continuous Borel measurable functions, and \(\{W(t,x), t\ge t_{0}, x\in {\mathbb {R}}\}\) is a two-sided Brownian sheet.

The main aim of this paper is to study the well-posedness and further properties of mild solutions to (1.3) under appropriate conditions. The paper is organized as follows. Firstly, we establish an existence–uniqueness theorem under the global Lipschitz condition and the linear growth condition. Secondly, we show the existence–uniqueness property under the global Lipschitz condition without the linear growth condition. Then, some sufficient conditions ensuring existence–uniqueness are given under the local Lipschitz condition, again without the linear growth condition. Furthermore, we consider existence and uniqueness under a condition weaker than the Lipschitz condition. Finally, we obtain nonnegativity and comparison theorems and utilize them to investigate the existence of mild solutions without assuming the Lipschitz condition.

2 Preliminaries

In this section, we introduce some notations and recall some basic definitions.

Let \((\Omega , {\mathcal {F}}, \{{\mathcal {F}}_{t}\}_{t\ge t_{0}}, {\mathbb {P}})\) be a complete probability space with a filtration \(\{{\mathcal {F}}_{t}\}_{t\ge t_{0}}\) satisfying the usual conditions. The random field \(\{W(t,x), t\ge t_{0}, x\in {\mathbb {R}}\}\) is called a two-sided Brownian sheet if it is an \({\mathcal {F}}_{t}\)-adapted centered Gaussian random field with the covariance

$$\begin{aligned} {\mathbb {E}}\left[ W(s,x)W(t,y)\right] =\frac{1}{2}\min \{s,t\}(|x|+|y|-|x-y|), \ \mathrm {for} \ s,t\ge t_{0}, \ x,y\in {\mathbb {R}}. \end{aligned}$$

For the two-sided Brownian sheet, we refer the readers to the monograph [19]. In the sequel, we shall suppose that W generates an \(\{{\mathcal {F}}_{t}\}_{t\ge t_{0}}\)-martingale measure and investigate the stochastic integral with respect to W introduced by Evans [3] and Walsh [23].
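For later use, we also record two classical facts from the Walsh theory [23], stated here only for the reader’s convenience: for a predictable random field f with \(\int _{t_{0}}^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[f^{2}(s, y)]{\text {d}}y{\text {d}}s<\infty \), one has the isometry

$$\begin{aligned} {\mathbb {E}}\left[ \Big (\int _{t_{0}}^{t}\int _{{\mathbb {R}}}f(s, y)W({\text {d}}y,{\text {d}}s)\Big )^{2}\right] =\int _{t_{0}}^{t}\int _{{\mathbb {R}}}{\mathbb {E}}\left[ f^{2}(s, y)\right] {\text {d}}y{\text {d}}s, \end{aligned}$$

and, by the Burkholder–Davis–Gundy inequality, for every \(p\ge 2\) there is a constant \(c_{p}\) such that

$$\begin{aligned} {\mathbb {E}}\left[ \Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}f(s, y)W({\text {d}}y,{\text {d}}s)\Big |^{p}\right] \le c_{p}\,{\mathbb {E}}\left[ \Big (\int _{t_{0}}^{t}\int _{{\mathbb {R}}}f^{2}(s, y){\text {d}}y{\text {d}}s\Big )^{\frac{p}{2}}\right] . \end{aligned}$$

These bounds are used repeatedly (and tacitly) in Section 3, for instance in (3.12), (3.13) and (3.17).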

Let \({\mathbb {C}}_{r}, r\in {\mathbb {R}}\), denote the family of all continuous Borel measurable functions \(\psi (x)\) defined on \({\mathbb {R}}\) with the norm \(\Vert \psi \Vert _{r}=: \mathrm {ess}\sup _{x\in {\mathbb {R}}}|\psi (x)|e^{-r|x|}<\infty \). Let \({\mathbb {C}}([-\,\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\) be the space of all continuous \({\mathbb {R}}\)-valued functions \(\varphi (\theta , x)\) defined on \([-\,\tau , 0]\times {\mathbb {R}}\) with the norm defined by \(\Vert \varphi \Vert =:\sup _{-\tau \le \theta \le 0, x\in {\mathbb {R}}}e^{-\frac{1}{2}r|x|}\mid \varphi (\theta , x)\mid \), where \(\tau >0\) and \(r>0\). Let \({\mathcal {M}}_{r,p,T}\), \(p\ge 2\), be the family of all random fields \(\{X(t, x), t\ge t_{0}, x\in {\mathbb {R}}\}\) defined on \((\Omega , {\mathcal {F}}, \{{\mathcal {F}}_{t}\}_{t\ge t_{0}}, {\mathbb {P}})\) such that

$$\begin{aligned} \Vert X\Vert _{r,p,T}=:\sup _{s\in [t_{0}, T], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}\left[ |X(s, x)|^{p}\right] <\infty . \end{aligned}$$

By the Borel–Cantelli lemma [3], \({\mathcal {M}}_{r,p,T}\) with the norm \(\Vert \cdot \Vert _{r,p,T}\) is a Banach space for each \(r>0\) and \(p\ge 2\). For the reader’s convenience, if \(p=2\) we abbreviate \({\mathcal {M}}_{r,p,T}\) and \(\Vert X\Vert _{r,p,T}\) as \({\mathcal {M}}_{r,T}\) and \(\Vert X\Vert _{r,T}\). In the sequel, unless otherwise specified, we always assume that \(\xi \in {\mathbb {C}}([-\,\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\) is an \({\mathcal {F}}_{t_{0}}\)-measurable random variable, \(\xi (0, x)\) is an \({\mathcal {F}}_{t_{0}}\)-measurable \({\mathbb {C}}_{r}\)-valued random variable, and r is a positive constant.
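For instance (a simple illustration of the weighted norms; it is not used later), exponentially growing data are admissible in \({\mathbb {C}}_{r}\): for any \(0\le \rho \le r\),

$$\begin{aligned} \psi (x)=e^{\rho |x|} \quad \Longrightarrow \quad \Vert \psi \Vert _{r}=\sup _{x\in {\mathbb {R}}}e^{(\rho -r)|x|}=1<\infty , \end{aligned}$$

whereas \(\Vert \psi \Vert _{r}=\infty \) as soon as \(\rho >r\).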

Definition 2.1

[25] Let \(\textit{J}=[t_{0}-\tau , T]\). A predictable random field u(t, x) defined on \(\textit{J}\times {\mathbb {R}}\) is called a mild solution of (1.3), if for every \(t_{0}\le t\le T\), it satisfies the following stochastic integral equation (SIE)

$$\begin{aligned} \begin{aligned} u(t, x)=&\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y+\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\\&+\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma (s, y, u_{s}(y))W({\text {d}}y,{\text {d}}s), \quad \ a.s. \end{aligned} \end{aligned}$$
(2.1)

where \(g(t, x, y )=(2\pi t)^{-\frac{1}{2}}\exp \{-\frac{(x-y)^{2}}{2t}\}\) is the fundamental solution of the homogeneous Cauchy problem of (1.3).

Remark 2.1

For any \(t\in [t_{0}, T]\), if an \({\mathcal {F}}_{t}\)-measurable functional \(u(t, \cdot )=u(t, \cdot , \omega )\) is a \({\mathbb {C}}_{r}\)-valued stochastic process and satisfies (2.1), we say u(t, x) is a \({\mathbb {C}}_{r}\)-valued solution of (1.3).

Definition 2.2

[25] The solution u(t, x) of (1.3) on \(\textit{J}\times {\mathbb {R}}\) is said to be unique if every other solution \({\bar{u}}(t, x)\) on \(\textit{J}\times {\mathbb {R}}\) is indistinguishable from it, that is,

$$\begin{aligned} P(u(t, x)={\bar{u}}(t, x) \ \mathrm {for} \ \mathrm {all} \ t\in \textit{J} \ \mathrm {and} \ x\in {\mathbb {R}})=1. \end{aligned}$$

Definition 2.3

[27] The sample path \(u(t, x)=u(t, x, \omega )\) of (1.3) is said to explode at \({\bar{t}}(\omega )\) if, for some \(\omega \in \Omega \) with \({\bar{t}}(\omega )<T\), it satisfies

$$\begin{aligned} {\overline{\lim }}_{t\rightarrow {\bar{t}}^{-}}|u(t, x)|=\infty . \end{aligned}$$

3 Main Results

In the sequel, the letter c denotes a positive constant depending on some parameters, whose value may change from line to line, while \(c_{i}\) and \(c'_{i}\) denote fixed positive constants.

For the reader’s convenience, we first collect some fundamental inequalities for the heat kernel g(t, x, y) of (2.1), together with Gronwall’s inequality and a lemma of Kolmogorov type.

Lemma 3.1

[25]

  1. (i)

    For each \(r\in {\mathbb {R}}\) and \(T>0\), there exists a constant c depending only on r, \(t_{0}\) and T such that

    $$\begin{aligned} \int _{{\mathbb {R}}}g(t, x, y )e^{r|y|}{\text {d}}y\le ce^{r|x|}, \quad \forall x\in {\mathbb {R}}, \quad \ \forall t\in [t_{0}, T]. \end{aligned}$$
    (3.1)
  2. (ii)

    If \(0<p<3\), then there exists a positive constant c such that

    $$\begin{aligned} \int _{t_{0}}^{t}\int _{{\mathbb {R}}}|g(t-s, x, y)|^{p}{\text {d}}y{\text {d}}s\le ct^{\frac{3-p}{2}}, \quad \forall x\in {\mathbb {R}}, \quad \forall t\in [t_{0}, T]. \end{aligned}$$
    (3.2)
  3. (iii)

    If \(\frac{3}{2}<p<3\), then there exists a positive constant c such that for all \(t\in [t_{0}, T]\),

    $$\begin{aligned} \int _{t_{0}}^{t}\int _{{\mathbb {R}}}|g(t-s, x, y)-g(t-s, x', y)|^{p}{\text {d}}y{\text {d}}s\le c|x-x'|^{3-p}, \quad \forall x, x'\in {\mathbb {R}}.\nonumber \\ \end{aligned}$$
    (3.3)
  4. (iv)

    If \(1<p<3\), then there exists a positive constant c such that for all t and \(t'\) (\(t_{0}\le t\le t'\le T\)),

    $$\begin{aligned} \int _{t_{0}}^{t}\int _{{\mathbb {R}}}|g(t-s, x, y)-g(t'-s, x, y)|^{p}{\text {d}}y{\text {d}}s\le c|t-t'|^{\frac{3-p}{2}}, \quad \forall x\in {\mathbb {R}},\nonumber \\ \end{aligned}$$
    (3.4)

    and

    $$\begin{aligned} \int _{t}^{t'}\int _{{\mathbb {R}}}|g(t'-s, x, y)|^{p}{\text {d}}y{\text {d}}s\le c|t-t'|^{\frac{3-p}{2}}, \quad \forall x\in {\mathbb {R}}. \end{aligned}$$
    (3.5)
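As a quick check of (3.1) (this is our own computation, and the constant is not claimed to be optimal), for \(r\ge 0\) write \(|y|\le |x|+|y-x|\) and use the Gaussian moment generating function: with \(Z_{t}\sim N(0, t)\),

$$\begin{aligned} \int _{{\mathbb {R}}}g(t, x, y )e^{r|y|}{\text {d}}y\le e^{r|x|}\int _{{\mathbb {R}}}\frac{1}{\sqrt{2\pi t}}e^{-\frac{(x-y)^{2}}{2t}}e^{r|y-x|}{\text {d}}y=e^{r|x|}\,{\mathbb {E}}\big [e^{r|Z_{t}|}\big ]\le 2e^{\frac{r^{2}t}{2}}e^{r|x|}, \end{aligned}$$

so that (3.1) holds with \(c=2e^{\frac{r^{2}T}{2}}\) for \(t\in [t_{0}, T]\); the case \(r<0\) is handled similarly via \(|y|\ge |x|-|x-y|\).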

Lemma 3.2

[3] Let \(\upsilon (t)\) and f(t) be nonnegative, continuous functions defined for \(t_{0}\le t \le T\), and let \(\upsilon _{0}\) denote a nonnegative constant. If

$$\begin{aligned} \upsilon (t)\le \upsilon _{0}+\int _{t_{0}}^{t}f(s)\upsilon (s) \mathrm{d}s \quad for \; all \; t_{0}\le t \le T, \end{aligned}$$

then

$$\begin{aligned} \upsilon (t)\le \upsilon _{0}e^{\int _{t_{0}}^{t}f(s) \mathrm{d}s} \quad for\; all\; t_{0}\le t \le T. \end{aligned}$$

Lemma 3.3

[25] Assume that \(\{u(t, x), t\ge 0, x\in {\mathbb {R}}\}\) is a real-valued random field. If there exist constants \(p>0\), \(\kappa >0\) and \(\delta >2\) such that for any compact subset \({\mathcal {K}}\) of \({\mathbb {R}}\),

$$\begin{aligned} {\mathbb {E}}[|u(t, x)-u(s, y)|^{p}]\le \kappa e^{r|x|}(|t-s|^{\delta }+|x-y|^{\delta }) \end{aligned}$$

for all \(|t-s|\le 1\) and \(x,y\in {\mathcal {K}}\), then \(u(t, \cdot )\) has a continuous \({\mathbb {C}}_{r}\)-valued version.

In what follows, we begin to state our main results. We firstly show that the Lipschitz condition and the linear growth condition can guarantee the existence and uniqueness of mild solutions to (1.3).

Theorem 3.1

Suppose that b and \(\sigma \) satisfy the global Lipschitz condition and the linear growth condition on \([t_{0}, T]\times {\mathbb {R}}\times {\mathbb {C}}([-\,\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\), that is, there exist two positive constants K and \({\bar{K}}\) such that for all \(\varphi , \psi \in {\mathbb {C}}([-\,\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\), \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\),

$$\begin{aligned} |b(t, x, \varphi )-b(t, x, \psi )|^{2}+|\sigma (t, x, \varphi )-\sigma (t, x, \psi )|^{2}\le&K \Vert \varphi -\psi \Vert ^{2}, \end{aligned}$$
(3.6)
$$\begin{aligned} |b(t, x, \varphi )|^{2}+|\sigma (t, x, \varphi )|^{2}\le&{\bar{K}}\left( 1+\Vert \varphi \Vert ^{2}\right) . \end{aligned}$$
(3.7)

Suppose further that the initial value \(\xi \) satisfies \({\mathbb {E}}\Vert \xi \Vert ^{2}<\infty \). Then, there exists a unique continuous \({\mathbb {C}}_{r}\)-valued solution \(u(t, \cdot )\) to (1.3). Moreover, the solution u(t, x) belongs to \({\mathcal {M}}_{r, T}\).

To show this theorem, we shall prepare the following lemma.

Lemma 3.4

Assume that the linear growth condition (3.7) holds. If u(t, x) is a solution to Eq. (1.3) with initial value \(\xi \) satisfying \({\mathbb {E}}\Vert \xi \Vert ^{2}<\infty \), then

$$\begin{aligned} \begin{aligned} \Vert u\Vert _{r, T}\le \Big (1+c_{3}{\mathbb {E}}\Vert \xi \Vert ^{2}\Big )e^{c_{1}(T-t_0)+2c_{2}\sqrt{T-t_{0}}}, \end{aligned} \end{aligned}$$
(3.8)

where \(c_{1}, c_{2}, c_{3}\) are constants specified in the following proof. In particular, u(t, x) belongs to \({\mathcal {M}}_{r, T}\).

Proof

From (2.1), for \(t_{0}\le t\le T\) we have

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[|u(t, x)|^{2}]&\le 3{\mathbb {E}}\Big [\Big |\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y\Big |^{2}\Big ]\\&\quad +\,3{\mathbb {E}}\Big [\Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\Big |^{2}\Big ]\\&\quad +\,3{\mathbb {E}}\Big [\Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma (s, y, u_{s}(y))W({\text {d}}y,{\text {d}}s)\Big |^{2}\Big ]\\&=:\Phi _{0}(t, x)+\Phi _{1}(t, x)+\Phi _{2}(t, x). \end{aligned} \end{aligned}$$
(3.9)

By Lemma 3.1, we have

$$\begin{aligned} \begin{aligned} \Phi _{0}(t, x)&\le 3{\mathbb {E}}\left[ \int _{{\mathbb {R}}}g(t, x, y )|\xi (0, y)|{\text {d}}y\right] ^2\\&\le 3{\mathbb {E}}\left[ \int _{{\mathbb {R}}}g(t, x, y )e^{\frac{1}{2}r|y|}\left[ \sup _{y\in {\mathbb {R}}}e^{-\frac{1}{2}r|y|}|\xi (0, y)|\right] {\text {d}}y\right] ^2\\&\le c e^{r|x|}{\mathbb {E}}\left[ \sup _{y\in {\mathbb {R}}}e^{-r|y|}|\xi (0, y)|^{2}\right] \le c_{0}e^{r|x|}{\mathbb {E}}\Vert \xi \Vert ^{2}. \end{aligned} \end{aligned}$$
(3.10)

Note that

$$\begin{aligned} \sup _{t\in [t_{0}-\tau , T], x\in {\mathbb {R}}}e^{-r|x|}|u(t, x)|^{2}\le \Vert \xi \Vert ^{2}+\sup _{t\in [t_{0}, T], x\in {\mathbb {R}}}e^{-r|x|}|u(t, x)|^{2}, \end{aligned}$$

then, combining Lemma 3.1 with Hölder’s and Burkholder’s inequalities, we obtain

$$\begin{aligned} \begin{aligned} \Phi _{1}(t, x)&\le c\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y){\text {d}}y{\text {d}}s\right] \\&\quad {\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b^{2}(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right] \\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}} g(t-s, x, y)(1+{\mathbb {E}}\Vert u_{s}\Vert ^{2}){\text {d}}y{\text {d}}s\\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}} g(t-s, x, y)e^{r|y|}\left[ \sup _{y\in {\mathbb {R}}}e^{-r|y|}(1+{\mathbb {E}}\Vert u_{s}\Vert ^{2})\right] {\text {d}}y{\text {d}}s\\&\le c_{1}e^{r|x|}\int _{t_{0}}^{t}(1+{\mathbb {E}}\Vert \xi \Vert ^{2}+\Vert u\Vert _{r, s}){\text {d}}s, \end{aligned} \end{aligned}$$
(3.11)

and

$$\begin{aligned} \begin{aligned} \Phi _{2}(t, x)&\le c{\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g^{2}(t-s, x, y)\sigma ^{2}(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right] \\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}\frac{1}{\sqrt{t-s}}g(t-s, x, y)(1+{\mathbb {E}}\Vert u_{s}\Vert ^{2}){\text {d}}y{\text {d}}s\\&\le c_{2}e^{r|x|}\int _{t_{0}}^{t}\frac{1}{\sqrt{t-s}}(1+{\mathbb {E}}\Vert \xi \Vert ^{2}+\Vert u\Vert _{r, s}){\text {d}}s. \end{aligned} \end{aligned}$$
(3.12)

From (3.9)–(3.12), it follows that for \(t_{0}\le t\le T\),

$$\begin{aligned} \begin{aligned} e^{-r|x|}{\mathbb {E}}[|u(t, x)|^{2}]&\le c_{0} {\mathbb {E}}\Vert \xi \Vert ^{2}+\int _{t_{0}}^{t}\left( c_1+\frac{c_2}{\sqrt{t-s}}\right) (1+{\mathbb {E}}\Vert \xi \Vert ^{2}+\Vert u\Vert _{r, s}){\text {d}}s, \end{aligned} \end{aligned}$$

which implies

$$\begin{aligned} \begin{aligned} \Vert u\Vert _{r, t}\le c_{0}{\mathbb {E}}\Vert \xi \Vert ^{2}+\int _{t_{0}}^{t}\left( c_1+\frac{c_2}{\sqrt{t-s}}\right) (1+{\mathbb {E}}\Vert \xi \Vert ^{2}+\Vert u\Vert _{r, s}){\text {d}}s. \end{aligned} \end{aligned}$$

Now, Gronwall’s inequality [3] yields that

$$\begin{aligned} \begin{aligned} 1+{\mathbb {E}}\Vert \xi \Vert ^{2}+\Vert u\Vert _{r, t}\le \Big (1+(1+c_{0}){\mathbb {E}}\Vert \xi \Vert ^{2}\Big )e^{c_{1}(t-t_0)+2c_{2}\sqrt{t-t_{0}}}. \end{aligned} \end{aligned}$$

Consequently,

$$\begin{aligned} \begin{aligned} \Vert u\Vert _{r, T}\le (1+c_{3}{\mathbb {E}}\Vert \xi \Vert ^{2})e^{c_{1}(T-t_0)+2c_{2}\sqrt{T-t_{0}}}, \end{aligned} \end{aligned}$$

where \(c_{3}=: 1+c_{0}\). In particular, u(t, x) is an element of \({\mathcal {M}}_{r, T}\). \(\square \)

Remark 3.1

Obviously, it is not difficult to show the boundedness of \(\Vert u\Vert _{r,p,T}\)\((p>2)\) by the same methods as those of Lemma 3.4, which implies \(u(t,x)\in {\mathcal {M}}_{r,p,T}\).

Proof of Theorem 3.1

Continuity. Assume that u(t, x) is a solution of (1.3); then u(t, x) satisfies the following SIE

$$\begin{aligned} \begin{aligned} u(t, x)&=\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y+\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma (s, y, u_{s}(y))W({\text {d}}y,{\text {d}}s),\\&=:\Psi _{1}(t, x)+\Psi _{2}(t, x)+\Psi _{3}(t, x). \end{aligned} \end{aligned}$$

It is easy to prove that \(\Psi _{1}(t, \cdot )\) and \(\Psi _{2}(t, \cdot )\) are continuous \({\mathbb {C}}_{r}\)-valued stochastic processes by the regularity of the heat kernel g(t, x, y) and the assumptions on b and \(\xi (0, y)\). Therefore, it is sufficient to show that \(\Psi _{3}(t, \cdot )\) is a continuous \({\mathbb {C}}_{r}\)-valued stochastic process. By Lemma 3.1 and Remark 3.1, we see that for any compact subset \({\mathcal {K}}\) of \({\mathbb {R}}\) and some constant \(p>12\) (chosen so that the Hölder exponents \(\frac{p-4}{2}\) and \(\frac{p-4}{4}\) obtained below both exceed 2, as required by Lemma 3.3),

$$\begin{aligned}&{\mathbb {E}}[|\Psi _{3}(t,x)-\Psi _{3}(t,x')|^{p}]\nonumber \\&\quad \le c{\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}(g(t-s, x, y)-g(t-s, x', y))^{2}\sigma ^{2}(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right] ^{\frac{p}{2}}\nonumber \\&\quad \le c|x-x'|^{\frac{p-4}{2}}\int _{t_{0}}^{t}\int _{{\mathbb {R}}}|g(t-s, x, y)-g(t-s, x', y)|{\mathbb {E}}\left[ {\bar{K}}(1+\Vert u_{s}\Vert ^{2})\right] ^{\frac{p}{2}}{\text {d}}y{\text {d}}s\nonumber \\&\quad \le c(e^{r|x|}+e^{r|x'|})|x-x'|^{\frac{p-4}{2}}\int _{t_{0}}^{t}(1+{\mathbb {E}}\Vert \xi \Vert ^{p}+\Vert u\Vert _{r,p,s}){\text {d}}s\nonumber \\&\quad \le c_{4} e^{r|x|}|x-x'|^{\frac{p-4}{2}}\nonumber \end{aligned}$$
(3.13)

and

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[|\Psi _{3}(t',x')-\Psi _{3}(t,x')|^{p}]\\&\quad \le c{\mathbb {E}}\left[ \int _{t}^{t'}\int _{{\mathbb {R}}}g^{2}(t'-s, x', y)\sigma ^{2}(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right] ^{\frac{p}{2}}\\&\qquad +\,c{\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}(g(t'-s, x', y)-g(t-s, x', y))^{2}\sigma ^{2}(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right] ^{\frac{p}{2}}\\&\quad \le ce^{r|x|}|t'-t|^{\frac{p-4}{4}}\int _{t}^{t'}(1+{\mathbb {E}}\Vert \xi \Vert ^{p}+\Vert u\Vert _{r,p,s}){\text {d}}s\\&\qquad +\,ce^{r|x|}|t'-t|^{\frac{p-4}{4}}\int _{t_{0}}^{t}(1+{\mathbb {E}}\Vert \xi \Vert ^{p}+\Vert u\Vert _{r,p,s}){\text {d}}s\\&\quad \le c_{5}e^{r|x|}|t'-t|^{\frac{p-4}{4}} \end{aligned} \end{aligned}$$
(3.14)

for all \(t, t'\) with \(t_{0}\le t< t'\le T\), \(t'-t\le 1\), \(x, x'\in {\mathcal {K}}\). Combining (3.13) with (3.14), we have

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[|\Psi _{3}(t',x')-\Psi _{3}(t,x)|^{p}]\\&\quad \le 2^{p-1}{\mathbb {E}}[|\Psi _{3}(t',x')-\Psi _{3}(t,x')|^{p}]+2^{p-1}{\mathbb {E}}[|\Psi _{3}(t,x)-\Psi _{3}(t,x')|^{p}]\\&\quad \le ce^{r|x|}\Big (|x-x'|^{\frac{p-4}{2}}+|t'-t|^{\frac{p-4}{4}}\Big ) \end{aligned} \end{aligned}$$

for all \(t'-t\le 1\), \(x, x'\in {\mathcal {K}}\). This, together with Lemma 3.3, implies that \(u(t, \cdot )\) has a continuous \({\mathbb {C}}_{r}\)-valued version.

Uniqueness Assume that u(t, x) and \({\bar{u}}(t, x)\) are two solutions of (1.3). By Lemma 3.4, both of them belong to \({\mathcal {M}}_{r, T}\), and then for any \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\) we have

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[|u(t, x)-{\bar{u}}(t, x)|^{2}]\\&\quad \le 2{\mathbb {E}}\left[ \Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)(b(s, y, u_{s}(y))-b(s, y,{\bar{u}}_{s}(y))){\text {d}}y{\text {d}}s\Big |^{2}\right] \\&\qquad +\,2{\mathbb {E}}\left[ \Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)(\sigma (s, y, u_{s}(y))-\sigma (s, y,{\bar{u}}_{s}(y)))W({\text {d}}y,{\text {d}}s)\Big |^{2}\right] \\&\quad = I_{1}(t, x)+I_{2}(t, x). \end{aligned} \end{aligned}$$
(3.15)

By (3.1), (3.3) and (3.6), applying Hölder’s inequality and Theorem 1.7.1 in [13] we have

$$\begin{aligned} I_{1}(t, x)\le & {} c \left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y){\text {d}}y{\text {d}}s\right] \nonumber \\&{\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)|b(s, y, u_{s}(y))-b(s, y, {\bar{u}}_{s}(y))|^{2}{\text {d}}y{\text {d}}s\right] \nonumber \\\le & {} c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y){\mathbb {E}}\Vert u_{s}-{\bar{u}}_{s}\Vert ^{2}{\text {d}}y{\text {d}}s\\\le & {} c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)e^{r|y|}\left( \sup _{t_{0}\le l\le s, y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}y{\text {d}}s\nonumber \\\le & {} c_{6} e^{r|x|}\int _{t_{0}}^{t}\left[ \sup _{t_{0}\le l\le s, y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right] {\text {d}}s,\nonumber \end{aligned}$$
(3.16)

and

$$\begin{aligned} \begin{aligned} I_{2}(t, x)&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}\frac{1}{\sqrt{t-s}}g(t-s, x, y){\mathbb {E}}\Vert u_{s}-{\bar{u}}_{s}\Vert ^{2}{\text {d}}y{\text {d}}s\\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}\frac{1}{\sqrt{t-s}}g(t-s, x, y)e^{r|y|}\\&\quad \left( \sup _{t_{0}\le l\le s, y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}y{\text {d}}s\\&\le c_{7}e^{r|x|}\int _{t_{0}}^{t}\frac{1}{\sqrt{t-s}}\left[ \sup _{t_{0}\le l\le s, y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right] {\text {d}}s. \end{aligned} \end{aligned}$$
(3.17)

Combining (3.15), (3.16) and (3.17), we deduce that

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[|u(t, x)-{\bar{u}}(t, x)|^{2}]&\le e^{r|x|}\int _{t_{0}}^{t}\left( c_{6}+\frac{c_{7}}{\sqrt{t-s}}\right) \\&\quad \left[ \sup _{t_{0}\le l\le s, y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right] {\text {d}}s. \end{aligned} \end{aligned}$$

Therefore, we have

$$\begin{aligned} \begin{aligned} \Vert u-{\bar{u}}\Vert _{r,t}\le&\int _{t_{0}}^{t}\left( c_{6}+\frac{c_{7}}{\sqrt{t-s}}\right) \Vert u-{\bar{u}}\Vert _{r,s}{\text {d}}s. \end{aligned} \end{aligned}$$

From Lemma 3.2, it follows that

$$\begin{aligned} \begin{aligned} \Vert u-{\bar{u}}\Vert _{r,t}= 0, \ \ t\in [t_{0}, T], \end{aligned} \end{aligned}$$

which implies \(u(t, x)={\bar{u}}(t, x)\) for all \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\), a.s.

Existence We utilize the following successive approximation scheme. Define \(u^{0}(t, x)=\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y, u_{t_{0}}^{0}(t, x)=\int _{{\mathbb {R}}}g(t, x, y )\xi (\theta , y){\text {d}}y\). For \(n=1, 2, \ldots \), we set \(u_{t_{0}}^{n}(t, x)=\int _{{\mathbb {R}}}g(t, x, y )\xi (\theta , y){\text {d}}y\) and define

$$\begin{aligned} \begin{aligned} u^{n}(t, x)&= \int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y+\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b(s, y, u^{n-1}_{s}(y)){\text {d}}y{\text {d}}s\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma (s, y, u^{n-1}_{s}(y))W({\text {d}}y,{\text {d}}s). \end{aligned} \end{aligned}$$
(3.18)

Now, we show that \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is uniformly bounded in \({\mathcal {M}}_{r, T}\) by induction. Since \({\mathbb {E}}\Vert \xi \Vert ^{2}<\infty \), by the same technique as the proof of Lemma 3.4, for \(t\in [t_{0}, T]\) we can obtain

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[|u^{1}(t, x)|^{2}]&\le c_{0} e^{r|x|}{\mathbb {E}}\Vert \xi \Vert ^{2}+ e^{r|x|}\int _{t_{0}}^{t}\left( c_{1}+\frac{c_{2}}{\sqrt{t-s}}\right) (1+{\mathbb {E}}\Vert \xi \Vert ^{2}){\text {d}}s\\&\le c_{0} e^{r|x|}{\mathbb {E}}\Vert \xi \Vert ^{2}+\left( c_{1}(t-t_{0})+2c_{2}\sqrt{t-t_{0}}\right) (1+{\mathbb {E}}\Vert \xi \Vert ^{2})e^{r|x|}. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} \Vert u^{1}\Vert _{r, t}\le c_{0}{\mathbb {E}}\Vert \xi \Vert ^{2}+\left( c_{1}(t-t_{0})+2c_{2}\sqrt{t-t_{0}}\right) (1+{\mathbb {E}}\Vert \xi \Vert ^{2}), \end{aligned} \end{aligned}$$

for \(t\in [t_{0}, T]\), which implies \(\Vert u^{1}\Vert _{r, T}< +\infty \). Therefore, we have \(u^{1}(t, x)\in {\mathcal {M}}_{r, T}\). Let us assume \(u^{n}(t, x)\in {\mathcal {M}}_{r, T}\) for some n; then, there exists a constant \(c'\) which satisfies \(\Vert u^{n}\Vert _{r, t}\le c'\) for \(t\in [t_{0}, T]\). From (3.18), it follows that

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[|u^{n+1}(t, x)|^{2}]&\le c_{0} e^{r|x|}{\mathbb {E}}\Vert \xi \Vert ^{2}+ e^{r|x|}\int _{t_{0}}^{t}\left( c_{1}+\frac{c_{2}}{\sqrt{t-s}}\right) (1+{\mathbb {E}}\Vert u_{s}^{n}\Vert ^{2}){\text {d}}s\\&\le c_{0} e^{r|x|}{\mathbb {E}}\Vert \xi \Vert ^{2}+\left( c_{1}(t-t_{0})+2c_{2}\sqrt{t-t_{0}}\right) (1+c'+{\mathbb {E}}\Vert \xi \Vert ^{2})e^{r|x|}, \end{aligned} \end{aligned}$$

and hence that

$$\begin{aligned} \begin{aligned} \Vert u^{n+1}\Vert _{r, t}\le c_{0} {\mathbb {E}}\Vert \xi \Vert ^{2}+\left( c_{1}(t-t_{0}) +2c_{2}\sqrt{t-t_{0}}\right) (1+c'+{\mathbb {E}}\Vert \xi \Vert ^{2}) \end{aligned} \end{aligned}$$

for \(t\in [t_{0}, T]\), which means that \(u^{n+1}(t, x)\in {\mathcal {M}}_{r, T}\). Therefore, \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is uniformly bounded in \({\mathcal {M}}_{r, T}\).

For any two nonnegative integers m and n, using similar methods to those above, we obtain, for all \(t\in [t_{0}, T]\),

$$\begin{aligned} \begin{aligned}&\sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{n}(l, x)-u^{m}(l, x)|^{2}\\&\quad \le \int _{t_{0}}^{t}\left( c_{6}+\frac{c_{7}}{\sqrt{t-s}}\right) \left[ \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{n-1}(l, y)-u^{m-1}(l, y)|^{2}\right] {\text {d}}s. \end{aligned} \end{aligned}$$
(3.19)

Since \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is uniformly bounded in \({\mathcal {M}}_{r, T}\), we have

$$\begin{aligned} \sup _{m,n}\left[ \sup _{t\in [t_{0}, T], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}\left| u^{n}(t, x)-u^{m}(t, x)\right| ^{2}\right] <\infty . \end{aligned}$$

Applying Fatou’s lemma to (3.19), we get

$$\begin{aligned} \begin{aligned}&\lim _{m,n\rightarrow \infty }\left[ \sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{n}(l, x)-u^{m}(l, x)|^{2}\right] \\&\quad \le \int _{t_{0}}^{t}\left( c_{6}+\frac{c_{7}}{\sqrt{t-s}}\right) \lim _{m,n\rightarrow \infty }\left[ \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{n-1}(l, y)-u^{m-1}(l, y)|^{2}\right] {\text {d}}s. \end{aligned} \end{aligned}$$

It is not difficult to see that \(\lim _{m,n\rightarrow \infty }\left[ \sup _{s\in [t_{0}, t], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{n}(s, y)-u^{m}(s, y)|^{2}\right] \) is nonnegative and monotonic in \(t\in [t_{0}, T]\). Using Lemma 3.2, we deduce that

$$\begin{aligned} \lim _{m,n\rightarrow \infty }\Vert u^{n}-u^{m}\Vert _{r,t}= 0, \quad t\in [t_{0}, T], \end{aligned}$$

which means that \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is a Cauchy sequence in the Banach space \({\mathcal {M}}_{r,T}\). Let u(t, x) denote its limit. Passing to the limit in the definition (3.18) of \(u^{n}(t,x)\), we see that \(u(t,x)\), \(t\in [t_{0}, T]\), \(x\in {\mathbb {R}}\), satisfies (2.1) a.s., which implies that u(t, x) is the solution of (1.3). Therefore, the proof of Theorem 3.1 is completed. \(\square \)

Now, we give an existence–uniqueness theorem for (1.3) under the global Lipschitz condition without the linear growth condition, which will be a generalization of Theorem 3.1.

Theorem 3.2

Suppose that for every \(a\in [t_{0}, T)\) and \(t\in [t_{0}, a]\),

$$\begin{aligned} b(t, \cdot , 0)\in {\mathbb {C}}_{r}\ and \ \sigma (t, \cdot , 0)\in {\mathbb {C}}_{r}. \end{aligned}$$
(3.20)

Suppose further that b and \(\sigma \) satisfy the Lipschitz condition (3.6). Then, there exists a unique continuous \({\mathbb {C}}_{r}\)-valued solution \(u(t, \cdot )\) to (1.3) on \([t_{0}-\tau , a]\).

Remark 3.2

From (3.6), it follows that

$$\begin{aligned} \begin{aligned} |b(t, x, u_{t}(x))|^{2}&\le 2|b(t, x, 0)|^{2}+2|b(t, x, u_{t}(x))-b(t, x, 0)|^{2}\\&\le 2|b(t, x, 0)|^{2}+2K\Vert u_{t}\Vert ^{2}, \end{aligned} \end{aligned}$$
(3.21)
$$\begin{aligned} \begin{aligned} |\sigma (t, x, u_{t}(x))|^{2}&\le 2|\sigma (t, x, 0)|^{2}+2|\sigma (t, x, u_{t}(x))-\sigma (t, x, 0)|^{2}\\&\le 2|\sigma (t, x, 0)|^{2}+2K\Vert u_{t}\Vert ^{2}, \end{aligned} \end{aligned}$$
(3.22)

where \(t\in [t_{0}, a]\) and \(x\in {\mathbb {R}}\). Combining (3.21), (3.22) and the conditions of Theorem 3.2, it is not difficult to show the validity of Theorem 3.2 by the same method as that of Theorem 3.1. So we omit the details.

It is well known that the global Lipschitz condition is somewhat restrictive, so a natural question is whether the conditions of Theorem 3.2 can be relaxed further. Does a local solution exist, and, if so, what is the maximal existence interval of the local solution? The following theorem addresses these questions.

Theorem 3.3

Assume that the condition (3.20) holds and the initial value \(\xi \) satisfies \({\mathbb {E}}\Vert \xi \Vert ^{2}<\infty \). If b and \(\sigma \) satisfy the local Lipschitz condition on \([t_{0}, T)\times {\mathbb {R}}\times {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\), that is, for every integer \(n\ge 1\), there exists a positive constant \(K_{n}\) such that

$$\begin{aligned} |b(t, x, \varphi )-b(t, x, \psi )|^{2}+|\sigma (t, x, \varphi )-\sigma (t, x, \psi )|^{2}\le K_{n}\Vert \varphi -\psi \Vert ^{2}, \end{aligned}$$

for all \(t\in [t_{0}, T)\), \(x\in {\mathbb {R}}\) and \(\varphi , \psi \in {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\) satisfying \(\Vert \varphi \Vert \vee \Vert \psi \Vert \le n\). Then, there exists a stopping time \(\eta =\eta (\omega )\in (t_{0}, a]\) such that (1.3) has a unique solution u(t, x) for \(t\in [t_{0}-\tau , \eta )\), and the solution u(t, x) explodes at \(\eta \) if \(\eta (\omega )<a\). Otherwise, the solution u(t, x) exists globally on \([t_{0}-\tau , T)\).

Proof

We shall utilize a truncation procedure to prove the theorem. For every integer \(n\ge 1\), we define truncation functions \(b_{n}(t, x, u_{t}(x))\) and \(\sigma _{n}(t, x, u_{t}(x))\) as follows,

$$\begin{aligned} b_{n}(t, x, u_{t}(x))=b\Big (t, x, \frac{n\wedge \Vert u_{t}\Vert }{\Vert u_{t}\Vert }u_{t}(x)\Big ), \quad \sigma _{n}(t, x, u_{t}(x))=\sigma \Big (t, x, \frac{n\wedge \Vert u_{t}\Vert }{\Vert u_{t}\Vert }u_{t}(x)\Big ), \end{aligned}$$
(3.23)

where we set \(\frac{\Vert u_{t}\Vert }{\Vert u_{t}\Vert }=1\) if \(u_{t}\equiv 0\). Then, for any \(a\in [t_{0}, T)\), \(b_{n}\) and \(\sigma _{n}\) satisfy the global Lipschitz condition (3.6) on \([t_{0}, a]\times {\mathbb {R}}\times {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\). Therefore, by Theorem 3.2, there exists a unique solution \(u_{n}(t, x)\) in \({\mathcal {M}}_{r,a}\) to the following equation

$$\begin{aligned} \begin{aligned} u_{n}(t, x)&=\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y+\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b_{n}(s, y, (u_{n})_{s}(y)){\text {d}}y{\text {d}}s\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma _{n}(s, y, (u_{n})_{s}(y))W({\text {d}}y,{\text {d}}s) \end{aligned} \end{aligned}$$
(3.24)

with the initial condition

$$\begin{aligned} \begin{aligned} (u_{n})_{t_{0}}= \left\{ \begin{array}{ll} \xi (\theta , x), &{}\quad \hbox { if}\; \Vert \xi \Vert \le n,\\ 0, &{}\quad \hbox { if}\; \Vert \xi \Vert > n, \end{array} \right. \end{aligned} \end{aligned}$$
(3.25)

where \(-\,\tau \le \theta \le 0\), \(x\in {\mathbb {R}}\). Define the following sequence of stopping times

$$\begin{aligned} \eta _{n}=a\wedge \inf \{t\in (t_{0}, a]: |u_{n}(t, x)|\ge n \ \mathrm {for}\; \ \mathrm {some}\; \ x\in {\mathbb {R}}\}, \end{aligned}$$

where we set \(\inf \emptyset =\infty \) as usual. From (3.23) and (3.25), it follows that

$$\begin{aligned} \begin{aligned}&b_{n+1}(s, y, (u_{n})_{s}(y))=b_{n}(s, y, (u_{n})_{s}(y))=b(s, y, (u_{n})_{s}(y)),\\&\sigma _{n+1}(s, y, (u_{n})_{s}(y))=\sigma _{n}(s, y, (u_{n})_{s}(y))=\sigma (s, y, (u_{n})_{s}(y)), \end{aligned} \end{aligned}$$
(3.26)

for all \(t\in [t_{0}, \eta _{n}]\). Thus, (3.24) and the following equation

$$\begin{aligned} \begin{aligned} u_{n+1}(t, x)&=\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b_{n+1}(s, y, (u_{n+1})_{s}(y)){\text {d}}y{\text {d}}s\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma _{n+1}(s, y, (u_{n+1})_{s}(y))W({\text {d}}y,{\text {d}}s) \end{aligned} \end{aligned}$$

have the same coefficients for \(t\in [t_{0}, \eta _{n}]\); moreover, their initial conditions coincide on \(D=\{u_{t}(x)\in {\mathbb {C}}([-\,\tau , 0]\times {\mathbb {R}}; {\mathbb {R}}): \Vert u_{t}\Vert \le n\}\). Therefore, we obtain

$$\begin{aligned} u_{n+1}(t, x)=u_{n}(t, x), \quad t\in [t_{0}-\tau , \eta _{n}], \ a.s. \end{aligned}$$

This further shows that \(\eta _{n}\) is increasing in n. Thus, we can define \(\eta (\omega )=\lim _{n\rightarrow \infty }\eta _{n}\). By utilizing the arguments of Xu [28], we can deduce that \(u(t,x)= \lim _{n\rightarrow +\infty }u_{n}(t, x)\) satisfies the integral equation (2.1) for \(t\in [t_{0}, \eta )\). That is to say, (1.3) has a unique solution u(t, x) for \(t\in [t_{0}-\tau , \eta )\), and the solution u(t, x) explodes at \(\eta (\omega )\) if \(\eta (\omega )<a\). Otherwise, from the arbitrariness of a, the solution u(t, x) exists on \([t_{0}-\tau , T)\). Thus, we complete the proof of Theorem 3.3. \(\square \)
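For instance (an illustration of ours, not taken from [15, 25]), Theorem 3.3 covers quadratic nonlinearities such as

$$\begin{aligned} b(t, x, \varphi )=e^{-r|x|}\varphi ^{2}(0, x), \qquad \sigma (t, x, \varphi )\equiv 0. \end{aligned}$$

Indeed, whenever \(\Vert \varphi \Vert \vee \Vert \psi \Vert \le n\),

$$\begin{aligned} |b(t, x, \varphi )-b(t, x, \psi )|\le \Big (e^{-\frac{1}{2}r|x|}|\varphi (0, x)-\psi (0, x)|\Big )\Big (e^{-\frac{1}{2}r|x|}\big (|\varphi (0, x)|+|\psi (0, x)|\big )\Big )\le 2n\Vert \varphi -\psi \Vert , \end{aligned}$$

so the local Lipschitz condition holds with \(K_{n}=4n^{2}\), and (3.20) holds since \(b(t, \cdot , 0)\equiv 0\in {\mathbb {C}}_{r}\), while neither the global Lipschitz condition (3.6) nor the linear growth condition (3.7) is satisfied.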

In the following, we consider the existence and uniqueness of mild solutions to (1.3) under conditions weaker than the Lipschitz condition.

Theorem 3.4

Assume that the linear growth condition (3.7) holds. If there exist a strictly positive, nondecreasing and continuous function \(\lambda (t)\) defined on \([t_{0}, T]\) and a nondecreasing concave function \(\phi (u)\) defined on \({\mathbb {R}}_{+}\) satisfying \(\phi (0)=0, \phi (u)>0\) for all \(u>0\), and \(\int _{0^{+}}\frac{1}{\phi (u)}du=\infty \), such that for all \(\varphi , \psi \in {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}}), t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\),

$$\begin{aligned} |b(t, x, \varphi )-b(t, x, \psi )|^{2}+|\sigma (t, x, \varphi )-\sigma (t, x, \psi )|^{2}\le \lambda (t)\phi (\Vert \varphi -\psi \Vert ^{2}). \end{aligned}$$
(3.27)

Then, there exists a unique solution u(t, x) to (1.3). Moreover, the solution belongs to \({\mathcal {M}}_{r, T}\).

To prove this theorem, for the reader’s convenience, we first provide the following corollary of Bihari’s lemma.

Lemma 3.5

[25] Assume that the functions \(\lambda (t)\) and \(\phi (u)\) satisfy the conditions of Theorem 3.4. If there exists a nonnegative measurable function z(t) satisfying \(z(t_{0})=0\) such that for some \(\alpha \in (0,\frac{1}{2}]\),

$$\begin{aligned} z(t)\le \int _{t_{0}}^{t}\frac{\lambda (s)\phi (z(s))}{(t-s)^{\alpha }}{\text {d}}s, \quad \forall t\in [t_{0}, T], \end{aligned}$$

then \(z(t)\equiv 0\) on \([t_{0}, T]\).

Proof of Theorem 3.4

Uniqueness We suppose that u(t, x) and \({\bar{u}}(t, x)\) are two solutions of (1.3). From Lemma 3.4, it follows that both of them belong to \({\mathcal {M}}_{r, T}\). Moreover,

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[|u(t, x)-{\bar{u}}(t, x)|^{2}]\\&\quad \le 2{\mathbb {E}}\left[ \Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)(b(s, y, u_{s}(y))-b(s, y, {\bar{u}}_{s}(y))){\text {d}}y{\text {d}}s\Big |^{2}\right] \\&\qquad +\,2{\mathbb {E}}\left[ \Big |\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)(\sigma (s, y, u_{s}(y))-\sigma (s, y, {\bar{u}}_{s}(y)))W({\text {d}}y,{\text {d}}s)\Big |^{2}\right] \\&\quad = U_{1}(t, x)+U_{2}(t, x). \end{aligned}\nonumber \\ \end{aligned}$$
(3.28)

By (3.1), (3.3) and (3.27), using Hölder’s inequality, we have

$$\begin{aligned} \begin{aligned} U_{1}(t, x)&\le c\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y){\text {d}}y{\text {d}}s\right] \\&\qquad \left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y){\mathbb {E}}\left[ \lambda (s)\phi \left( \Vert u_{s}-{\bar{u}}_{s}\Vert ^{2}\right) \right] {\text {d}}y{\text {d}}s\right] \\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)e^{r|y|}\lambda (s)\phi \\&\qquad \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}y{\text {d}}s\\&\le ce^{r|x|}\int _{t_{0}}^{t}\lambda (s)\phi \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}s\\&\le c_{8}e^{r|x|}\int _{t_{0}}^{t}\frac{\lambda (s)}{\sqrt{t-s}}\phi \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}s \end{aligned} \end{aligned}$$
(3.29)

and

$$\begin{aligned} \begin{aligned} U_{2}(t, x)&\le c{\mathbb {E}}\left[ \int _{t_{0}}^{t}\int _{{\mathbb {R}}}g^{2}(t-s, x, y)(\sigma (s, y, u_{s}(y))-\sigma (s, y, {\bar{u}}_{s}(y)))^{2}{\text {d}}y{\text {d}}s\right] \\&\le c\int _{t_{0}}^{t}\int _{{\mathbb {R}}}\frac{1}{\sqrt{t-s}}g(t-s, x, y){\mathbb {E}}\left[ \lambda (s)\phi \left( \Vert u_{s}-{\bar{u}}_{s}\Vert ^{2}\right) \right] {\text {d}}y{\text {d}}s\\&\le c_{9}e^{r|x|}\int _{t_{0}}^{t}\frac{\lambda (s)}{\sqrt{t-s}}\phi \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}s. \end{aligned} \end{aligned}$$
(3.30)

From (3.28), (3.29) and (3.30), it follows that

$$\begin{aligned} \begin{aligned}&\sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u(l, x)-{\bar{u}}(l, x)|^{2}\\&\quad \le \int _{t_{0}}^{t}\frac{(c_{8}+c_{9})\lambda (s)}{\sqrt{t-s}}\phi \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u(l, y)-{\bar{u}}(l, y)|^{2}\right) {\text {d}}s. \end{aligned} \end{aligned}$$

By Lemma 3.5, we have

$$\begin{aligned} \begin{aligned} \Vert u-{\bar{u}}\Vert _{r,t}= 0, \quad t\in [t_{0}, T], \end{aligned} \end{aligned}$$

which implies \(u(t, x)={\bar{u}}(t, x)\) for all \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\), a.s.

Existence Similar to the proof of Theorem 3.1, we introduce the same successive approximation sequence as in (3.18). By the same arguments, we can easily deduce that \(u^{n}(t, x)\in {\mathcal {M}}_{r, T}\) and \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is uniformly bounded in \({\mathcal {M}}_{r, T}\). Therefore, for any two nonnegative integers m and n and any \(t\in [t_{0}, T]\), we can obtain

$$\begin{aligned} \begin{aligned}&\sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}[|u^{n}(l, x)-u^{m}(l, x)|^{2}]\\&\quad \le c \int _{t_{0}}^{t}\frac{\lambda (s)}{\sqrt{t-s}}\phi \left( \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{n-1}(l, y)-u^{m-1}(l, y)|^{2}\right) {\text {d}}s. \end{aligned} \end{aligned}$$
(3.31)

Since \(\{u^{n}(t, x), n=0, 1, 2, \ldots \}\) is uniformly bounded in \({\mathcal {M}}_{r, T}\), we have

$$\begin{aligned} \sup _{m,n}\left[ \sup _{t\in [t_{0}, T], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{n}(t, x)-u^{m}(t, x)|^{2}\right] <\infty . \end{aligned}$$

Applying Fatou’s lemma to (3.31), we get

$$\begin{aligned} \begin{aligned}&\lim _{m,n\rightarrow \infty }\left[ \sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{n}(l, x)-u^{m}(l, x)|^{2}\right] \\&\quad \le c\int _{t_{0}}^{t}\frac{\lambda (s)}{\sqrt{t-s}}\phi \left( \lim _{m,n\rightarrow \infty }\left[ \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{n-1}(l, y)-u^{m-1}(l, y)|^{2}\right] \right) {\text {d}}s. \end{aligned} \end{aligned}$$

It is easy to find that \(\lim _{m,n\rightarrow \infty }[\sup _{s\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{n}(s, x)-u^{m}(s, x)|^{2}]\) is nonnegative and monotonic in \(t\in [t_{0}, T]\). Using Lemma 3.5, we deduce that

$$\begin{aligned} \begin{aligned} \lim _{m,n\rightarrow \infty }\Vert u^{n}-u^{m}\Vert _{r,t}= 0, \quad t\in [t_{0}, T], \end{aligned} \end{aligned}$$

which implies that \(\{u^{n}(t,x), n=0, 1, 2, \ldots \}\) is a Cauchy sequence in the Banach space \({\mathcal {M}}_{r,T}\). Let u(t, x) denote its limit. Passing to the limit in the definition (3.18) of \(u^{n}(t,x)\), we see that \(u(t,x)\), \(t\in [t_{0}, T]\), \(x\in {\mathbb {R}}\), satisfies (2.1) a.s., which implies that u(t, x) is the solution of (1.3). Therefore, the proof of Theorem 3.4 is completed. \(\square \)

Similar to Theorem 3.2, we can easily get the following result, which will be a generalization of Theorem 3.4.

Theorem 3.5

In addition to condition (3.20), suppose that b and \(\sigma \) satisfy the non-Lipschitz condition (3.27) with the functions \(\lambda (t), \phi (u)\) given in Theorem 3.4. Then, there exists a unique solution u(t, x) to (1.3). Moreover, the solution belongs to \({\mathcal {M}}_{r, T}\).

Remark 3.3

Set

$$\begin{aligned} \begin{aligned} \phi (u)= \left\{ \begin{array}{ll} 0, &{}\quad \ u=0,\\ u(\log (u^{-1}))^{\frac{1}{2}}, &{}\quad \ 0<u<\iota ,\\ \iota (\log (\iota ^{-1}))^{\frac{1}{2}}+\phi '_{-}(\iota )(u-\iota ), &{}\quad \ u\ge \iota , \end{array} \right. \end{aligned} \end{aligned}$$

where \(\iota \) is sufficiently small and \(\phi '_{-}(\iota )\) denotes the left derivative of \(\phi (u)\) at the point \(\iota \). Then, \(\phi (u)\) is a nondecreasing, concave function and satisfies the related conditions of Theorems 3.4 and 3.5.
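Indeed, near the origin a direct substitution \(v=\log (u^{-1})\) (included here only for completeness) confirms the Osgood-type condition of Theorems 3.4 and 3.5:

$$\begin{aligned} \int _{0^{+}}^{\iota }\frac{{\text {d}}u}{\phi (u)}=\int _{0^{+}}^{\iota }\frac{{\text {d}}u}{u(\log (u^{-1}))^{\frac{1}{2}}}=\int _{\log (\iota ^{-1})}^{\infty }\frac{{\text {d}}v}{\sqrt{v}}=\infty . \end{aligned}$$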

To investigate the existence of mild solutions to (1.3) under the linear growth condition without the Lipschitz condition, we now prepare two results, i.e., nonnegativity and comparison theorems.

Theorem 3.6

In addition to the assumptions of Theorem 3.1, suppose that

$$\begin{aligned}&b(t, x, u) \ and \ \sigma (t, x, u) \ are \ continuous \ in \ (x, u), \quad a.s. \end{aligned}$$
(3.32)
$$\begin{aligned}&b(t, x, 0)\ge 0 \ and \ \sigma (t, x, 0)=0, \quad a.s. \end{aligned}$$
(3.33)

where \((x,u)\in {\mathbb {R}}\times {\mathbb {R}}\). Let u(t, x) be the solution to (1.3) with nonnegative initial function \(\xi \) satisfying \({\mathbb {E}}[\Vert \xi \Vert ^{2}]<\infty \). Then, \({\mathbb {P}}(u(t, x)\ge 0\) for all \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}})=1\).

Proof

For any \(\epsilon >0\), we introduce a nonnegative, symmetric function \(\varrho ^{\epsilon }(x)\) defined on \({\mathbb {R}}\) and satisfying

$$\begin{aligned} \varrho ^{\epsilon }(x)=0 \ \ \mathrm {for} \ |x|>\epsilon \ \mathrm {and} \ \int _{{\mathbb {R}}}(\varrho ^{\epsilon }(x))^{2}{\text {d}}x=1. \end{aligned}$$
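A concrete admissible choice (given here only for illustration) is the triangular kernel

$$\begin{aligned} \varrho ^{\epsilon }(x)=\sqrt{\frac{3}{2\epsilon ^{3}}}\,(\epsilon -|x|)^{+}, \qquad \int _{{\mathbb {R}}}(\varrho ^{\epsilon }(x))^{2}{\text {d}}x=\frac{3}{2\epsilon ^{3}}\int _{-\epsilon }^{\epsilon }(\epsilon -|x|)^{2}{\text {d}}x=1. \end{aligned}$$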

Define a spatially correlated Brownian motion \(W_{x}^{\epsilon }(t) (x\in {\mathbb {R}})\) satisfying

$$\begin{aligned} {\dot{W}}_{x}^{\epsilon }(t)=\int _{{\mathbb {R}}}\varrho ^{\epsilon }(x-y){\dot{W}}(t, y){\text {d}}y, \end{aligned}$$

where \({\dot{W}}(t, y)\) is a space–time white noise. In fact, for every \(x\in {\mathbb {R}}\), \(W_{x}^{\epsilon }(t)=\int _{t_{0}}^{t}{\dot{W}}_{x}^{\epsilon }(s){\text {d}}s\) is a one-dimensional Brownian motion. Set \(\Delta ^{\epsilon }=(g(\epsilon , x, y)-I)/\epsilon \) for \(\epsilon >0\). We consider the following equation

$$\begin{aligned} u^{\epsilon }(t, x)= & {} \xi (0, x)+\int _{t_{0}}^{t}(\Delta ^{\epsilon }u^{\epsilon }(s, x)+b(s, x, u_{s}^{\epsilon }(x))){\text {d}}s\nonumber \\&+\,\int _{t_{0}}^{t}\sigma (s, x, u_{s}^{\epsilon }(x)){\text {d}}W_{x}^{\epsilon }(s), \quad x\in {\mathbb {R}}, \end{aligned}$$
(3.34)

where the last term is a standard one-dimensional stochastic integral. Since b and \(\sigma \) satisfy the Lipschitz condition and the linear growth condition, it is not difficult to show that (3.34) has a unique solution for every nonnegative initial function \(\xi \). We claim that

$$\begin{aligned} {\mathbb {P}}(u^{\epsilon }(t, x)\ge 0 \ \mathrm {for} \ \mathrm {every} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1. \end{aligned}$$
(3.35)

Introduce a nonnegative function \(\vartheta (u)=-\min \{u, 0\}\). By (3.6) and (3.33), there must be a positive constant \(K'\) such that \(b(s, x, u)\ge -K'|u|\). Applying Itô’s formula, for \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\), we have

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[\vartheta (u^{\epsilon }(t, x))]&=-I(u^{\epsilon }(t, x)\le 0)\left\{ \xi (0, x)\right. \\&\quad \left. +\,\int _{t_{0}}^{t}{\mathbb {E}}[\Delta ^{\epsilon }u^{\epsilon }(s, x)+b(s, x, u_{s}^{\epsilon }(x))]{\text {d}}s\right\} \\&\le \left( K'+\frac{1}{\epsilon }\right) \int _{t_{0}}^{t}{\mathbb {E}}[\vartheta (u^{\epsilon }(s, x))]{\text {d}}s \\&\quad +\,\frac{1}{\epsilon }\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(\epsilon , x, y){\mathbb {E}}[\vartheta (u^{\epsilon }(s, y))]{\text {d}}y{\text {d}}s, \end{aligned} \end{aligned}$$

where \(I(\cdot )\) denotes the indicator function of \((\cdot )\). Therefore, combining Gronwall’s lemma with the nonnegativity of \(\vartheta (u)\), we can obtain that \({\mathbb {E}}[\vartheta (u^{\epsilon }(t, x))]=0\) for every \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\), which yields (3.35).

Next, we define

$$\begin{aligned} \begin{aligned} g^{\epsilon }(t)=\exp (t\Delta ^{\epsilon })&=e^{-\frac{t}{\epsilon }}\sum _{n=0}^{\infty }\frac{\left( \frac{t}{\epsilon }\right) ^{n}}{n!}g(n\epsilon )=: e^{-\frac{t}{\epsilon }}I+R^{\epsilon }(t, x, y),\\ R^{\epsilon }(t, x, y)&=e^{-\frac{t}{\epsilon }}\sum _{n=1}^{\infty }\frac{\left( \frac{t}{\epsilon }\right) ^{n}}{n!}g(n\epsilon , x, y). \end{aligned} \end{aligned}$$

By an elementary calculation similar to [15], it is easy to verify that \(R^{\epsilon }(t, x, y)\) satisfies the following estimates:

  1. (i)

    \(\int _{{\mathbb {R}}}(R^{\epsilon }(t, x, y))^{2}{\text {d}}y\le \sqrt{\frac{3}{2\pi }}t^{-\frac{1}{2}}\);

  2. (ii)

    For some \(\alpha >0\) and \(\beta >0\),

    $$\begin{aligned} \int _{{\mathbb {R}}}|R^{\epsilon }(t, x, y)-g(t, x, y)|{\text {d}}y\le e^{-\frac{t}{\epsilon }}+\alpha \left( \frac{\epsilon }{t}\right) ^{\frac{1}{3}} \ \mathrm {if} \; 0<\frac{\epsilon }{t}\le \beta ; \end{aligned}$$
    (3.36)
  3. (iii)

    For \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\), \(\lim _{\epsilon \rightarrow 0}\int _{t_{0}}^{t}\int _{{\mathbb {R}}}[R^{\epsilon }(s, x, y)-g(s, x, y)]^{2}{\text {d}}y{\text {d}}s=0.\)

By (3.34), \(u^{\epsilon }(t, x)\) satisfies the following equation

$$\begin{aligned} \begin{aligned} u^{\epsilon }(t, x)&=\int _{{\mathbb {R}}}g^{\epsilon }(t, x, y )\xi (0, y){\text {d}}y\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g^{\epsilon }(t-s, x, y)b(s, y, u_{s}^{\epsilon }(y)){\text {d}}y{\text {d}}s\\&\quad +\,\int _{t_{0}}^{t}e^{-\frac{t-s}{\epsilon }}\sigma (s, y, u_{s}^{\epsilon }(y)){\text {d}}W_{x}^{\epsilon }(s)\\&\quad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}R^{\epsilon }(t-s, x, y)\sigma (s, y, u_{s}^{\epsilon }(y)){\text {d}}W_{x}^{\epsilon }(s){\text {d}}y. \end{aligned} \end{aligned}$$
(3.37)

Assume that u(t, x) is the unique solution to (1.3) under the conditions of Theorem 3.1. By (2.1), (3.36) and (3.37), applying the methods in Lemma 3.4, we can get

$$\begin{aligned} \sup _{0<\epsilon \le 1}\sup _{t_{0}\le t\le T}\sup _{x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}[|u^{\epsilon }(t, x)|^{2}]<\infty , \ \ \sup _{t_{0}\le t\le T}\sup _{x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}[|u(t, x)|^{2}]<\infty . \end{aligned}$$

Thus,

$$\begin{aligned}&{\mathbb {E}}\left[ |u^{\epsilon }(t, x)-u(t, x)|^{2}\right] \nonumber \\&\quad \le c \left\{ {\mathbb {E}}\left[ \left| \int _{{\mathbb {R}}}(g^{\epsilon }(t, x, y )-g(t, x, y ))\xi (0, y){\text {d}}y\right| ^{2}\right] \right. \nonumber \\&\qquad \left. +\,{\mathbb {E}}\left[ \left| \int _{t_{0}}^{t}e^{-\frac{t-s}{\epsilon }}b(s, x, u_{s}^{\epsilon }(x)){\text {d}}s\right| ^{2}\right] \right. \nonumber \\&\qquad +\,{\mathbb {E}}\left[ \left| \int _{t_{0}}^{t}\int _{{\mathbb {R}}}R^{\epsilon }(t-s, x, y)(b(s, y, u_{s}^{\epsilon }(y))-b(s, y, u_{s}(y))){\text {d}}y{\text {d}}s\right| ^{2}\right] \nonumber \\&\qquad +\,{\mathbb {E}}\left[ \left| \int _{t_{0}}^{t}\int _{{\mathbb {R}}}(R^{\epsilon }(t-s, x, y)-g(t-s, x, y))b(s, y, u_{s}(y)){\text {d}}y{\text {d}}s\right| ^{2}\right] \nonumber \\&\qquad +\,\int _{t_{0}}^{t}e^{-\frac{2(t-s)}{\epsilon }}{\mathbb {E}}\left[ \sigma ^{2}(s, x, u_{s}^{\epsilon }(x))\right] {\text {d}}s\\&\qquad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}{\mathbb {E}}\left[ \left| \int _{{\mathbb {R}}}R^{\epsilon }(t-s, x, z)\right. \right. \nonumber \\&\qquad \left. \left. (\sigma (s, z, u_{s}^{\epsilon }(z))-\sigma (s, z, u_{s}(z)))\varrho ^{\epsilon }(y-z)dz\right| ^{2}\right] {\text {d}}y{\text {d}}s\nonumber \\&\qquad +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}{\mathbb {E}}\left[ \left| \int _{{\mathbb {R}}}R^{\epsilon }(t-s, x, z)\right. \right. \nonumber \\&\qquad \left. \left. (\sigma (s, z, u_{s}(z))-\sigma (s, y, u_{s}(y)))\varrho ^{\epsilon }(y-z)dz\right| ^{2}\right] {\text {d}}y{\text {d}}s\nonumber \\&\qquad \left. +\,\int _{t_{0}}^{t}\int _{{\mathbb {R}}}\left[ \int _{{\mathbb {R}}}R^{\epsilon }(t-s, x, z)\varrho ^{\epsilon }(y-z)dz-g(t-s, x, y)\right] ^{2}\right. \nonumber \\&\qquad \left. {\mathbb {E}}\left[ |\sigma (s, y, u_{s}(y))|^{2}\right] {\text {d}}y{\text {d}}s\right\} \nonumber \\&\quad \equiv \Sigma _{n=1}^{8}\Gamma _{n}(\epsilon , t, x).\nonumber \end{aligned}$$
(3.38)

From (3.6)–(3.8), (3.32) and the above estimates of \(R^{\epsilon }(t, x, y)\), \(u^{\epsilon }(t, x)\) and u(t, x), it follows that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\sup _{t_{0}\le t\le T}\sup _{x\in {\mathbb {R}}}\Gamma _{n}(\epsilon , t, x)=0 \ \mathrm {for} \ 1\le n \le 8 \ \mathrm {with} \ n\ne 3,6, \end{aligned}$$

and

$$\begin{aligned} \Gamma _{3}(\epsilon , t, x)+\Gamma _{6}(\epsilon , t, x)\le c\int _{t_{0}}^{t}\frac{1}{\sqrt{t-s}}{\mathbb {E}}[\Vert u_{s}^{\epsilon }(y)-u_{s}(y)\Vert ^{2}]{\text {d}}s \end{aligned}$$

for \(t\in [t_{0}, T]\) and \(x\in {\mathbb {R}}\). Therefore, (3.38) can be rewritten in the form

$$\begin{aligned} {\mathbb {E}}[|u^{\epsilon }(t, x)-u(t, x)|^{2}]\le c\int _{t_{0}}^{t}\frac{1}{\sqrt{t-s}}{\mathbb {E}}[\Vert u_{s}^{\epsilon }(y)-u_{s}(y)\Vert ^{2}]{\text {d}}s+o(\epsilon , t), \end{aligned}$$

where \(t_{0}\le t\le T\) and \(o(\epsilon , t)\) satisfies \(\lim _{\epsilon \rightarrow 0}\sup _{t_{0}\le t\le T}o(\epsilon , t)=0\). Hence, for \(t\in [t_{0}, T]\) we obtain

$$\begin{aligned} \begin{aligned}&\sup _{l\in [t_{0}, t], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{\epsilon }(l, x)-u(l, x)|^{2}\\&\quad \le c\int _{t_{0}}^{t}\frac{1}{\sqrt{t-s}}\left[ \sup _{l\in [t_{0}, s], y\in {\mathbb {R}}}e^{-r|y|}{\mathbb {E}}|u^{\epsilon }(l, y)-u(l, y)|^{2}\right] {\text {d}}s+\sup _{t_{0}\le t\le T}o(\epsilon , t). \end{aligned} \end{aligned}$$

So, by Lemma 3.2, we deduce that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\left[ \sup _{t\in [t_{0}, T], x\in {\mathbb {R}}}e^{-r|x|}{\mathbb {E}}|u^{\epsilon }(t, x)-u(t, x)|^{2}\right] =0, \end{aligned}$$

which, together with (3.35), implies that

$$\begin{aligned} {\mathbb {P}}(u(t, x)\ge 0 \ \mathrm {for} \ \mathrm {all} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1. \end{aligned}$$

Thus, the proof of Theorem 3.6 is completed. \(\square \)

By Theorem 3.6, we immediately obtain the following comparison result.

Corollary 3.1

Assume that \(b^{i}(t, x, u)\) (\(i=1,2\)) and \(\sigma (t, x, u)\) satisfy the assumptions of Theorem 3.1. Let \(u^{i}(t, x)\) be the solution to (1.3) associated with the coefficients \(b^{i}(t, x, u)\) and \(\sigma (t, x, u)\), whose initial function \(\xi ^{i}\) satisfies \({\mathbb {E}}[\Vert \xi ^{i}\Vert ^{2}]<\infty \). Suppose further that

$$\begin{aligned}&b^{i}(t, x, u)\ (i=1,2) \ and \ \sigma (t, x, u) \ are \ continuous \ in \ (x, u), \quad a.s. \\&b^{1}(t, x, u)\ge b^{2}(t, x, u), for \ t\in [t_{0}, T] \ and \ x\in {\mathbb {R}}, u\in {\mathbb {R}}, \quad a.s. \\&\quad \quad \xi ^{1}(\theta , x)\ge \xi ^{2}(\theta , x), \ -\tau \le \theta \le 0,\, x\in {\mathbb {R}}. \end{aligned}$$

Then, \({\mathbb {P}}(u^{1}(t, x)\ge u^{2}(t, x) \ for \ all \ t\in [t_{0}, T] \ and \ x\in {\mathbb {R}})=1.\)

Proof

Set \(u(t, x)=u^{1}(t, x)-u^{2}(t, x)\) and \(\xi =\xi ^{1}-\xi ^{2}\). Then, u(t, x) satisfies SIE (2.1), where \(b, \sigma : [t_{0}, T]\times {\mathbb {R}}\times {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\rightarrow {\mathbb {R}}\) are continuous Borel measurable functions defined by

$$\begin{aligned} b(s, y, u_{s}(y))=b^{1}(s, y, u^{1}_{s}(y))-b^{2}(s, y, u^{2}_{s}(y)),\\ \sigma (s, y, u_{s}(y))=\sigma (s, y, u^{1}_{s}(y))-\sigma (s, y, u^{2}_{s}(y)). \end{aligned}$$

It is easy to check that \(b(s, y, u_{s}(y))\) and \(\sigma (s, y, u_{s}(y))\) satisfy all conditions of Theorem 3.6; then, we can deduce that

$$\begin{aligned} {\mathbb {P}}(u(t, x)\ge 0 \ \mathrm {for} \ \mathrm {all} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1, \end{aligned}$$

that is,

$$\begin{aligned} {\mathbb {P}}(u^{1}(t, x)\ge u^{2}(t, x) \ \mathrm {for} \ \mathrm {all} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1, \end{aligned}$$

which completes the proof of Corollary 3.1. \(\square \)

To illustrate the nonnegativity and comparison theorems for mild solutions to (1.3), we now give two examples to which these results apply.

Example 3.1

Consider the following SPFDE

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial u(t, x)}{\partial t}&=\frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+ \kappa _{1}e^{-\frac{1}{2}r|x|} u_{t}(x)+\kappa _{0} \\&\quad +\,\kappa _{2}e^{-\frac{1}{2}r|x|}u_{t}(x)\frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \ t_{0}\le t\le T, x\in {\mathbb {R}},\\ u_{t_{0}}&=\{\xi _{1}(\theta , x):-\tau \le \theta \le 0, x\in {\mathbb {R}}\}, \end{aligned}\right. \end{aligned}$$
(3.39)

where \(r>0\), \(\kappa _{i}\ (i=0,1,2)\) are nonnegative constants, and the initial function \(\xi _{1}(\theta , x)\) satisfies \({\mathbb {E}}[\Vert \xi _{1}\Vert ^{2}]<\infty \). In this equation, the functions \(b(t, x, u_{t}(x))=\kappa _{1}e^{-\frac{1}{2}r|x|}u_{t}(x)+\kappa _{0}\) and \(\sigma (t, x, u_{t}(x))=\kappa _{2}e^{-\frac{1}{2}r|x|}u_{t}(x)\) satisfy the Lipschitz condition and the linear growth condition, so Theorem 3.1 implies that there exists a unique solution to (3.39). Moreover, \(b(t, x, 0)=\kappa _{0}\ge 0\) and \(\sigma (t, x, 0)=0\); hence it follows from Theorem 3.6 that the solution of (3.39) remains nonnegative whenever \(\xi _{1}(\theta , x)\ge 0\).
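The following sketch is a minimal finite-difference simulation of (3.39), included only as an illustration and not as part of the analysis above. It assumes a point-delay reading of the functional term, \(u_{t}(x)\approx u(t-\tau , x)\), a constant-in-\(\theta \) initial segment, truncation of \({\mathbb {R}}\) to a bounded window with zero diffusion at the boundary, and illustrative values of \(r\), \(\kappa _{i}\), \(\tau \) and the grid; all of these are our choices, not taken from the paper.

```python
import numpy as np

def simulate(k0, k1, k2, init, noise, *, r=1.0, tau=0.5, dt=None, x=None):
    """Explicit Euler sketch for a truncated version of (3.39):
    du = (1/2) u_xx dt + (k1*w*u(t-tau) + k0) dt + k2*w*u(t-tau) dW,
    with w(x) = exp(-r|x|/2).  The functional term u_t(x) is read as the
    point delay u(t - tau, x); the initial segment is constant in theta."""
    dx = x[1] - x[0]
    n_delay = int(round(tau / dt))
    w = np.exp(-0.5 * r * np.abs(x))
    hist = [init.copy() for _ in range(n_delay + 1)]   # segment on [t0 - tau, t0]
    u = init.copy()
    for xi in noise:                                   # one N(0,1) profile per time step
        u_del = hist[0]                                # approximates u(t - tau, x)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = (u + dt * (0.5 * lap + k1 * w * u_del + k0)
               + k2 * w * u_del * np.sqrt(dt / dx) * xi)   # white-noise increment
        hist.append(u.copy())
        hist.pop(0)
    return u

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 201)
dt = 0.2 * (x[1] - x[0]) ** 2          # explicit scheme: dt <= dx^2 for stability
noise = rng.standard_normal((1000, x.size))
u = simulate(0.1, 0.5, 0.3, 0.5 * np.exp(-x ** 2), noise, dt=dt, x=x)
# Theorem 3.6 concerns the exact solution; the discretization may dip slightly
# below zero, but the minimum should stay close to nonnegative.
print("min of final profile:", u.min())
```

The factor \(\sqrt{{\text {d}}t/{\text {d}}x}\) is the standard scaling of a space–time white noise increment over a cell of size \({\text {d}}t\times {\text {d}}x\), divided by the cell width.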

Example 3.2

Consider Eq. (3.39) and the following SPFDE

$$\begin{aligned} \left\{ \begin{aligned}&\frac{\partial u(t, x)}{\partial t}= \frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+\kappa '_{1}e^{-\frac{1}{2}r|x|}u_{t}(x)+\kappa '_{0}\\&\qquad \qquad \qquad +\,\kappa '_{2}e^{-\frac{1}{2}r|x|}u_{t}(x)\frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \quad t_{0}\le t\le T, x\in {\mathbb {R}},\\&u_{t_{0}}=\{\xi _{2}(\theta , x):-\tau \le \theta \le 0, x\in {\mathbb {R}}\}, \end{aligned} \right. \end{aligned}$$
(3.40)

where \(r>0\), \(\kappa '_{i}\ (i=0,1,2)\) are nonnegative constants satisfying \(\kappa '_{1}\ge \kappa _{1}\), \(\kappa '_{0}\ge \kappa _{0}\), \(\kappa '_{2}=\kappa _{2}\), and the initial function \(\xi _{2}(\theta , x)\) satisfies \({\mathbb {E}}[\Vert \xi _{2}\Vert ^{2}]<\infty \). Suppose that \(\xi _{2}(\theta , x)\ge \xi _{1}(\theta , x)\) for all \(-\tau \le \theta \le 0\) and \(x\in {\mathbb {R}}\). Comparing the coefficients and initial condition of (3.39) with those of (3.40), we see that all conditions of Corollary 3.1 are satisfied. Therefore, the solution \(u'(t, x)\) of (3.40) and the solution \(u(t, x)\) of (3.39) satisfy \({\mathbb {P}}(u'(t, x)\ge u(t, x) \ \mathrm {for} \ \mathrm {all} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1.\)
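Continuing the illustrative sketch given after Example 3.1 (and reusing the hypothetical simulate function, the grid x, the step dt and the common noise array defined there), one can drive (3.39) and (3.40) with the same noise realization and check the ordering predicted by Corollary 3.1 on the grid:

```python
# (3.40) with larger drift constants, the same kappa_2, and a larger
# initial profile, driven by the same noise increments as (3.39).
u_1 = simulate(0.1, 0.5, 0.3, 0.5 * np.exp(-x ** 2), noise, dt=dt, x=x)  # Eq. (3.39)
u_2 = simulate(0.2, 0.8, 0.3, 0.7 * np.exp(-x ** 2), noise, dt=dt, x=x)  # Eq. (3.40)
print("min of u_2 - u_1:", (u_2 - u_1).min())  # expected to be (numerically) nonnegative
```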

Now, we prepare a lemma which will be used in the forthcoming proof.

Lemma 3.6

[15] Assume that \(\{u_{n}(t, \cdot ), 0\le t\le 1, n\ge 1\}\) is a sequence of \({\mathbb {C}}_{r}\)-valued continuous processes. If there exist constants \(d>0\), \(c'>0\) and \(\delta >2\) such that

$$\begin{aligned} {\mathbb {E}}[|u_{n}(t, x)-u_{n}(s, y)|^{d}]\le c'(|t-s|^{\delta }+|x-y|^{\delta }) \end{aligned}$$
(3.41)

for all \(0\le t, s\le 1, n\ge 1\), and \(x,y\in {\mathbb {R}}\) with \(|x-y|\le 1\), then the sequence of probability distributions on \({\mathbb {C}}([0, 1]\rightarrow {\mathbb {C}}_{r})\) induced by \(\{u_{n}(t, \cdot )\}\) is tight.

Using the above results, we now give a theorem on the existence of mild solutions to (1.3) under the linear growth condition, without assuming the Lipschitz condition.

Theorem 3.7

Suppose that \(b, \sigma : [t_{0}, T]\times {\mathbb {R}}\times {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\rightarrow {\mathbb {R}}\) are continuous Borel measurable functions satisfying the linear growth condition. Assume further that

$$\begin{aligned} b(t, x, 0)\ge 0 \ \mathrm {and} \ \sigma (t, x, 0)=0. \end{aligned}$$

Then, for every nonnegative initial function \(\xi \) satisfying \({\mathbb {E}}[\Vert \xi \Vert ^{2}]<\infty \), there exists a nonnegative solution \(u(t, x)\) to (1.3).

Proof

We first choose two sequences of functions \(b^{n}(t, x, \varphi )\) and \(\sigma ^{n}(t, x, \varphi )\) (\(n\ge 1\)) which are Lipschitz continuous in the variable \(\varphi \in {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; {\mathbb {R}})\) and satisfy

$$\begin{aligned}&|b^{n}(t, x, \varphi )|^{2}+|\sigma ^{n}(t, x, \varphi )|^{2}\le {\bar{K}}(1+\Vert \varphi \Vert ^{2}), \end{aligned}$$
(3.42)
$$\begin{aligned}&b^{n}(t, x, 0)\ge 0 \ \mathrm {and} \ \sigma ^{n}(t, x, 0)=0, \end{aligned}$$
(3.43)

and that \(b^{n}(t, x, \varphi )\) and \(\sigma ^{n}(t, x, \varphi )\) converge to \(b(t, x, \varphi )\) and \(\sigma (t, x, \varphi )\) uniformly as \(n\rightarrow \infty \). Then, by Theorems 3.1 and 3.6, for each \(n\ge 1\) the following SPFDE

$$\begin{aligned} \left\{ \begin{aligned}&\frac{\partial u(t, x)}{\partial t}= \frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+b^{n}(t, x, u_{t}(x))\\&\qquad \qquad \quad +\,\sigma ^{n}(t, x, u_{t}(x))\frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \quad t_{0}\le t\le T, x\in {\mathbb {R}},\\&u_{t_{0}}=\{\xi (\theta , x):-\tau \le \theta \le 0, x\in {\mathbb {R}}\}, \end{aligned}\right. \end{aligned}$$

has a unique nonnegative continuous \({\mathbb {C}}_{r}\)-valued solution \(u^{n}(t, \cdot )\) satisfying

$$\begin{aligned} \begin{aligned} u^{n}(t, x)&=\int _{{\mathbb {R}}}g(t, x, y )\xi (0, y){\text {d}}y+\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)b^{n}(s, y, u^{n}_{s}(y)){\text {d}}y{\text {d}}s\\&\quad +\int _{t_{0}}^{t}\int _{{\mathbb {R}}}g(t-s, x, y)\sigma ^{n}(s, y, u^{n}_{s}(y))W({\text {d}}y,{\text {d}}s), \quad a.s. \end{aligned} \end{aligned}$$
(3.44)

Using (3.42) and (3.43), it is easy to see that \(u^{n}(t, x)\) satisfies (3.41). From Lemma 3.6, it follows that the family of probability distributions induced by \(\{u^{n}(t, \cdot )\}\) is tight. Hence, by Prokhorov's theorem, a subsequence converges in distribution and, by the Skorokhod representation theorem, we may work on a probability space on which this subsequence converges almost surely to a continuous \({\mathbb {C}}_{r}\)-valued process \(u(t, x)\). Passing to the limit as \(n\rightarrow \infty \) along this subsequence in (3.44), we obtain the existence of a solution to (1.3). By the assumptions of Theorems 3.6 and 3.7, we obtain \({\mathbb {P}}(u(t, x)\ge 0 \ \mathrm {for} \ \mathrm {all} \ t\in [t_{0}, T] \ \mathrm {and} \ x\in {\mathbb {R}})=1\). This completes the proof of Theorem 3.7. \(\square \)
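The proof above only requires the existence of Lipschitz approximations with the properties (3.42)–(3.43); their construction is not spelled out here. Purely as an illustration of the kind of family involved, in the scalar case and not as the construction used in the proof, the non-Lipschitz function \(\sqrt{u\vee 0}\) admits the \(\sqrt{n}\)-Lipschitz approximations

$$\begin{aligned} \sigma ^{n}(u)=\min \left( \sqrt{u\vee 0},\ \sqrt{n}\,(u\vee 0)\right) , \quad n\ge 1, \end{aligned}$$

which vanish at \(u=0\), are dominated by \(\sqrt{u\vee 0}\) (hence satisfy the linear growth condition), and converge uniformly, since \(\sup _{u\ge 0}\big (\sqrt{u}-\sigma ^{n}(u)\big )=\frac{1}{4\sqrt{n}}\rightarrow 0\) as \(n\rightarrow \infty \).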

Remark 3.4

Under the assumptions of Theorem 3.7, the uniqueness of mild solutions cannot be obtained, but the result guarantees the existence and nonnegativity of a mild solution to (1.3).

The following example illustrates the applicability of Theorem 3.7. Here, the coefficients \(b(t, x, u_{t}(x))\) and \(\sigma (t, x, u_{t}(x))\) are not Lipschitz continuous in the variable \(u_{t}(x)\), but they fit into our framework.

Example 3.3

Consider the following stochastic parabolic partial functional differential equation

$$\begin{aligned} \left\{ \begin{aligned}&\frac{\partial u(t, x)}{\partial t}= \frac{1}{2}\frac{\partial ^{2}u(t, x)}{\partial x^{2}}+e^{-\frac{1}{2}r|x|}\sqrt{u_{t}(x)}\\&\qquad \qquad \quad +\,e^{-\frac{1}{2}r|x|}\sqrt{u_{t}(x)}\frac{\partial ^{2}W(t, x)}{\partial t \partial x}, \quad t_{0}\le t\le T, x\in {\mathbb {R}},\\&u_{t_{0}}=\{\xi _{3}(\theta , x):-\tau \le \theta \le 0, x\in {\mathbb {R}}\}, \end{aligned}\right. \end{aligned}$$
(3.45)

where \(r>0\) and the initial function \(\xi _{3}(\theta , x)\) satisfies \({\mathbb {E}}[\Vert \xi _{3}\Vert ^{2}]<\infty \). The functions \(b(t, x, u_{t}(x))=e^{-\frac{1}{2}r|x|}\sqrt{u_{t}(x)}\) and \(\sigma (t, x, u_{t}(x))=e^{-\frac{1}{2}r|x|}\sqrt{u_{t}(x)}\) are not Lipschitz continuous, but they satisfy the linear growth condition if \(u_{t}(x)\in {\mathbb {C}}([-\tau , 0]\times {\mathbb {R}}; [0,1])\). Moreover, \(b(t, x, 0)=0\ge 0\) and \(\sigma (t, x, 0)=0\). Hence, Theorem 3.7 implies that there exists a nonnegative solution to (3.45) when \(\xi _{3}(\theta , x)\ge 0\).
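As with the earlier examples, a minimal numerical sketch can be obtained by adapting the hypothetical scheme given after Example 3.1 (reusing numpy, the grid x, the step dt and the noise array defined there). The square root is applied to \(u\vee 0\) so that the discretized path remains well defined even if the scheme dips slightly below zero; this clipping is our choice for the illustration, not part of (3.45).

```python
def simulate_sqrt(init, noise, *, r=1.0, tau=0.5, dt=None, x=None):
    """Explicit Euler sketch for a truncated version of (3.45) with the
    point-delay reading u_t(x) ~ u(t - tau, x)."""
    dx = x[1] - x[0]
    n_delay = int(round(tau / dt))
    w = np.exp(-0.5 * r * np.abs(x))
    hist = [init.copy() for _ in range(n_delay + 1)]
    u = init.copy()
    for xi in noise:
        s = np.sqrt(np.maximum(hist[0], 0.0))       # sqrt of the delayed state, clipped at 0
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (0.5 * lap + w * s) + w * s * np.sqrt(dt / dx) * xi
        hist.append(u.copy())
        hist.pop(0)
    return u

u3 = simulate_sqrt(0.5 * np.exp(-x ** 2), noise, dt=dt, x=x)
print("min of final profile for (3.45):", u3.min())
```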