1 Introduction

In recent decades, fractional calculus has become a very attractive area for researchers due to its challenges and its convincing applications in the real world. Besides the common applications of fractional calculus, which are by now well known in engineering, the reader may refer to [35] for an updated collection of real-world applications of fractional calculus in physics, signal and image processing, biology, environmental science, economics, etc. Compared with models using the ordinary derivative, fractional models can be more adequate thanks to the memory effect of the fractional derivative. However, this memory effect (or nonlocality) at the same time makes such models much more difficult to solve (see, for instance, [48, 49] for a discussion of the difficulties caused by fractional derivatives).

The space-fractional diffusion equation (SFDE) is derived by replacing the standard Laplacian by its fractional version defined in (1.2). In physical terms, space-fractional diffusion is obtained if one replaces the Gaussian statistics of classical Brownian motion with a stable probability distribution, resulting in a Lévy flight (see [3, 11, 13, 31]). It appears in many practical applications, such as the theory of viscoelasticity and viscoplasticity (mechanics), the modeling of polymers and proteins (biochemistry), the transmission of ultrasound waves (electrical engineering), and the modeling of human tissue under mechanical loads (medicine). The forward problem for the SFDE, which is well posed, has been studied deeply in recent years (see [9, 12, 29, 32, 37, 41] and the references given there). In comparison with the forward problem, the backward problem for the SFDE is usually more difficult to solve because of its ill-posedness. The backward problem for diffusion equations has many applications in practice, such as hydrology [25], materials science [28], groundwater contamination [34], and image processing [4, 44]. The backward problem here is understood in the following sense: given the data at the terminal time T, the goal is to reconstruct the historical distribution at an earlier time \(t<T\). In particular, we study the following backward problem

$$\begin{aligned} \left\{ \begin{aligned}&{u_t}(x,t) + \lambda (t){\left( { - \Delta } \right) ^\alpha }u(x,t) = \ell (x,t), t\in [0,T], x\in {\mathbb {R}},\\&u(x,T) = g(x), x \in {\mathbb {R}}, \end{aligned} \right. \end{aligned}$$
(1.1)

where \(0< \alpha \le 1\) is the fractional parameter, \(\lambda (t)\) is the time-dependent diffusivity, g(x) is the final data, \(\ell (x,t)\) is the source function and the fractional Laplacian is defined pointwise by

$$\begin{aligned} {\left( { - \Delta } \right) ^\alpha }\phi (x) = \frac{{\alpha {2^{2\alpha }}\Gamma \left( {0.5 + \alpha } \right) }}{{\sqrt{\pi }\Gamma \left( {1 - \alpha } \right) }}P.V.\int _{\mathbb {R}}{\frac{{\phi (x) - \phi (y)}}{{{{\left| {x - y} \right| }^{1 + 2\alpha }}}}\mathrm{d}y}. \end{aligned}$$
(1.2)

Here, P.V. stands for the principal value

$$\begin{aligned} P.V.\int _{\mathbb {R}}{\frac{{\phi (x) - \phi (y)}}{{{{\left| {x - y} \right| }^{1 + 2\alpha }}}}\mathrm{d}y} = \mathop {\lim }_{\varepsilon \rightarrow 0} \int _{{\mathbb {R}}\backslash \left\{ {\left| {x - y} \right| \le \varepsilon } \right\} } {\frac{{\phi (x) - \phi (y)}}{{{{\left| {x - y} \right| }^{1 + 2\alpha }}}}\mathrm{d}y}.\end{aligned}$$
(1.3)
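
For numerical illustration, it is convenient to use the Fourier characterization of (1.2), namely \({{\mathcal {F}}}\left[ {{{\left( { - \Delta } \right) }^\alpha }\phi } \right] \left( \xi \right) = {\left| \xi \right| ^{2\alpha }}{\widehat{\phi }}\left( \xi \right) \), rather than the singular integral itself. The following minimal Python/NumPy sketch (not part of the numerical scheme of Sect. 4; the truncated domain, the grid, the test function and the value of \(\alpha \) are illustrative choices) applies \({\left( { - \Delta } \right) ^\alpha }\) spectrally on a periodic truncation of \({\mathbb {R}}\).

```python
import numpy as np

def frac_laplacian(phi, dx, alpha):
    """Apply (-Delta)^alpha through its Fourier symbol |xi|^(2*alpha) on a periodic grid."""
    xi = 2.0 * np.pi * np.fft.fftfreq(phi.size, d=dx)   # angular frequencies of the grid
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * alpha) * np.fft.fft(phi)))

# illustrative test on a truncated line (domain, grid and alpha are arbitrary choices)
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
phi = np.exp(-x ** 2)
print(frac_laplacian(phi, x[1] - x[0], alpha=0.7)[:3])
```
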

The forward model associated with problem (1.1) is model (2.1), where the data is given at the initial time \(t=0\), i.e., \(u(x,0)=\varphi (x)\). Unlike the constant diffusivity commonly used in the study of the diffusion equation, the diffusivity in this paper is not a constant but a time-dependent function. Time-dependent diffusion coefficients appear in many phenomena, for instance in the diffusion of the population of photogenerated species in organic semiconductors [27], the methane diffusion phenomena in [5], the transient dynamics of diffusion-controlled bimolecular reactions in liquids [24], hydrodynamic diffusion [30], and so on. The advantages of non-constant diffusivity were studied very carefully by experiments in [8, 10, 26, 42].

The model (1.1) generalizes some previous ones. In particular, the backward heat conduction problem (BHCP) can be obtained from (1.1) by setting \(\alpha := 1\). The BHCP is a classical ill-posed problem that has been studied extensively for decades; in fact, there is a vast literature on the BHCP, including the classical works [2, 7, 14, 15, 19, 23, 36, 39, 43]. In the current paper, we are more interested in the fractional case \(0< \alpha <1\), where the problem is called the backward problem for the space-fractional diffusion equation. The seminal works on this problem are due to Zheng and Zhang [48,49,50], who studied problem (1.1) with \(\ell := 0\) and \(\lambda :=1\). It was mentioned in [48] that the problem is ill-posed in the sense of Hadamard; however, a proof of this conclusion has not yet been provided. In [22], the authors extended the work in [48,49,50] by investigating problem (1.1) with \(\lambda := 1\). Again, the ill-posedness was claimed without proof in [22]. Other remarkable works related to problem (1.1), including its nonlinear cases and the Riesz–Feller diffusion cases, can be found in [18, 20, 38, 40, 45, 46].

One of the objectives of this paper is to supplement the theory of problem (1.1) by providing a detailed proof of its ill-posedness, which is not trivial. As a next step, we propose a filter-type regularization method to achieve reliable approximations to the solution of the problem. We emphasize that problem (1.1) is more difficult to deal with than its homogeneous or classical versions. The difficulties arise naturally from the nonlocality of the fractional derivative and the nonzero right-hand side: the nonlocality makes the finite difference matrix non-sparse, while the nonzero right-hand side makes the problem more involved. Here, inspired by the convolution regularization method for the classical backward diffusion equation (\(\alpha = 1\)) in [33], we propose a filter method based on the Fourier transform to achieve Hölder-type approximations to the solution of the investigated problem. Another important part of this paper is devoted to regularity results for the forward problem (2.1). It is important to study the forward problem because there are always close relations between forward and backward problems; for instance, using some properties of the forward problem, one may explain or derive the nature of the conditions imposed on the backward one. In the current paper, using regularity results for the forward problem, we provide some explanation of the nature of the a priori assumption imposed on the problem in order to establish the Hölder error estimate.

The rest of this paper is divided into four sections. In Sect. 2, we establish some regularity results for the associated forward problem, including the fractional parameter-continuity property. Section 3 is devoted to the analysis of the ill-posed structure of problem (1.1) and its treatment by the filter regularization method; the convergence rate is also presented in this section. Section 4 provides two numerical examples, mainly based on the finite difference scheme and the discrete Fourier transform, to illustrate the theoretical results. Finally, Sect. 5 concludes the paper by summarizing the obtained results.

2 Some Regularity Results for the Forward Problem

Throughout this paper, we assume that there exist two positive numbers \(\mathrm {M}_1\) and \(\mathrm {M}_2\) such that

$$\begin{aligned} 0 <\mathrm{{M}_1}\le \lambda \left( t \right) \le \mathrm{{M}_2}, \end{aligned}$$

for all \(t \in \left[ {0,T} \right] \). This assumption is quite natural since the diffusivity is usually positive and finite. We begin this section by introducing some notation that is needed for the analysis in the next sections. We always denote by \(\left\| {\cdot } \right\| = {\left\| {\cdot } \right\| _{{L^2}({\mathbb {R}})}}\) the standard \(L^2\)-norm. The Fourier transform of a function g is defined as

$$\begin{aligned} {\widehat{g}}\left( \xi \right) := {{\mathcal {F}}}\left( g \right) \left( \xi \right) = \frac{1}{{\sqrt{2\pi } }}\int _{\mathbb {R}}{g(s){e^{ - is\xi }}\mathrm{d}s} , \end{aligned}$$

and its inverse transform by

$$\begin{aligned} g(x) := {{{\mathcal {F}}}^{ - 1}}\left( {{\widehat{g}}} \right) \left( x \right) = \frac{1}{{\sqrt{2\pi } }}\int _{\mathbb {R}}{{\widehat{g}}\left( \xi \right) {e^{ix\xi }}\mathrm{d}\xi }. \end{aligned}$$

For \(s>0\), \({\mathbf{{H}}^s}\left( {\mathbb {R}}\right) \) stands for the standard Sobolev space

$$\begin{aligned} {{\mathbf {H}}^{s}}\left( {\mathbb {R}}\right) : = \left\{ {v \in {L^2}({\mathbb {R}}){} \text { such that } {} {} \left\| v \right\| _{s}^2 := \int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^{2}}} \right) }^s}{{\left| {{\widehat{v}}(\xi )} \right| }^2}\mathrm{d}\xi } < \infty } \right\} . \end{aligned}$$

For a Banach space X, we denote by \(L^p \left( 0,T;X \right) \) and \(C\left( {[0,T];X} \right) \) the Banach spaces of measurable functions \(u:\left[ {0,T} \right] \rightarrow X\) such that

$$\begin{aligned} {\left\| u \right\| _{{L^p}\left( {0,T;X} \right) }}&= {\left( {\int _0^T {\left\| {u\left( { \cdot ,t} \right) } \right\| _X^p\mathrm{d}t} } \right) ^{1/p}}< \infty ,&1 \le p< \infty , \\ {\left\| u \right\| _{{L^\infty }\left( {0,T;X} \right) }}&= \mathrm{{ess}}\,\mathop {\sup }\limits _{0< t< T} {\left\| {u\left( { \cdot ,t} \right) } \right\| _X}< \infty ,&p = \infty , \\ {\left\| u \right\| _{C\left( {[0,T];X} \right) }}&= \mathop {\sup }\limits _{0 \le t \le T} {\left\| {u\left( { \cdot ,t} \right) } \right\| _X} < \infty .&\end{aligned}$$

In this section, we present some properties of the forward problem associated with problem (1.1), i.e., the following problem

$$\begin{aligned} \left\{ \begin{aligned}&{v_t}(x,t) + \lambda (t){\left( { - \Delta } \right) ^\alpha }v(x,t) = \ell (x,t),&t\in [0,T], x\in {\mathbb {R}},\\&v(x,0) = \varphi (x),&x \in {\mathbb {R}}. \end{aligned} \right. \end{aligned}$$
(2.1)

For ease of presentation, we refer to problem (2.1) with fractional order \(\alpha \) as problem \((FP_\alpha )\). It is clear that problem \((FP_1)\) stands for the classical forward diffusion problem. Put

$$\begin{aligned} \Lambda (t) = \int _0^t {\lambda (s)\mathrm{d}s}. \end{aligned}$$

Taking the Fourier transform with respect to the space variable on both sides of (2.1), and using the fact that the Fourier symbol of the fractional Laplacian is \({\left| \xi \right| ^{2\alpha }}\), i.e., \({{\mathcal {F}}}\left[ {{{\left( { - \Delta } \right) }^\alpha }v} \right] \left( \xi \right) = {\left| \xi \right| ^{2\alpha }}{\widehat{v}}\left( \xi \right) \), we get

$$\begin{aligned} \left\{ \begin{aligned}&\frac{\partial }{{\partial t}}{\widehat{v}} {_\alpha }(\xi ,t) + {\left| \xi \right| ^{2\alpha }}\lambda (t){\widehat{v}}{_\alpha }(\xi ,t) = {\widehat{\ell }} (\xi ,t),\\&{\widehat{v}}{_\alpha }(\xi ,0) = {\widehat{\varphi }}(\xi ). \end{aligned} \right. \end{aligned}$$

Solving this differential equation in t for each fixed \(\xi \), we obtain

$$\begin{aligned} {{\widehat{v}}_\alpha }(\xi ,t) = {e^{ - \Lambda (t){{\left| \xi \right| }^{2\alpha }}}}{\widehat{\varphi }} (\xi ) + \int _0^t {{e^{ - {{\left| \xi \right| }^{2\alpha }}(\Lambda (t) - \Lambda (s))}}{\widehat{\ell }}(\xi ,s)\mathrm{d}s} . \end{aligned}$$

The solution of (2.1) is then written as

$$\begin{aligned} v_\alpha (x,t) = \frac{1}{{\sqrt{2\pi } }}\int _{\mathbb {R}}{\widehat{v}_\alpha (\xi ,t){e^{ix\xi }}\mathrm{d}\xi } . \end{aligned}$$
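
This representation translates directly into a spectral scheme. The following Python sketch (a minimal illustration, not the scheme of Sect. 4; the choices of \(\lambda \), \(\varphi \), \(\ell \) and the grids are assumptions) evaluates \({{\widehat{v}}_\alpha }(\xi ,t)\) on an FFT grid, approximating \(\Lambda \) and the source integral by the trapezoidal rule.

```python
import numpy as np

def forward_solution(phi, ell, lam, x, t_grid, alpha):
    """Spectral evaluation of v_alpha(., t_grid[-1]) from the Fourier representation above."""
    dx = x[1] - x[0]
    xi = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    sym = np.abs(xi) ** (2 * alpha)                      # Fourier symbol of (-Delta)^alpha

    # Lambda(t_k) = int_0^{t_k} lambda(s) ds, trapezoidal rule on the time grid
    lam_vals = np.array([lam(s) for s in t_grid])
    Lam = np.concatenate(([0.0],
                          np.cumsum(0.5 * (lam_vals[1:] + lam_vals[:-1]) * np.diff(t_grid))))

    v_hat = np.exp(-Lam[-1] * sym) * np.fft.fft(phi)

    # source contribution: trapezoidal rule in time for the integral term
    dt = np.diff(t_grid)
    f = [np.exp(-sym * (Lam[-1] - Lam[k])) * np.fft.fft(ell(x, t_grid[k]))
         for k in range(t_grid.size)]
    for k in range(t_grid.size - 1):
        v_hat += 0.5 * dt[k] * (f[k] + f[k + 1])
    return np.real(np.fft.ifft(v_hat))

# illustrative data (lambda, phi, ell and the grids are assumptions, not taken from Sect. 4)
x = np.linspace(-20.0, 20.0, 512, endpoint=False)
t_grid = np.linspace(0.0, 1.0, 101)
v_T = forward_solution(np.exp(-x ** 2), lambda z, t: np.sin(z) * np.exp(-z ** 2),
                       lambda t: 1.0 + 0.5 * t, x, t_grid, alpha=0.8)
```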

Using this representation, we establish the following result concerning the continuity of the solution of (2.1) with respect to the fractional parameter. This type of result may be called the parameter continuity result (see [6]).

Theorem 2.1

Let v be the solution of the classical diffusion problem \((FP_1)\) and \(v_\alpha \) be the solution of problem \((FP_\alpha )\). Assume that \(\ell \in {L^2}(0,T;{\mathbf{{H}}^{3 - 2\alpha }}({\mathbb {R}}))\) and that the initial data \(\varphi \in {{\mathbf {H}}^{3 - 2\alpha }}({\mathbb {R}})\). Then there exists a positive constant \({{\mathcal {K}}}:={{\mathcal {K}}}[\varphi ,\ell ]\) such that

$$\begin{aligned} \mathop {\sup }\limits _{t \in [0,T]} \left\| {\left( {v - {v_\alpha }} \right) ( \cdot ,t)} \right\| \le {{\mathcal {K}}}\left| {1 - \alpha } \right| . \end{aligned}$$

Proof

Since v is the solution of problem \((FP_1)\) and \(v_\alpha \) is the solution of problem \((FP_\alpha )\), we have

$$\begin{aligned} {\widehat{v}}(\xi ,t)&= {e^{ - \Lambda (t){{\left| \xi \right| }^2}}}{\widehat{\varphi }} (\xi ) + \int _0^t {{e^{ - {{\left| \xi \right| }^2}(\Lambda (t) - \Lambda (s))}}{\widehat{\ell }}(\xi ,s)\mathrm{d}s} ,\\ {{\widehat{v}}_\alpha }(\xi ,t)&= {e^{ - \Lambda (t){{\left| \xi \right| }^{2\alpha }}}}{\widehat{\varphi }} (\xi ) + \int _0^t {{e^{ - {{\left| \xi \right| }^{2\alpha }}(\Lambda (t) - \Lambda (s))}}{\widehat{\ell }} (\xi ,s)\mathrm{d}s} . \end{aligned}$$

By direct computation and the Hölder inequality, one has

$$\begin{aligned} \begin{aligned}&{\left\| {\left( {{{{\widehat{v}}}_\alpha } - {\widehat{v}}} \right) ( \cdot ,t)} \right\| ^2} \le 2\int _{\mathbb {R}}{{{\left| {{e^{ - \Lambda (t){{\left| \xi \right| }^2}}} - {e^{ - \Lambda (t){{\left| \xi \right| }^{2\alpha }}}}} \right| }^2}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } \\&\qquad + 2\int _{\mathbb {R}}{{{\left( {\int _0^t {\left| {{e^{ - {{\left| \xi \right| }^2}(\Lambda (t) - \Lambda (s))}} - {e^{ - {{\left| \xi \right| }^{2\alpha }}(\Lambda (t) - \Lambda (s))}}} \right| \left| {{\widehat{\ell }}(\xi ,s)} \right| \mathrm{d}s} } \right) }^2}\mathrm{d}\xi } \\&\quad \le 2\underbrace{\int _{\mathbb {R}} {{{\left| {{e^{ - \Lambda (t){{\left| \xi \right| }^2}}} - {e^{ - \Lambda (t){{\left| \xi \right| }^{2\alpha }}}}} \right| }^2}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } }_{{{{\mathcal {I}}}_1}\left( t \right) }\\&\qquad +2T\underbrace{\int _0^t {\int _{\mathbb {R}} {{{\left| {{e^{ - {{\left| \xi \right| }^2}(\Lambda (t) - \Lambda (s))}} - {e^{ - {{\left| \xi \right| }^{2\alpha }}(\Lambda (t) - \Lambda (s))}}} \right| }^2}{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}\xi } \mathrm{d}s} }_{{{{\mathcal {I}}}_2}\left( t \right) }. \end{aligned} \end{aligned}$$
(2.2)

To obtain a stability estimate with respect to the fractional parameter \(\alpha \), the idea is now to estimate the exponential-difference terms inside \({{\mathcal {I}}}_1\) and \({{\mathcal {I}}}_2\). To do so, let us denote

$$\begin{aligned}{{\mathbb {D}}_1} = \left\{ {\left. {\xi \,} \right| \,\,\left| \xi \right| \ge 1} \right\} \text { and }{{\mathbb {D}}_2} = \left\{ {\left. {\xi \,} \right| \,\,\left| \xi \right| < 1} \right\} .\end{aligned}$$

We have the following cases.

Case 1: \(\xi \in {{\mathbb {D}}_1}\). Using the Lagrange mean value theorem for \(f(x) = {e^{ - x}},x \ge 0\), there exists a positive point \({\xi _1} \in \left[ {{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) ,{{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) } \right] \) such that

$$\begin{aligned}&\left| {{e^{ - {{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) }} - {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) }}} \right| = \left( {\Lambda (t) - \Lambda (s)} \right) {e^{ - {\xi _1}}}\left( {{{\left| \xi \right| }^2} - {{\left| \xi \right| }^{2\alpha }}} \right) \\&\quad \le \left( {\Lambda (t) - \Lambda (s)} \right) {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) }}\left( {{{\left| \xi \right| }^2} - {{\left| \xi \right| }^{2\alpha }}} \right) \\&\quad \le \frac{1}{{{{\left| \xi \right| }^{2\alpha }}}}\left( {{{\left| \xi \right| }^2} - {{\left| \xi \right| }^{2\alpha }}} \right) = {\left| \xi \right| ^{2 - 2\alpha }} - 1. \end{aligned}$$

Again, by using the Lagrange mean value theorem one more time, now for \(g(\theta ) = {a^\theta },\,a \ge 1\), there exists a number \({\alpha _1} \in [0 ,2-2\alpha ]\) such that

$$\begin{aligned} {\left| \xi \right| ^{2 - 2\alpha }} - 1 = 2{\left| \xi \right| ^{{\alpha _1}}}\log \left| \xi \right| \left( {1 - \alpha } \right) . \end{aligned}$$

Since \(0 \le \log x \le x\) for all \( x \ge 1\), we can write that

$$\begin{aligned} \left| {{e^{ - {{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) }} - {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) }}} \right| \le 2{\left| \xi \right| ^{{\alpha _1}}}\log \left| \xi \right| \left| {1 - \alpha } \right| \le 2{\left| \xi \right| ^{3 - 2\alpha }}\left| {1 - \alpha } \right| . \end{aligned}$$

Case 2: \(\xi \in {{\mathbb {D}}_2}\). By adapting the same procedure as in Case 1, there exists a positive point \({\xi _2} \in \left[ {{{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) ,{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) } \right] \) such that

$$\begin{aligned}&\left| {{e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) }} - {e^{ - {{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) }}} \right| \nonumber \\&\quad = \left( {\Lambda (t) - \Lambda (s)} \right) {e^{ - {\xi _2}}}\left( {{{\left| \xi \right| }^{2\alpha }} - {{\left| \xi \right| }^2}} \right) \le \Lambda (T)\left( {{{\left| \xi \right| }^{2\alpha }} - {{\left| \xi \right| }^2}} \right) . \end{aligned}$$

Once again, by the Lagrange mean value theorem, there exists a positive \({\alpha _2} \in [2\alpha ,2]\) such that

$$\begin{aligned} {\left| \xi \right| ^{2\alpha }} - {\left| \xi \right| ^2} = 2{\left| \xi \right| ^{{\alpha _2}}}\left| {\log \left| \xi \right| } \right| \left( {1 - \alpha } \right) \le 2{\left| \xi \right| ^{{\alpha _2} - 1}}\left( {1 - \alpha } \right) \le 2\left( {1 - \alpha } \right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \left| {{e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda (s)} \right) }} - {e^{ - {{\left| \xi \right| }^2}\left( {\Lambda (t) - \Lambda (s)} \right) }}} \right| \le 2\Lambda (T)\left| {1 - \alpha } \right| . \end{aligned}$$

Thus, we arrive at the following estimates

$$\begin{aligned}&{{{\mathcal {I}}}_1}\left( t\right) \le 4\left( {1 + {\Lambda ^2}(T)} \right) {\left| {1 - \alpha } \right| ^2}\int _{\mathbb {R}} {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{3 - 2\alpha }}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi }\nonumber \\&\quad = 4\left( {1 + {\Lambda ^2}(T)} \right) {\left| {1 - \alpha } \right| ^2}\left\| \varphi \right\| _{{{{\mathbf {H}}}^{3 - 2\alpha }}({\mathbb {R}})}^2, \end{aligned}$$
(2.3)
$$\begin{aligned}&{{{\mathcal {I}}}_2}\left( t\right) \le 4\left( {1 + {\Lambda ^2}(T)} \right) {\left| {1 - \alpha } \right| ^2}\int _0^T {\int _{\mathbb {R}} {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{3 - 2\alpha }}{{\left| {{\widehat{\ell }} (\xi ,s)} \right| }^2}\mathrm{d}\xi } \mathrm{d}s} \nonumber \\&\quad = 4\left( {1 + {\Lambda ^2}(T)} \right) {\left| {1 - \alpha } \right| ^2}\left\| \ell \right\| _{{L^2}(0,T;{{{\mathbf {H}}}^{3 - 2\alpha }}({\mathbb {R}}))}^2. \end{aligned}$$
(2.4)

By combining (2.2), (2.3) and (2.4), we arrive at the final estimate

$$\begin{aligned}&\left\| {\left( {{v_\alpha } - v} \right) ( \cdot ,t)} \right\| \le 2\sqrt{2} \left( 1+\Lambda (T)\right) \left( {{{\left\| \varphi \right\| }_{{{\mathbf {H}}^{3 - 2\alpha }}({\mathbb {R}})}} + \sqrt{T} {{\left\| \ell \right\| }_{{L^2}(0,T;{{\mathbf {H}}^{3 - 2\alpha }}({\mathbb {R}}))}}} \right) \left| {1 - \alpha } \right| \\&\quad = {{\mathcal {K}}}\left| {1 - \alpha } \right| , \end{aligned}$$

where \({{\mathcal {K}}}: = {{\mathcal {K}}}[\varphi ,\ell ] = 2\sqrt{2} \left( 1+\Lambda (T)\right) \left( {{{\left\| \varphi \right\| }_{{{\mathbf {H}}^{3 - 2\alpha }}({\mathbb {R}})}} + \sqrt{T} {{\left\| \ell \right\| }_{{L^2}(0,T;{{\mathbf {H}}^{3 - 2\alpha }}({\mathbb {R}}))}}} \right) \). The proof is complete. \(\square \)
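
The estimate of Theorem 2.1 can also be observed numerically in the homogeneous case \(\ell = 0\), where the Fourier solution reduces to \({e^{ - \Lambda (t){{\left| \xi \right| }^{2\alpha }}}}{\widehat{\varphi }}(\xi )\). The short sketch below (illustrative data; \(\lambda \), \(\varphi \) and the grids are assumptions) computes the discrete \(L^2\)-error between v and \(v_\alpha \) at \(t=T\) for \(\alpha \) close to 1; the printed ratios are roughly constant, consistent with the linear rate \(\left| {1 - \alpha } \right| \).

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
dx = x[1] - x[0]
xi = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
phi_hat = np.fft.fft(np.exp(-x ** 2))

T = 1.0
Lambda_T = T + 0.25 * T ** 2                    # Lambda(T) for lambda(t) = 1 + 0.5 t

v1_hat = np.exp(-Lambda_T * xi ** 2) * phi_hat  # classical solution (alpha = 1) at t = T
for alpha in [0.99, 0.995, 0.999]:
    va_hat = np.exp(-Lambda_T * np.abs(xi) ** (2 * alpha)) * phi_hat
    err = np.sqrt(dx) * np.linalg.norm(np.fft.ifft(va_hat - v1_hat))
    print(alpha, err / (1.0 - alpha))           # ratio stays roughly constant
```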

Next, we present some properties of the solution of \((FP_\alpha )\).

Theorem 2.2

The following statements hold:

  a.

    For \(0 \le p \le \alpha \), if \({\varphi } \in L^2\left( {\mathbb {R}}\right) \) and \( \ell \in {L^2}\left( {0,T;{L^2 }\left( {\mathbb {R}}\right) } \right) \), then \({v_\alpha }\left( {\cdot ,t} \right) \in {\mathbf{{H}}^{p }}\left( {\mathbb {R}}\right) \) for all \(0 < t \le T\) and

    $$\begin{aligned} {\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{\mathbf{{H}}^p}\left( {\mathbb {R}}\right) }} \le {{{\mathcal {D}}}}(t)\left( {\left\| \varphi \right\| + {{\left\| \ell \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }}} \right) , \end{aligned}$$

    where \({{\mathcal {D}}}\) is a positive function of t defined by

    $$\begin{aligned} {{{\mathcal {D}}}}(t) = 2\max \left\{ {\sqrt{\frac{{1 + t{\mathrm{{M}}_\mathrm{{1}}}}}{{t{\mathrm{{M}}_\mathrm{{1}}}}}} ,\sqrt{\frac{T}{{\mathrm{{2}}{\mathrm{{M}}_\mathrm{{1}}}}} + {T^2}} } \right\} . \end{aligned}$$
  b.

    For \(q\ge 0\), if \({\varphi } \in {\mathbf{{H}}^q}\left( {\mathbb {R}}\right) \) and \( \ell \in {L^2}\left( {0,T;{\mathbf{{H}}^q }\left( {\mathbb {R}}\right) } \right) \), then \({v_\alpha } \in {L^\infty }\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) \). More precisely,

    $$\begin{aligned} {\left\| {{v_\alpha }} \right\| _{{L^\infty }\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) }} \le \sqrt{2}\max \left\{ {1 ,T} \right\} \left( {{{\left\| \varphi \right\| }_{{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) }} + {{\left\| \ell \right\| }_{{L^2}\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) }}} \right) . \end{aligned}$$
  c.

    If \({\varphi } \in {L^2}\left( {\mathbb {R}}\right) \) and \( \ell \in {L^2}\left( {0,T;{{\mathbf {H}}^\alpha }\left( {\mathbb {R}}\right) } \right) \), then \(v_\alpha \in C\left( {\left( {0,T} \right] ,{L^2}\left( {\mathbb {R}}\right) } \right) \cap {L^\infty }\left( {0,T;{L^2 }\left( {\mathbb {R}}\right) } \right) .\)

  d.

    If \({\varphi } \in {{\mathbf {H}}^{2\alpha }}\left( {\mathbb {R}}\right) \) and \( \ell \in {L^2}\left( {0,T;{{\mathbf {H}}^\alpha }\left( {\mathbb {R}}\right) } \right) \), then \(v_\alpha \in C\left( {\left[ {0,T} \right] ,{L^2}\left( {\mathbb {R}}\right) } \right) \cap {L^\infty }\left( {0,T;{{\mathbf {H}}^\alpha }\left( {\mathbb {R}}\right) } \right) .\)

Proof

a.) Since \({\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{\mathbf{{H}}^p}\left( {\mathbb {R}}\right) }} \le {\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{\mathbf{{H}}^\alpha }\left( {\mathbb {R}}\right) }}\) for all \(0 \le p \le \alpha \), it suffices to prove part (a) of the theorem for \(p=\alpha \). By the Hölder inequality, we see that

$$\begin{aligned}&\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{{\mathbf {H}}^{\alpha } }\left( {\mathbb {R}}\right) }^2 = {\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {{\widehat{\varphi }} (\xi ) + \int _0^t {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) } ^2}\mathrm{d}\xi \\&\quad \le 2\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}} \mathrm{d}\xi \\&\qquad + 2{\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {\int _0^t {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) } ^2}\mathrm{d}\xi \\&\quad \le 2\int _{\mathbb {R}}{{{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }} \over {1 + 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}} \mathrm{d}\xi \\&\qquad + 2t\int _{\left| \xi \right| \le 1} {\left( {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\int _0^t {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}\mathrm{d}s\int _0^t {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } } \right) } \mathrm{d}\xi \\&\qquad + 2t\int _{\left| \xi \right| > 1} {\left( {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\int _0^t {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}\mathrm{d}s\int _0^t {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } } \right) } \mathrm{d}\xi . \end{aligned}$$

Since \(\Lambda (t) \ge {\mathrm{{M}}_\mathrm{{1}}}t \) for all \(0<t\le T\), we can write that

$$\begin{aligned}&\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{{\mathbf {H}}^{\alpha } }\left( {\mathbb {R}}\right) }^2 \le 2\int _{\mathbb {R}}{{{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha }}} \over {1 + t{\mathrm{{M}}_\mathrm{{1}}}{{\left| \xi \right| }^{2\alpha }}}}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}} \mathrm{d}\xi + {2^{\alpha + 1}}{T^2}\int _0^T {\int _{\mathbb {R}}{{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}} } \mathrm{d}\xi \mathrm{d}s\\&\qquad + 2T\int _{\left| \xi \right|> 1} {\left( {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\int _0^t {{{d\left( {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}} \right) } \over {2{{\left| \xi \right| }^{2\alpha }}\lambda (s)d\mathrm{{s}}}}\int _0^T {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } } \right) } \mathrm{d}\xi \\&\quad \le {{2\left( {1 + t{\mathrm{{M}}_\mathrm{{1}}}} \right) } \over {t{\mathrm{{M}}_\mathrm{{1}}}}}\int _{\mathbb {R}}{{{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha }}} \over {\left( {1 + {{\left| \xi \right| }^{2\alpha }}} \right) }}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}} \mathrm{d}\xi + {2^{\alpha + 1}}{T^2}\left\| \ell \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }^2\\&\qquad + {T \over {{\mathrm{{M}}_\mathrm{{1}}}}}\int _{\left| \xi \right| > 1} {\left( {{{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{\alpha } }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}} \over {{{\left| \xi \right| }^{2\alpha }}}}\int _0^t {{{d\left( {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}} \right) } \over {d\mathrm{{s}}}}\int _0^T {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } } \right) } \mathrm{d}\xi \\&\quad \le {2^{\alpha + 1}}\left( {{{1 + t{\mathrm{{M}}_\mathrm{{1}}}} \over {t{\mathrm{{M}}_\mathrm{{1}}}}}\left\| \varphi \right\| ^2 + {T^2}\left\| \ell \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }^2} \right) \\&\qquad + {{{2^\alpha }T} \over {{\mathrm{{M}}_\mathrm{{1}}}}}\int _{\mathbb {R}}{{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - 1} \right) \int _0^T {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } \mathrm{d}\xi \\&\quad \le 4\left( {{{1 + t{\mathrm{{M}}_\mathrm{{1}}}} \over {t{\mathrm{{M}}_\mathrm{{1}}}}}\left\| \varphi \right\| ^2 + \left( {{T \over {\mathrm{{2}}{\mathrm{{M}}_\mathrm{{1}}}}} + {T^2}} \right) \left\| \ell \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }^2} \right) , \end{aligned}$$

which means that \({v_\alpha }\left( {\cdot ,t} \right) \in {\mathbf{{H}}^{\alpha }}\left( {\mathbb {R}}\right) \) for all \(t \in \left( {0,T} \right] \). The above estimate can be rewritten as

$$\begin{aligned}{\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{\mathbf{{H}}^\alpha }\left( {\mathbb {R}}\right) }} \le {{{\mathcal {D}}}}(t)\left( {\left\| \varphi \right\| + {{\left\| \ell \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }}} \right) .\end{aligned}$$

The part (a) of this theorem is proved.

b.) For \(q \ge 0\), we have

$$\begin{aligned} \begin{aligned}&\left\| {{v_\alpha }( \cdot ,t)} \right\| _{{{\mathbf {H}}^q }\left( {\mathbb {R}}\right) }^2 = {\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^q }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {{\widehat{\varphi }} (\xi ) + \int _0^t {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) } ^2}\mathrm{d}\xi \\&\quad \le 2\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^q}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}} \mathrm{d}\xi \\&\qquad + 2{\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^q }{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {\int _0^t {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) } ^2}\mathrm{d}\xi \\&\quad \le 2\left\| \varphi \right\| _{{{\mathbf {H}}^q}\left( {\mathbb {R}}\right) }^2\\&\qquad + 2T\int _{\mathbb {R}}{\left( {{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^q}{e^{ - 2{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\int _0^t {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}\mathrm{d}s\int _0^t {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}\mathrm{d}s} } } \right) } \mathrm{d}\xi \\&\quad \le 2\left\| \varphi \right\| _{{{\mathbf {H}}^q}\left( {\mathbb {R}}\right) }+ 2{T^2}\int _0^T {\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^q}{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}} } \mathrm{d}\xi \mathrm{d}s\\&\quad = 2\left( {\left\| \varphi \right\| _{{{\mathbf {H}}^q}\left( {\mathbb {R}}\right) }^2 + {T^2}\left\| \ell \right\| _{{L^2}\left( {0,T;{{\mathbf {H}}^q}\left( {\mathbb {R}}\right) } \right) }^2} \right) . \end{aligned} \end{aligned}$$
(2.5)

Since the right-hand side of (2.5) is independent of t, we conclude that \({v_\alpha } \in {L^\infty }\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) \). Moreover, (2.5) also implies that

$$\begin{aligned}{\left\| {{v_\alpha }} \right\| _{{L^\infty }\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) }} \le \sqrt{2}\max \left\{ {1 ,T} \right\} \left( {{{\left\| \varphi \right\| }_{{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) }} + {{\left\| \ell \right\| }_{{L^2}\left( {0,T;{\mathbf{{H}}^q}\left( {\mathbb {R}}\right) } \right) }}} \right) .\end{aligned}$$

The part (b) is proved.

c.) By applying the result of part (b) with \(q=0\), we conclude that \({v_\alpha } \in {L^\infty }\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) \). Now we will prove that \(v_\alpha \in C\left( {\left( {0,T} \right] ,{L^2}\left( {\mathbb {R}}\right) } \right) \). For \({t_0} \in \left( {0,T} \right] \), let us evaluate the limit \(\mathop {\lim }\limits _{t \rightarrow t_0^ + } \left\| {{{\widehat{v}}_\alpha }( \cdot ,t) - {{{\widehat{v}}}_\alpha }( \cdot ,{t_0})} \right\| \). We have

$$\begin{aligned}&{{{\widehat{v}}}_\alpha }(\xi ,t) - {{{\widehat{v}}}_\alpha }(\xi ,{t_0})\\&\quad = \underbrace{\left( {{e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right) {\widehat{\varphi }} (\xi )}_{{{{\mathcal {I}}}_3\left( t\right) }} + \underbrace{\int _{{t_0}}^t {{e^{{{\left| \xi \right| }^{2\alpha }}(\Lambda (s) - \Lambda (t))}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} }_{{{{\mathcal {I}}}_4\left( t\right) }} \\&\qquad + \underbrace{\left( {{e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right) \int _0^{{t_0}} {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} }_{{{{\mathcal {I}}}_5\left( t\right) }}. \end{aligned}$$

Since \(1 - {e^{ - x}} \le x\) for all \(x\ge 0\), it follows that

$$\begin{aligned}&\left| {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right| = {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {1 - {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) - \Lambda ({t_0})} \right) }}} \right) \\&\quad \le {\left| \xi \right| ^{2\alpha }}{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {\Lambda (t) - \Lambda ({t_0})} \right) \\&\quad \le {\mathrm{{M}}_2}{\left| \xi \right| ^{2\alpha }}{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}}\left( {t - {t_0}} \right) . \end{aligned}$$

Then, we can assert that

$$\begin{aligned}&{\left\| {{{{\mathcal {I}}}_3}} \right\| ^2} = \int _{\mathbb {R}}{{{\left( {{e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{ - {{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right) }^2}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } \\&\quad = \int _{\mathbb {R}}{{{{{\left| {{e^{ {{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{ {{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right| }^2}} \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) + \Lambda ({t_0})} \right) }}}}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } \\&\quad \le \mathrm{{M}}_2^2{\left( {t - {t_0}} \right) ^2}\int _{\mathbb {R}}{{{{{\left| \xi \right| }^{4\alpha }}} \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}}}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } \\&\quad \le {{\mathrm{{M}}_2^2{{\left( {t - {t_0}} \right) }^2}} \over {2{\Lambda ^2}({t_0})}}\int _{\mathbb {R}}{{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi }\\&\quad \le {{\mathrm{{M}}_2^2{{\left\| \varphi \right\| }^2}} \over {2\mathrm{{M}}_1^2t_0^2}}{\left( {t - {t_0}} \right) ^2}. \end{aligned}$$

In view of the Hölder inequality, one has

$$\begin{aligned}&{\left\| {{{{\mathcal {I}}}_4}} \right\| ^2} = {\int _{\mathbb {R}}{\left( {\int _{{t_0}}^t {{e^{{{\left| \xi \right| }^{2\alpha }}(\Lambda (s) - \Lambda (t))}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) } ^2}\mathrm{d}\xi \\&\quad \le \int _{\mathbb {R}}{\left( {\int _{{t_0}}^t {{e^{2{{\left| \xi \right| }^{2\alpha }}(\Lambda (s) - \Lambda (t))}}\mathrm{d}s} \int _{{t_0}}^t {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}d\mathrm{{s}}} } \right) } \mathrm{d}\xi \\&\quad \le \left( {t - {t_0}} \right) \int _{\mathbb {R}}{\int _{{0}}^T {{{\left| {{\widehat{\ell }} (\xi ,s))} \right| }^2}d\mathrm{{s}}} } \mathrm{d}\xi \\&\quad = \left\| \ell \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) }^2\left( {t - {t_0}} \right) . \end{aligned}$$

The remaining task is now to find a bound for \(\left\| {{\mathcal {I}}}_5 \right\| \). In fact, we have

$$\begin{aligned}&{\left\| {{{{\mathcal {I}}}_5}} \right\| ^2} = \int _{\mathbb {R}}{{{{{\left| {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (t)}} - {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}} \right| }^2}} \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (t) + \Lambda ({t_0})} \right) }}}}{{\left( {\int _0^{{t_0}} {{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}{\widehat{\ell }} (\xi ,s))\mathrm{d}s} } \right) }^2}\mathrm{d}\xi } \\&\quad \le \mathrm{{M}}_2^2{\left( {t - {t_0}} \right) ^2}\int _{\mathbb {R}}{\left( {{{{{\left| \xi \right| }^{4\alpha }}} \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}}}\int _0^{{t_0}} {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}\mathrm{d}s} {{\int _0^{{t_0}} {\left| {{\widehat{\ell }} (\xi ,s))} \right| } }^2}\mathrm{d}s} \right) \mathrm{d}\xi } \\&\quad \mathrm{{ = M}}_2^2{\left( {t - {t_0}} \right) ^2}\int _ {\mathbb {R}}{\left( {{{{{\left| \xi \right| }^{4\alpha }}} \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}}}\int _0^{{t_0}} {{{d\left( {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda (s)}}} \right) } \over {2{{\left| \xi \right| }^{2\alpha }}\lambda (s)\mathrm{d}s}}} {{\int _0^{{t_0}} {\left| {{\widehat{\ell }} (\xi ,s))} \right| } }^2}\mathrm{d}s} \right) \mathrm{d}\xi } \\&\quad \le {{\mathrm{{M}}_2^2{{\left( {t - {t_0}} \right) }^2}} \over {{2\mathrm{{M}}_1}}}\int _{\mathbb {R}}{\left( {{{{{\left| \xi \right| }^{2\alpha }}\left( {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}} - 1} \right) } \over {{e^{2{{\left| \xi \right| }^{2\alpha }}\Lambda ({t_0})}}}}{{\int _0^{{t_0}} {\left| {{\widehat{\ell }} (\xi ,s))} \right| } }^2}\mathrm{d}s} \right) \mathrm{d}\xi } \\&\quad \le {{\mathrm{{M}}_2^2{{\left( {t - {t_0}} \right) }^2}} \over {\mathrm{{2}}{\mathrm{{M}}_1}}}{\int _0^T {\int _{\mathbb {R}}{{{\left| \xi \right| }^{2\alpha }}\left| {{\widehat{\ell }} (\xi ,s))} \right| } } ^2}\mathrm{d}\xi \mathrm{d}s\\&\quad \le {{\mathrm{{M}}_2^2} \over {\mathrm{{2}}{\mathrm{{M}}_1}}}\left\| \ell \right\| _{{L^2}\left( {0,T;{{\mathbf {H}}^\alpha }\left( {\mathbb {R}}\right) } \right) }^2{\left( {t - {t_0}} \right) ^2}. \end{aligned}$$

Having disposed of this preliminary step, we can now return to the main estimate

$$\begin{aligned}&\mathop {\lim }\limits _{t \rightarrow t_0^ + } \left\| {{v_\alpha }( \cdot ,t) - {v_\alpha }( \cdot ,{t_0})} \right\| = \mathop {\lim }\limits _{t \rightarrow t_0^ + } \left\| {{{{\widehat{v}}}_\alpha }( \cdot ,t) - {{{\widehat{v}}}_\alpha }( \cdot ,{t_0})} \right\| \\&\quad \le \mathop {\lim }\limits _{t \rightarrow t_0^ + } \left( {\left\| {{{{\mathcal {I}}}_3}} \right\| + \left\| {{{{\mathcal {I}}}_4}} \right\| + \left\| {{{{\mathcal {I}}}_5}} \right\| } \right) =0. \end{aligned}$$

In the same manner, we can prove that \(\mathop {\lim }\limits _{t \rightarrow t_0^ - } \left\| {{v_\alpha }( \cdot ,t) - {v_\alpha }( \cdot ,{t_0})} \right\| = 0\) for all \(t_0\in \left( 0,T\right] \). It implies that

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow t_0 } \left\| {{v_\alpha }( \cdot ,t) - {v_\alpha }( \cdot ,{t_0})} \right\| = 0. \end{aligned}$$

This leads to \(v_\alpha \in C\left( {\left( {0,T} \right] ;{L^2}\left( {\mathbb {R}}\right) } \right) \cap {L^\infty }\left( {0,T;{L^2 }\left( {\mathbb {R}}\right) } \right) \) as claimed.

d.) Applying the result of part (b) with \(q=\alpha \), we obtain \({v_\alpha } \in {L^\infty }\left( {0,T;{\mathbf{{H}}^\alpha }\left( {\mathbb {R}}\right) } \right) .\) The remaining task is to show that \(\mathop {\lim }\limits _{t \rightarrow {0^ + }} \left\| {{v_\alpha }( \cdot ,t) - {v_\alpha }( \cdot ,0)} \right\| = 0\). For \({{\mathcal {I}}}_4\) and \({{\mathcal {I}}}_5\), the estimates are the same as in part (c) (with \(t_0=0\)); it remains to re-estimate \({{\mathcal {I}}}_3\). We have

$$\begin{aligned} {\left\| {{{{\mathcal {I}}}_3}} \right\| ^2} \le \mathrm{{M}}_2^2{t^2}\int _{\mathbb {R}}{{{\left| \xi \right| }^{4\alpha }}{{\left| {{\widehat{\varphi }} (\xi )} \right| }^2}\mathrm{d}\xi } \le \mathrm{{M}}_2^2{t^2}\int _{\mathbb {R}}{{{\left( {1 + {{\left| \xi \right| }^2}} \right) }^{2\alpha }}{{\left| {\widehat{\varphi }(\xi )} \right| }^2}\mathrm{d}\xi } = \mathrm{{M}}_2^2\left\| \varphi \right\| _{{{\mathbf {H}}^{2\alpha }}\left( {\mathbb {R}}\right) }^2{t^2}. \end{aligned}$$

This leads to \(\mathop {\lim }\limits _{t \rightarrow {0^ + }} \left\| {{v_\alpha }( \cdot ,t) - {v_\alpha }( \cdot ,0)} \right\| = 0\). The proof of the theorem is complete. \(\square \)

3 The Ill-Posedness and Regularization Method for the Backward Problem

This section is devoted to the investigation of the backward problem (1.1), which is the main concern of the current paper. Using the same Fourier technique as in Sect. 2, after some elementary calculations, we get

$$\begin{aligned} {\widehat{u}}(\xi ,t) = {\widehat{g}}(\xi ){e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }} - \int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}{\widehat{\ell }} (\xi ,s)\mathrm{d}s}. \end{aligned}$$
(3.1)

Thus, the exact solution of backward problem (1.1) is obtained by the inverse Fourier transform

$$\begin{aligned} u(x,t) = \frac{1}{{\sqrt{2\pi } }}\int _{\mathbb {R}}{\left( {{\widehat{g}}(\xi ){e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }} - \int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}{\widehat{\ell }} (\xi ,s)\mathrm{d}s} } \right) {e^{ix\xi }}\mathrm{d}\xi } .\nonumber \\\end{aligned}$$
(3.2)

Let us take a closer look at formula (3.2). It is not difficult to see that (3.2) contains the quantity \({e^{{{\left| \xi \right| }^{2\alpha }}\Lambda (T)}}\), which goes to infinity as \(|\xi |\) tends to infinity. Thus, the solution of problem (1.1) is unstable at high frequencies \(\xi \), and the ill-posedness appears. This behavior is quite consistent with the classical case \(\alpha =1\). To be more precise, we present the following theorem.

Theorem 3.1

The backward problem (1.1) is ill-posed in the sense of Hadamard.

Proof

The following example demonstrates the ill-posedness of (1.1). For any \(n\in {\mathbb {N}}\) with \(n\ge 2\), define \({\Omega _n}: = \left\{ {\xi \in {\mathbb {R}};1 \le \xi \le n } \right\} \), and let \(g _n \in L^2\left( {\mathbb {R}}\right) \), \({\ell _n} \in {L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) \) be the perturbed data defined by

$$\begin{aligned} {\widehat{g}}{ _n}\left( {\xi } \right)&= {\left\{ \begin{array}{ll} {\widehat{g}} \left( {\xi } \right) + \frac{1}{{n^\gamma }}, &{} \text {if }\xi \in {\Omega _n}, \\ {\widehat{g}} \left( {\xi } \right) ,&{}\text {if } \xi \in {{\mathbb {R}}}\backslash {\Omega _n}, \end{array}\right. }\\ {{{{\widehat{\ell }} }_n}(\xi ,s)}&= {\left\{ \begin{array}{ll} {{{{\widehat{\ell }} }}(\xi ,s)} + \frac{1}{{n}}, &{} \text {if }\xi \in {\Omega _n}, \\ {{{{\widehat{\ell }} }}(\xi ,s)} ,&{}\text {if } \xi \in {{\mathbb {R}}}\backslash {\Omega _n}, \end{array}\right. } \end{aligned}$$

where \(\gamma \in \left( \frac{1}{2},1\right) \).

Using Parseval’s identity, we see that

$$\begin{aligned}&\left\| {{g _n} -g } \right\| = \left\| {{{{\widehat{g}} }_n} - {\widehat{g}} } \right\| = {\left( {\int _{{\Omega _n}} {\frac{1}{{{n^{2\gamma }}}}\mathrm{d}\xi } } \right) ^{\frac{1}{2}}} \le \frac{1}{{\sqrt{{n^{2\gamma - 1}}} }} \rightarrow 0 \text{ as } n \rightarrow \infty ,\\&\quad {\left\| {{\ell _n} - \ell } \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }} = {\left( {\int _0^T {\int _{{\Omega _n}} {\frac{1}{{{n^2}}}\mathrm{d}\xi } \mathrm{d}s} } \right) ^{\frac{1}{2}}} \le \frac{{\sqrt{T} }}{{\sqrt{n} }} \rightarrow 0 \text{ as } n \rightarrow \infty . \end{aligned}$$

Let u and \(u_n\) be the two solutions of problem (1.1) corresponding to the data \(\left( {g,\ell } \right) \) and \(\left( {{g_n},{\ell _n}} \right) \), respectively, i.e.,

$$\begin{aligned} {\widehat{u}}(\xi ,t)&= {\widehat{g}}(\xi ){e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }} - \int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}{\widehat{\ell }} (\xi ,s)\mathrm{d}s} ,\\ {{\widehat{u}}_n}(\xi ,t)&= {{\widehat{g}}_n}(\xi ){e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }} - \int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}{\widehat{\ell }} _n(\xi ,s)\mathrm{d}s} . \end{aligned}$$

We know that

$$\begin{aligned}&\left\| {\left( {{u_n} - u} \right) \left( { \cdot ,t} \right) } \right\| ^2 = \left\| {\left( {{{{\widehat{u}}}_n} - u} \right) \left( { \cdot ,t} \right) } \right\| ^2\\&\quad = \int _{{\Omega _n}} {{{\left| {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{{n^\gamma }}} - \int _t^T {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{n}\mathrm{d}s} } \right| }^2}\mathrm{d}\xi } \\&\quad = \frac{1}{{{n^{2\gamma }}}}\int _{{\Omega _n}} {{e^{2{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}\mathrm{d}\xi } + \frac{1}{{{n^2}}}\int _{{\Omega _n}} {{{\left( {\int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}\mathrm{d}s} } \right) }^2}\mathrm{d}\xi } \\&\qquad - \frac{2}{{{n^{1 + \gamma }}}}\int _{{\Omega _n}} {\int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) + \Lambda (s) - 2\Lambda (t)} \right) }}\mathrm{d}s} \mathrm{d}\xi } \\&\quad \ge \frac{1}{{{n^{2\gamma }}}}\int _{{\Omega _n}} {{e^{2{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}\mathrm{d}\xi } - \frac{2}{{{n^{1 + \gamma }}}}\int _{{\Omega _n}} {\int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) + \Lambda (s) - 2\Lambda (t)} \right) }}\mathrm{d}s} \mathrm{d}\xi } \\&\quad \ge \frac{{1 - 2T{n^{\gamma - 1}}}}{{{n^{2\gamma }}}}\int _{{\Omega _n}} {{e^{2{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}\mathrm{d}\xi } \\&\quad = \frac{{1 - 2T{n^{\gamma - 1}}}}{{{n^{2\gamma }}}}\int _1^n {{e^{2{\xi ^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}\mathrm{d}\xi } \\&\quad \ge \frac{{1 - 2T{n^{\gamma - 1}}}}{{{n^{2\gamma }}}}\int _1^n {{e^{2\xi \left( {\Lambda (T) - \Lambda (t)} \right) }}\mathrm{d}\xi } \\&\quad = \frac{{\left( {1 - 2T{n^{\gamma - 1}}} \right) \left( {{e^{2n\left( {\Lambda (T) - \Lambda (t)} \right) }} - {e^{2\left( {\Lambda (T) - \Lambda (t)} \right) }}} \right) }}{{2\left( {\Lambda (T) - \Lambda (t)} \right) {n^{2\gamma }}}}. \end{aligned}$$

This leads to

$$\begin{aligned} \mathop {\lim }\limits _{n \rightarrow \infty } \left\| {\left( {{u_n} - u} \right) \left( { \cdot ,t} \right) } \right\| \ge \mathop {\lim }\limits _{n \rightarrow \infty } \frac{{\sqrt{\left( {1 - 2T{n^{\gamma - 1}}} \right) \left( {{e^{2n\left( {\Lambda (T) - \Lambda (t)} \right) }} - {e^{2\left( {\Lambda (T) - \Lambda (t)} \right) }}} \right) } }}{{\sqrt{2\left( {\Lambda (T) - \Lambda (t)} \right) } {n^\gamma }}} = + \infty . \end{aligned}$$

This proves the ill-posedness of the backward problem (1.1). The proof of this theorem is complete. \(\square \)
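
The mechanism behind Theorem 3.1 can also be seen numerically: by (3.1), a data perturbation concentrated at frequency \(\xi \) is amplified by the factor \({e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}\). The sketch below (illustrative values of \(\alpha \), \(\lambda \) and T, chosen only for demonstration) prints this amplification at \(t=0\) for a few frequencies; already at moderate frequencies the backward map is useless without regularization.

```python
import numpy as np

alpha, T = 0.8, 1.0            # illustrative values
Lambda_T = T + 0.25 * T ** 2   # Lambda(T) for the illustrative diffusivity lambda(t) = 1 + 0.5 t

# amplification in (3.1) of a unit data perturbation at frequency xi, recovered at t = 0
for xi in [1.0, 5.0, 10.0, 20.0]:
    print(xi, np.exp(np.abs(xi) ** (2 * alpha) * Lambda_T))
```
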

As shown in Theorem 3.1, the solution of the backward problem (1.1) is unstable with respect to the data. Thus, a regularization method is needed to mitigate the impact of this ill-posedness; otherwise, a standard computation may fail to describe the solution. In addition, the final data in practice are obtained by measurement, which always contains errors. To model this, for a noise level \(\delta \), let us denote the measured data corresponding to \(\left( {g,\ell } \right) \) by \(\left( {{g_\delta },{\ell _\delta }} \right) \); the pair \(\left( {{g_\delta },{\ell _\delta }} \right) \) is naturally required to satisfy the following error bound

$$\begin{aligned} \max \left\{ {\left\| {{g_\delta } - g} \right\| ,{{\left\| {{\ell _\delta } - \ell } \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}} \right\} \le \delta . \end{aligned}$$
(3.3)
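
For numerical experiments, final data satisfying (3.3) can be generated, for example, by adding a random perturbation rescaled to the noise level \(\delta \) (a common choice, assumed here rather than taken from Sect. 4; a similar rescaling can be used for \(\ell _\delta \)):

```python
import numpy as np

def add_noise(g, delta, dx, seed=0):
    """Return g_delta whose discrete L^2 distance to g is exactly delta, so (3.3) holds for g."""
    noise = np.random.default_rng(seed).standard_normal(g.shape)
    noise *= delta / (np.sqrt(dx) * np.linalg.norm(noise))   # rescale to noise level delta
    return g + noise
```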

As previously mentioned, there are exponentially growing factors in the solution formula, and these lead to the ill-posedness. Therefore, if one successfully controls these factors, the ill-posedness can be overcome. For regularization on an unbounded domain, the convolution regularization introduced in [33, 47] has proved to be a very effective method; it may simply be regarded as an interesting application of the convolution property of the Fourier transform. In the current paper, we propose a filter approach that is originally inspired by the convolution regularization in [33, 47]. Rather than dealing with an approximate problem in the form of a convolution operator as in [33, 47], we work directly in the frequency domain: by Parseval’s equation, modifying problem (1.1) is equivalent to modifying its Fourier-transformed counterpart. In particular, we directly use a filter, denoted by \({{\mathbb {F}}}_{\mu ,\beta }\), to regularize problem (1.1). To be more precise, for \(\mu > 0,\beta \ge 0\), the filter \({{\mathbb {F}}}_{\mu ,\beta }\) is defined by

$$\begin{aligned}&{{\mathbb {F}}_{\mu ,\beta } }\left( {{{{\widehat{g}}}_\delta }} \right) (\xi ): = \frac{{{{{\widehat{g}}}_\delta }(\xi )}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}},\\&\quad {{\mathbb {F}}_{\mu ,\beta } }\left( {{{{\widehat{\ell }} }_\delta }} \right) (\xi ,t): = \frac{{{{{\widehat{\ell }} }_\delta }(\xi ,t)}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( \Lambda \left( T \right) +\beta \right) }}}}. \end{aligned}$$

Using this filter, we consider the following regularized Fourier problem

$$\begin{aligned} \left\{ \begin{aligned}&\frac{\partial }{{\partial t}}{\widehat{u}}_{\mu ,\beta } ^\delta (\xi ,t) + {\left| \xi \right| ^{2\alpha }}\lambda (t){\widehat{u}}_{\mu ,\beta } ^\delta (\xi ,t) = {{\mathbb {F}}_{\mu ,\beta } }\left( {{{{\widehat{\ell }} }_\delta }} \right) (\xi ,t),&t\in [0,T], \xi \in {\mathbb {R}},\\&{\widehat{u}}_{\mu ,\beta } ^\delta (\xi ,T) = {{\mathbb {F}}_{\mu ,\beta } }\left( {{{{\widehat{g}}}_\delta }} \right) (\xi ),&\xi \in {\mathbb {R}}. \end{aligned} \right. \nonumber \\ \end{aligned}$$
(3.4)

Using the same technique as in Sect. 2, the solution of problem (3.4) is obtained by

$$\begin{aligned} {\hat{u}}_{\mu ,\beta }^\delta (\xi ,t) = \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{\hat{g}}_\delta }(\xi ) - \int _t^T {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{\ell }} }_\delta }(\xi ,s)\mathrm{d}s} .\nonumber \\ \end{aligned}$$
(3.5)

Equivalently,

$$\begin{aligned} u_{\mu ,\beta }^\delta (x,t)= & {} \frac{1}{{\sqrt{2\pi } }}\int _{\mathbb {R}} \left( \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{g}}}_\delta }(\xi )\right. \nonumber \\&\qquad \left. - \int _t^T {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{\ell }} }_\delta }(\xi ,s)\mathrm{d}s} \right) {e^{i\xi x}}\mathrm{d}\xi . \end{aligned}$$
(3.6)
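
A direct numerical realization of (3.5)–(3.6) is straightforward with the FFT. The sketch below is a minimal illustration, not the experiment of Sect. 4: the quadrature and the overflow-safe evaluation of the filtered kernels are our own implementation choices, while the formulas follow (3.5). It reconstructs \(u_{\mu ,\beta }^\delta ( \cdot ,t)\) from noisy final data.

```python
import numpy as np

def regularized_backward(g_delta, ell_delta, lam, x, t_grid, t_index, alpha, mu, beta):
    """Evaluate the regularized solution (3.5)-(3.6) at time t_grid[t_index] via the FFT."""
    dx = x[1] - x[0]
    xi = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    sym = np.abs(xi) ** (2 * alpha)

    # Lambda on the time grid (trapezoidal rule)
    lam_vals = np.array([lam(s) for s in t_grid])
    Lam = np.concatenate(([0.0],
                          np.cumsum(0.5 * (lam_vals[1:] + lam_vals[:-1]) * np.diff(t_grid))))

    B = sym * (Lam[-1] + beta)          # exponent appearing in the filter denominator

    def kernel(expo):
        # e^{expo} / (1 + mu e^{B}) written as e^{expo - B} / (e^{-B} + mu); here expo <= B,
        # so both exponentials stay bounded and no overflow occurs
        return np.exp(expo - B) / (np.exp(-B) + mu)

    u_hat = kernel(sym * (Lam[-1] - Lam[t_index])) * np.fft.fft(g_delta)

    # subtract the filtered source term over [t, T] (trapezoidal rule in time)
    dt = np.diff(t_grid)
    f = [kernel(sym * (Lam[k] - Lam[t_index])) * np.fft.fft(ell_delta(x, t_grid[k]))
         for k in range(t_index, t_grid.size)]
    for j in range(len(f) - 1):
        u_hat -= 0.5 * dt[t_index + j] * (f[j] + f[j + 1])
    return np.real(np.fft.ifft(u_hat))
```

In practice, this routine would be combined with a noise model such as the one sketched after (3.3) and with a choice of \(\mu \) and \(\beta \) in terms of \(\delta \), as discussed around Theorem 3.3 below.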

To perform the convergence analysis between the regularized and exact solution, we first present the following auxiliary results.

Lemma 3.1

Let \(\beta , r, a>0\). Then, we have

$$\begin{aligned} \frac{1}{{\beta {x^r} + {e^{ - xa}}}} \le {\left( {\frac{{2a}}{r}} \right) ^r}{\beta ^{ - 1}}{\left( {1 + \log \frac{a}{{r\root r \of {\beta }}}} \right) ^{ - r}}\ \text { for all } x>0. \end{aligned}$$

Proof

Applying the inequality \(\left( a_1+a_2\right) ^p\le 2^p\left( a_1^p+a_2^p\right) \), valid for all \(a_1,a_2,p>0\), we see that

$$\begin{aligned} \frac{1}{{\beta {x^r} + {e^{ - xa}}}} \le \frac{{{2^r}}}{{{{\left( {\root r \of {\beta }x + {e^{ - \frac{{xa}}{r}}}} \right) }^r}}} = :{2^r}{\left( {f_1\left( x \right) } \right) ^r}. \end{aligned}$$

By direct computation, we know that

$$\begin{aligned} f_1\left( x \right) \le f_1\left( {\frac{r}{a}\log \frac{a}{{r\root r \of {\beta }}}} \right) = \frac{a}{{r\root r \of {\beta }\left( {1 + \log \frac{a}{{r\root r \of {\beta }}}} \right) }}. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{{\beta {x^r} + {e^{ - xa}}}} \le {\left( {\frac{{2a}}{r}} \right) ^r}{\beta ^{ - 1}}{\left( {1 + \log \frac{a}{{r\root r \of {\beta }}}} \right) ^{ - r}}. \end{aligned}$$

The proof is completed. \(\square \)
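
As a quick sanity check of Lemma 3.1, one can compare both sides of the inequality on a fine grid. The values below are purely illustrative and are chosen so that \(a/(r\root r \of {\beta }) > 1\), which keeps the logarithm nonnegative, as in the application in Proposition 3.1.

```python
import numpy as np

beta, r, a = 1e-4, 2.0, 3.0        # illustrative values with a / (r * beta**(1/r)) > 1
x = np.linspace(1e-6, 50.0, 200000)
lhs = 1.0 / (beta * x ** r + np.exp(-a * x))
rhs = (2.0 * a / r) ** r / beta * (1.0 + np.log(a / (r * beta ** (1.0 / r)))) ** (-r)
assert lhs.max() <= rhs            # the bound of Lemma 3.1 holds on the grid
```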

Lemma 3.2

Let \(\alpha >0\), and \(0<a<b\). Then, we have

$$\begin{aligned} \frac{{{x^a}}}{{1 + \alpha {x^b}}} \le {\alpha ^{ - \frac{a}{b}}},\ \forall x > 0. \end{aligned}$$

Proof

Consider the following function \(f_2\left( x \right) = \frac{{{x^a}}}{{1 + \alpha {x^b}}}\) for \(x>0\). Then,

$$\begin{aligned}&{f_2}\left( x \right) \le {f_2}\left( {\left( {\frac{a}{{\alpha \left( {b - a} \right) }}} \right) ^{\frac{1}{b}}} \right) = \frac{{b - a}}{b}{\left( {\frac{a}{{b - a}}} \right) ^{\frac{a}{b}}}{\alpha ^{ - \frac{a}{b}}}\\&\quad = {\left( {{{b - a} \over b}} \right) ^{1 - {a \over b}}}{\left( {{a \over b}} \right) ^{{a \over b}}}{\alpha ^{ - {a \over b}}} \le {\alpha ^{ - {a \over b}}} \text{ for } \text{ all } x > 0 \end{aligned}$$

which completes the proof of Lemma 3.2. \(\square \)

Using Lemmas 3.1 and 3.2, we obtain the following proposition, which is very important for the proofs of the next theorems.

Proposition 3.1

The following inequality holds

$$\begin{aligned}&\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}\\&\quad \le {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{\mu ^{ - 1}}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{{p{\mu ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}{\left( {1 + {\xi ^2}} \right) ^{\frac{p}{2}}}, \end{aligned}$$

where \(p> 0\).

Proof

By Lemma 3.1, one has

$$\begin{aligned}&\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}} = \frac{1}{{\mu + {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}} = \frac{{{{\left( {1 + {\xi ^2}} \right) }^{\frac{p}{2}}}}}{{{{\left( {1 + {\xi ^2}} \right) }^{\frac{p}{2}}}\left( {\mu + {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}} \right) }}\\&\quad \le \frac{{{{\left( {1 + {\xi ^2}} \right) }^{\frac{p}{2}}}}}{{\mu {{\left| \xi \right| }^p} + {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}} = \frac{{{{\left( {1 + {\xi ^2}} \right) }^{\frac{p}{2}}}}}{{\mu {{\left( {{{\left| \xi \right| }^{2\alpha }}} \right) }^{\frac{p}{{2\alpha }}}} + {e^{ - {{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}\\&\quad \le {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{\mu ^{ - 1}}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{{p{\mu ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}{\left( {1 + {\xi ^2}} \right) ^{\frac{p}{2}}} \end{aligned}$$

which completes the proof. \(\square \)

Proposition 3.2

The following inequality holds

$$\begin{aligned} \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}}\le {\left\{ \begin{array}{ll} {\mu ^{\vartheta - 1}}{e^{\vartheta {{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }},&{}\text { if }\vartheta \in \left( 0,1\right) ,\\ {{{e^{\vartheta {{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}},&{}\text { if } \vartheta \ge 1. \end{array}\right. } \end{aligned}$$

Proof

In case \( \vartheta \ge 1\), the proof is obvious. In the remaining case, using Lemma 3.2, one has

$$\begin{aligned}&\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}} = \frac{{{e^{\left( {1 - \vartheta } \right) {{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{e^{\vartheta {{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}\\&\quad \le {\mu ^{\vartheta - 1}}{e^{\vartheta {{\left| \xi \right| }^{2\alpha }}\Lambda \left( T \right) }}. \end{aligned}$$

The proof is completed. \(\square \)

Now, we are in a position to present the convergence estimate. The first result reads as follows.

Theorem 3.2

Let \({v_1 }\) and \({v_2 }\) be two solutions of the regularized problem (3.4) corresponding to the data \(\left( {g_1,\ell _1 } \right) \) and \(\left( {g_2,\ell _2 } \right) \), respectively. Then we have

$$\begin{aligned} \left\| {\left( {{v_1} - {v_2}} \right) ( \cdot ,t)} \right\| \le {\mu ^{\frac{{\Lambda \left( t \right) +\beta }}{{\Lambda \left( T \right) + \beta }} - 1}}\left( {\left\| {{g_1} - {g_2}} \right\| + \sqrt{T} {{\left\| {{\ell _1} - {\ell _2}} \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}} \right) . \end{aligned}$$

Proof

We have

$$\begin{aligned} {{{\hat{v}}}_1}(\xi ,t)&= \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{g}}}_1}(\xi ) - \int _t^T {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{\ell }} }_1}(\xi ,s)\mathrm{d}s} ,\\ {{{\hat{v}}}_2}(\xi ,t)&= \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{g}}}_2}(\xi ) - \int _t^T {\frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{{\hat{\ell }} }_2}(\xi ,s)\mathrm{d}s} . \end{aligned}$$

In view of Parseval’s identity and Lemma 3.2, one has

$$\begin{aligned}&\left\| {\left( {{v_1} - {v_2}} \right) ( \cdot ,t)} \right\| = \left\| \frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}}\left( {{{{\widehat{g}}}_1}( \cdot ) - {{{\widehat{g}}}_2}( \cdot )} \right) \right. \\&\qquad \left. - \int _t^T {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}}\left( {{{{\widehat{\ell }} }_1}( \cdot ,s) - {{{\widehat{\ell }} }_2}( \cdot ,s)} \right) \mathrm{d}s} \right\| \\&\quad \le \left\| {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}}\left( {{{{\widehat{g}}}_1}( \cdot ) - {{{\widehat{g}}}_2}( \cdot )} \right) } \right\| \\&\qquad + \left\| {\int _t^T {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}}\left( {{{{\widehat{\ell }} }_1}( \cdot ,s) - {{{\widehat{\ell }} }_2}( \cdot ,s)} \right) \mathrm{d}s} } \right\| \\&\quad \le \mathop {\sup }\limits _{\xi \in {\mathbb {R}}} \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) + \beta } \right) }}}}\left\| {{g_1} - {g_2}} \right\| \\&\qquad + \sqrt{T} {\left( {\int _0^T {{{\left\| {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}{\left( \Lambda \left( T \right) +\beta \right) }}}}}\left( {{{{\widehat{\ell }} }_1}( \cdot ,s) - {{{\widehat{\ell }} }_2}( \cdot ,s)} \right) } \right\| }^2}\mathrm{d}s} } \right) ^{\frac{1}{2}}}\\&\quad \le \mathop {\sup }\limits _{\xi \in {\mathbb {R}}} \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) + \beta } \right) }}}}\left\| {{g_1} - {g_2}} \right\| \\&\qquad + \sqrt{T} \mathop {\sup }\limits _{\xi \in {\mathbb {R}}} \frac{{{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }}}}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) + \beta } \right) }}}}{\left\| {{\ell _1} - {\ell _2}} \right\| _{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}\\&\quad \le {\mu ^{\frac{{\Lambda \left( t \right) +\beta }}{{\Lambda \left( T \right) + \beta }} - 1}}\left( {\left\| {{g_1} - {g_2}} \right\| + \sqrt{T} {{\left\| {{\ell _1} - {\ell _2}} \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}} \right) . \end{aligned}$$

The proof is complete. \(\square \)

Next, we proceed to show the convergence rate between the regularized and the exact solution. The most important theorem in this section can be stated as follows.

Theorem 3.3

Let \(\delta \in \left( 0,1\right) \). Assume that there exist constants \(p\) and \(\mathbf{{E}}_0\) such that the exact solution satisfies the following a priori bound

$$\begin{aligned} {\left\| {u\left( { \cdot ,t} \right) } \right\| _{{\mathbf{{H}}^p}\left( {\mathbb {R}}\right) }} \le \mathbf{{E}}_0, \end{aligned}$$
(3.7)

where \(p>0\) and \( {{{\mathbf {E}}}_0} > \delta {\left( {\frac{p}{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}} \right) ^{\frac{p}{{2\alpha }}}}\). Let the measured data \(\left( {{g_\delta },{\ell _\delta }} \right) \) satisfy (3.3). Let \(u\) be the solution of the backward problem (1.1) and \(u_{\mu ,\beta }^\delta \) be the solution of the regularized problem (3.4) corresponding to the measured data. If the regularization parameters \(\mu , \beta \) are selected by

$$\begin{aligned} \mu = \frac{\delta }{\mathbf{{E}}_0}, \beta >0, \end{aligned}$$
(3.8)

then for \(0 \le t < T\), the following convergence estimate holds

$$\begin{aligned}&\left\| {\left( {u_{\mu ,\beta }^\delta - u} \right) \left( { \cdot ,t} \right) } \right\| \le \left( {1 + \sqrt{T} } \right) {{\mathbf {E}}}_0^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{\Lambda \left( T \right) + \beta }}}{\delta ^{\frac{{\Lambda \left( t \right) + \beta }}{{\Lambda \left( T \right) + \beta }}}}\nonumber \\&\quad + {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{{{\mathbf {E}}}_0}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) {{\mathbf {E}}}_0^{\frac{{2\alpha }}{p}}}}{{p{\delta ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}. \end{aligned}$$
(3.9)

Proof

Let \(u_{\mu ,\beta }\) be defined as in (3.6) with the exact data, i.e.,

$$\begin{aligned} {u_{\mu ,\beta } }(x,t) = \int _{\mathbb {R}} {\frac{{{\widehat{g}}(\xi ){e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (T) - \Lambda (t)} \right) }} - \int _t^T {{e^{{{\left| \xi \right| }^{2\alpha }}\left( {\Lambda (s) - \Lambda (t)} \right) }}{\widehat{\ell }} (\xi ,s)\mathrm{d}s} }}{{1 + \mu {e^{{{\left| \xi \right| }^{2\alpha }}\left( \Lambda \left( T \right) +\beta \right) }}}}} {e^{i\xi x}}\mathrm{d}\xi . \end{aligned}$$

From the triangle inequality, one has

$$\begin{aligned} \left\| {u_{\mu ,\beta }^\delta \left( { \cdot ,t} \right) - u\left( { \cdot ,t} \right) } \right\| \le \underbrace{\left\| {u_{\mu ,\beta }^\delta \left( { \cdot ,t} \right) - {u_{\mu ,\beta }}\left( { \cdot ,t} \right) } \right\| }_{{{{\mathcal {J}}}_1}\left( t \right) } + \underbrace{\left\| {{u_{\mu ,\beta }}\left( { \cdot ,t} \right) - u\left( { \cdot ,t} \right) } \right\| }_{{{{\mathcal {J}}}_2}\left( t \right) }. \end{aligned}$$
(3.10)

First, we estimate \({\mathcal {J}}_1\). Using Theorem 3.2, the noise bound (3.3), and the parameter choice (3.8), we obtain

$$\begin{aligned}&{{\mathcal {J}}_1}\left( t \right) \le {\mu ^{\frac{{\Lambda \left( t \right) + \beta }}{{\Lambda \left( T \right) + \beta }} - 1}}\left( {\left\| {{g_\delta } - g} \right\| + \sqrt{T} {{\left\| {{\ell _\delta } - \ell } \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}} \right) \nonumber \\&\quad \le \left( {1 + \sqrt{T} } \right) {{\mathbf {E}}}_0^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{\Lambda \left( T \right) + \beta }}}{\delta ^{\frac{{\Lambda \left( t \right) + \beta }}{{\Lambda \left( T \right) + \beta }}}}. \end{aligned}$$
(3.11)

The remaining task is to estimate \({\mathcal {J}}_2\). In fact, we have

$$\begin{aligned}&{{\mathcal {J}}_2}\left( t \right) = \left\| {\left( {{u_{\mu ,\beta }} - u} \right) \left( { \cdot ,t} \right) } \right\| = \left\| {\left( {{{{\widehat{u}}}_{\mu ,\beta }} - {\widehat{u}}} \right) \left( { \cdot ,t} \right) } \right\| \\&\quad = \mu \left\| {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}\left( {\Lambda \left( T \right) + \beta } \right) }}}}{\widehat{u}}\left( { \cdot ,t} \right) } \right\| . \end{aligned}$$

By Proposition 3.1, (3.8), and (3.7), we arrive at the following estimates

$$\begin{aligned} \begin{aligned}&{{\mathcal {J}}_2}\left( t \right) \le \mu {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{\mu ^{ - 1}}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{{p{\mu ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}\left\| {{{\left( {1 + {{\left| \cdot \right| }^2}} \right) }^{\frac{p}{2}}}{\widehat{u}}\left( { \cdot ,t} \right) } \right\| \\&\quad = {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) {{\mathbf {E}}}_0^{\frac{{2\alpha }}{p}}}}{{p{\delta ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}{\left\| {u\left( { \cdot ,t} \right) } \right\| _{{{{\mathbf {H}}}^p}\left( {\mathbb {R}} \right) }}\\&\quad \le {\left( {\frac{{4\alpha \left( {\Lambda \left( T \right) + \beta } \right) }}{p}} \right) ^{\frac{p}{{2\alpha }}}}{{{\mathbf {E}}}_0}{\left( {1 + \log \frac{{2\alpha \left( {\Lambda \left( T \right) + \beta } \right) {{\mathbf {E}}}_0^{\frac{{2\alpha }}{p}}}}{{p{\delta ^{\frac{{2\alpha }}{p}}}}}} \right) ^{\frac{{ - p}}{{2\alpha }}}}. \end{aligned} \end{aligned}$$
(3.12)

Plugging (3.11) and (3.12) into (3.10), we obtain the conclusion (3.9). The theorem is proved. \(\square \)
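To get a quick feel for the bound (3.9), the following short Python computation evaluates its two terms for a sequence of noise levels. All parameter values (including \(\Lambda (t)=t^2+t\), which corresponds to \(\lambda (t)=2t+1\), and \({{\mathbf {E}}}_0=1\), \(p=1\), \(\beta =10^{-3}\)) are hypothetical choices made only for illustration; for small \(\delta \) the logarithmic term dominates, so the overall rate is logarithmic in \(\delta \).

import numpy as np

def rate_bound(delta, t, T, alpha, p, beta, E0, Lam):
    """The two terms on the right-hand side of (3.9) under the choice (3.8), mu = delta/E0."""
    LT, Lt = Lam(T), Lam(t)
    holder = (1.0 + np.sqrt(T)) * E0 ** ((LT - Lt) / (LT + beta)) \
             * delta ** ((Lt + beta) / (LT + beta))
    log_arg = 2.0 * alpha * (LT + beta) * E0 ** (2 * alpha / p) / (p * delta ** (2 * alpha / p))
    log_term = (4.0 * alpha * (LT + beta) / p) ** (p / (2 * alpha)) * E0 \
               * (1.0 + np.log(log_arg)) ** (-p / (2 * alpha))
    return holder, log_term

# purely illustrative setting: lambda(t) = 2t + 1, hence Lam(t) = t**2 + t
Lam = lambda s: s ** 2 + s
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    h_term, l_term = rate_bound(delta, t=0.5, T=1.0, alpha=0.6, p=1.0,
                                beta=1e-3, E0=1.0, Lam=Lam)
    print(f"delta={delta:.0e}  Hoelder term={h_term:.3e}  log term={l_term:.3e}")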

Theorem 3.4

Let \(\delta \in \left( 0,1\right) \). Assume that there exist constants \(\vartheta , \mathbf{{E}}_1\) such that the exact solution satisfies the following a priori bound

$$\begin{aligned} {\left( {\int _{\mathbb {R}} {{e^{2\vartheta {{\left| \xi \right| }^{2\alpha }} {\Lambda \left( T \right) }}}{{\left| {{\widehat{u}}\left( {\xi ,t} \right) } \right| }^2}}\mathrm{d}\xi } \right) ^{\frac{1}{2}}} \le {{{\mathbf {E}}}_1}, \end{aligned}$$

where \(\vartheta ,{{{\mathbf {E}}}_1}>0\). Let the measured data \({g_\delta }\) and the noisy source \({\ell _\delta }\) satisfy (3.3). If the regularization parameters \(\mu , \beta \) are selected by

$$\begin{aligned} \beta =0,\ \mu = {\left\{ \begin{array}{ll} \left( {\frac{\delta }{{{{{\mathbf {E}}}_1}}}} \right) ^{\frac{{\Lambda \left( T \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}, &{}\text {if\ }\vartheta \in \left( 0,1\right) ,\\ {\left( {\frac{\delta }{{{{{\mathbf {E}}}_1}}}} \right) ^{\frac{{\Lambda \left( T \right) }}{{2\Lambda \left( T \right) - \Lambda \left( t \right) }}}},&{}\text {if\ } \vartheta \ge 1, \end{array}\right. } \end{aligned}$$

then for \(0 \le t < T\), the following convergence estimate holds

$$\begin{aligned} \left\| {\left( {u_{\mu ,\beta }^\delta - u} \right) \left( { \cdot ,t} \right) } \right\| \le {\left\{ \begin{array}{ll} \left( {2 + \sqrt{T} } \right) {{\mathbf {E}}}_1^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}{\delta ^{\frac{{\vartheta \Lambda \left( T \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}}&{}\text {if\ }\vartheta \in \left( 0,1\right) \\ \left( {2 + \sqrt{T} } \right) {{\mathbf {E}}}_1^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{2\Lambda \left( T \right) - \Lambda \left( t \right) }}}{\delta ^{\frac{{\Lambda \left( T \right) }}{{2\Lambda \left( T \right) - \Lambda \left( t \right) }}}}&{}\text {if\ }\vartheta \ge 1 \end{array}\right. }. \end{aligned}$$

Proof

Proceeding as in the proof of Theorem 3.3, we obtain

$$\begin{aligned}&\left\| {u_{\mu ,\beta }^\delta \left( { \cdot ,t} \right) - u\left( { \cdot ,t} \right) } \right\| \le \left\| {u_{\mu ,\beta }^\delta \left( { \cdot ,t} \right) - { u_{\mu ,\beta }}\left( { \cdot ,t} \right) } \right\| + \left\| {{u_{\mu ,\beta }}\left( { \cdot ,t} \right) - u\left( { \cdot ,t} \right) } \right\| \\&\quad \le {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\left( {\left\| {{g_\delta } - g} \right\| + \sqrt{T} {{\left\| {{\ell _\delta } - \ell } \right\| }_{{L^2}\left( {0,T;{L^2}\left( {\mathbb {R}} \right) } \right) }}} \right) \\&\qquad + \mu \left\| {\frac{{{e^{{{\left| \cdot \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{{1 + \mu {e^{{{\left| \cdot \right| }^{2\alpha }}\Lambda \left( T \right) }}}}{\widehat{u}}\left( { \cdot ,t} \right) } \right\| . \end{aligned}$$

By Proposition 3.2, the a priori bound, and the choice of \(\mu \), one has

$$\begin{aligned}&\left\| {u_{\mu ,\beta }^\delta \left( { \cdot ,t} \right) - u\left( { \cdot ,t} \right) } \right\| \\&\quad \le {\left\{ \begin{array}{ll} \left( {1 + \sqrt{T} } \right) {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\delta + {\mu ^{\vartheta }}\left\| {{e^{\vartheta {{\left| \cdot \right| }^{2\alpha }}\Lambda \left( T \right) }}{\widehat{u}}\left( { \cdot ,t} \right) } \right\| &{}\text {if\ }\vartheta \in \left( 0,1\right) \\ \left( {1 + \sqrt{T} } \right) {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\delta + \mu \left\| {{e^{\vartheta {{\left| \cdot \right| }^{2\alpha }}\Lambda \left( T \right) }}{\widehat{u}}\left( { \cdot ,t} \right) } \right\| &{}\text {if\ }\vartheta \ge 1 \end{array}\right. }\\&\quad \le {\left\{ \begin{array}{ll} \left( {1 + \sqrt{T} } \right) {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\delta + {\mu ^{\vartheta }}{{{\mathbf {E}}}_1}&{}\text {if\ }\vartheta \in \left( 0,1\right) \\ \left( {1 + \sqrt{T} } \right) {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\delta + \mu {{{\mathbf {E}}}_1}&{}\text {if\ }\vartheta \ge 1 \end{array}\right. }\\&\quad \le {\left\{ \begin{array}{ll} \left( {2 + \sqrt{T} } \right) {{\mathbf {E}}}_1^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}{\delta ^{\frac{{\vartheta \Lambda \left( T \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}}&{}\text {if\ }\vartheta \in \left( 0,1\right) \\ \left( {2 + \sqrt{T} } \right) {{\mathbf {E}}}_1^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{2\Lambda \left( T \right) - \Lambda \left( t \right) }}}{\delta ^{\frac{{\Lambda \left( T \right) }}{{2\Lambda \left( T \right) - \Lambda \left( t \right) }}}}&{}\text {if\ }\vartheta \ge 1 \end{array}\right. }. \end{aligned}$$

The proof is completed. \(\square \)
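For the reader's convenience, the last inequality in the case \(\vartheta \in \left( 0,1\right) \) holds because the stated choice of \(\mu \) makes both terms proportional to the same power of \(\delta \); a direct computation gives

$$\begin{aligned} \left( {1 + \sqrt{T} } \right) {\mu ^{\frac{{\Lambda \left( t \right) }}{{\Lambda \left( T \right) }} - 1}}\delta + {\mu ^\vartheta }{{{\mathbf {E}}}_1} = \left( {2 + \sqrt{T} } \right) {{\mathbf {E}}}_1^{\frac{{\Lambda \left( T \right) - \Lambda \left( t \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}{\delta ^{\frac{{\vartheta \Lambda \left( T \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}} \quad \text {for } \mu = {\left( {\frac{\delta }{{{{{\mathbf {E}}}_1}}}} \right) ^{\frac{{\Lambda \left( T \right) }}{{\left( {1 + \vartheta } \right) \Lambda \left( T \right) - \Lambda \left( t \right) }}}}, \end{aligned}$$

and the case \(\vartheta \ge 1\) is handled in the same way with \({\mu ^\vartheta }\) replaced by \(\mu \).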

Some comments on the a priori bound (3.7) are in order. It is well known in regularization theory that, in order to obtain a convergence rate between the regularized and exact solutions, one needs some a priori information on the exact solution. In the present paper, we use the a priori condition (3.7). Let \(\varphi \) be the initial status of problem (1.1). At the initial time \(t=0\), the a priori condition (3.7) is equivalent to the assumption that \(\varphi \in {\mathbf{{H}}^p}\left( {\mathbb {R}}\right) \). However, for \(t>0\) and \(0\le p \le \alpha \), it follows from Theorem 2.2 (part a) that \(\varphi \in L^2({\mathbb {R}})\) and \(\ell \in {L^2}\left( {0,T;{L^2}\left( {\mathbb {R}}\right) } \right) \) already guarantee \(u(\cdot ,t) \in {\mathbf{{H}}^p}\left( {\mathbb {R}}\right) \). Hence, the a priori condition (3.7) is not very restrictive and, from our point of view, the technique in this paper is applicable to a wide class of functions.

4 The Numerical Illustration

In this section, we illustrate the theoretical results of Sect. 3 through some specific numerical examples. For numerical purposes, one is usually interested in a bounded domain. In this spirit, let L be a positive number; we consider the numerical solution of the following backward problem

$$\begin{aligned} \left\{ \begin{aligned}&{u_t}(x,t) + \lambda (t){\left( { - \Delta } \right) ^\alpha }u(x,t) = \ell (x,t),&t\in [0,T], x\in [-L,L],\\&u(x,T) = g(x),&x\in [-L,L], \\&u(x,t) = 0,&t\in [0,T], x \notin [ - L,L]. \end{aligned} \right. \end{aligned}$$
(4.1)

For the fractional diffusion model, it is very difficult to find an analytical solution of problem (4.1). Thus, instead of seeking an analytical solution, we adopt a fully discrete scheme to approximate (4.1). To do so, let us consider the initial value problem associated with problem (1.1), i.e., the following problem

$$\begin{aligned} \left\{ {\begin{matrix} &{}{u_t}(x,t) + \lambda (t){\left( { - \Delta } \right) ^\alpha }u(x,t) = \ell (x,t), &{} t\in [0,T], x\in [-L,L],\\ &{}u(x,0) = \varphi (x), &{} x\in [-L,L], \\ &{}u(x,t) = 0, &{} t\in [0,T], x \notin [ - L,L]. \end{matrix}} \right. \end{aligned}$$
(4.2)

Since (4.2) is a well-posed problem, a finite difference scheme is effective for solving it numerically. We use the following simulation strategy (a code sketch of Steps 3 and 4 is given after the list):

  • Step 1: Using the finite difference scheme to solve (4.2). After this step, one may obtain the final data \(g(x):=u(x,T)\).

  • Step 2: Perturbing the final data to obtain the measured final data

    $$\begin{aligned} {g^\varepsilon }({x}) = u({x},T)\left( {1 + \varepsilon \mathrm {rand()}} \right) ,\end{aligned}$$
    (4.3)

    where the command rand() returns a random value in (0, 1).

  • Step 3: Using (3.5) to construct the Fourier transform of the regularized solution. In this step, the discrete Fourier transform is employed.

  • Step 4: Using the inverse discrete Fourier transform to obtain the regularized solution.
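To make the strategy concrete, the following Python sketch implements Steps 3 and 4 (with Step 2 indicated in the usage comment) for final data given on a uniform grid of \([-L,L]\). It evaluates the Fourier representation of the regularized solution (cf. (3.6) and the formulas in the proof of Theorem 3.2) with NumPy's FFT; the function names, the left-endpoint quadrature for the source integral, and the algebraic rewriting of the kernel are our own illustrative choices, not the authors' code.

import numpy as np

def regularize_backward(g_eps, ell, Lam, L, T, t, alpha, mu, beta, n_s=100):
    """Steps 3-4: evaluate the regularized solution at time t from (noisy)
    final data g_eps sampled on the grid x_j = -L + 2L*j/N, j = 0..N-1.
    ell : callable ell(x, t), Lam : callable Lam(t) = int_0^t lambda(s) ds."""
    N = g_eps.size
    x = np.linspace(-L, L, N, endpoint=False)
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)   # discrete Fourier variables
    a = np.abs(xi) ** (2.0 * alpha)

    # kernel e^{a(Ls - Lam(t))} / (1 + mu e^{a(Lam(T)+beta)}), rewritten so that
    # overflow at large |xi| simply drives the kernel to zero instead of inf/inf
    # (Ls = Lam(T) for the data term, Ls = Lam(s) for the source term)
    def kernel(Ls):
        return 1.0 / (np.exp(-a * (Ls - Lam(t)))
                      + mu * np.exp(a * (Lam(T) + beta - Ls + Lam(t))))

    v_hat = kernel(Lam(T)) * np.fft.fft(g_eps)
    ds = (T - t) / n_s                        # left-endpoint rule for the s-integral
    for s in np.linspace(t, T, n_s, endpoint=False):
        v_hat -= ds * kernel(Lam(s)) * np.fft.fft(ell(x, s))
    return np.real(np.fft.ifft(v_hat))        # Step 4: inverse discrete Fourier transform

# Illustrative usage with lambda(t) = 2t + 1 (so Lam(t) = t^2 + t); mu and beta are
# hypothetical values, and uT is the final data produced by Step 1:
#   g_eps = uT[:-1] * (1.0 + 1e-2 * np.random.rand(uT.size - 1))   # Step 2, cf. (4.3)
#   v0 = regularize_backward(g_eps, ell, lambda s: s**2 + s, L=10.0, T=1.0,
#                            t=0.0, alpha=0.6, mu=1e-2, beta=1e-3)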

For the finite difference scheme in Step 1, we follow the well-known scheme proposed in [21]. For other numerical methods dealing with the fractional Laplacian, the reader may refer to [1, 16, 17]. Denote by N and M the numbers of subintervals of the space and time grids, respectively. Let \(\left\{ {{x_j}} \right\} _{j = 0}^N\) be a space discretization of \([-L,L]\) with mesh size \(h=\frac{2L}{N}\) and \(\left\{ {{t_m}} \right\} _{m = 0}^{M}\) a time discretization of [0, T] with mesh size \(\rho =\frac{T}{M}\). Let \(u_j^m\) be the finite difference approximation to \(u(x_j,t_m)\). We have the following (a code sketch of the resulting scheme is given after (4.5)):

  • For \(0\le m \le M\), due to the boundary condition, one has

    $$\begin{aligned} u_0^m = u_N^m = 0. \end{aligned}$$
  • For \(m=0\), thanks to the initial condition of (4.2), for \(0 \le j \le N \), we have

    $$\begin{aligned} u_j^0 = \varphi ({x_j}).\end{aligned}$$
    (4.4)
  • For \(0 \le m \le M-1\) and \(1 \le j \le N-1\), we use the following forward scheme

    $$\begin{aligned} u_j^{m + 1}&= u_j^m - \frac{{\rho {\lambda _{m+1}}}}{{2\cos \left( {\alpha \pi } \right) {h^{2\alpha }}}}\left( {\sum \limits _{k = 0}^{j + 1} {{{( - 1)}^k}\binom{2\alpha }{k} u_{j + 1 - k}^m} + \sum \limits _{k = 0}^{N - j + 1} {{{( - 1)}^k}\binom{2\alpha }{k} u_{j - 1 + k}^m} } \right) \\&\quad + \rho \ell \left( {{x_j},{t_{m+1}}} \right) , \end{aligned}$$
    (4.5)

where \(\lambda _m=\lambda (t_m)\).
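As a concrete illustration of Step 1, a minimal Python sketch of the explicit scheme (4.5) is given below. The Grünwald-Letnikov-type weights \((-1)^k\binom{2\alpha }{k}\) are generated by the standard recurrence, and the scheme requires \(\alpha \ne 1/2\) (so that \(\cos (\alpha \pi )\ne 0\)) together with a sufficiently small time step; the function names and the vectorized inner sums are our own choices, not the authors' code.

import numpy as np

def gl_weights(two_alpha, K):
    """Weights w_k = (-1)^k * C(two_alpha, k), k = 0..K, via w_k = w_{k-1}*(1 - (two_alpha+1)/k)."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (1.0 - (two_alpha + 1.0) / k)
    return w

def solve_forward(phi, ell, lam, L, T, alpha, N=100, M=100):
    """Explicit scheme (4.5) for the forward problem (4.2); returns the grid x
    and the final data u(x_j, T)."""
    h, rho = 2.0 * L / N, T / M
    x = np.linspace(-L, L, N + 1)
    u = phi(x).astype(float)
    u[0] = u[N] = 0.0                                   # homogeneous exterior condition
    w = gl_weights(2.0 * alpha, N + 1)
    c = rho / (2.0 * np.cos(alpha * np.pi) * h ** (2.0 * alpha))
    for m in range(M):                                  # advance to t_{m+1}, m = 0..M-1
        t_next = (m + 1) * rho
        u_new = u.copy()
        for j in range(1, N):
            left = np.dot(w[: j + 2], u[j + 1 :: -1])       # sum_k w_k u_{j+1-k}
            right = np.dot(w[: N - j + 2], u[j - 1 :])      # sum_k w_k u_{j-1+k}
            u_new[j] = u[j] - c * lam(t_next) * (left + right) + rho * ell(x[j], t_next)
        u_new[0] = u_new[N] = 0.0
        u = u_new
    return x, u

For instance, with the data of Example 1 below one would call solve_forward(phi1, ell1, lambda t: 2.0*t + 1.0, L=10.0, T=1.0, alpha=0.6).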

Denote by \(U^m_{j}=u^\varepsilon ({x_j,t_m})\) the regularized solution with respect to the noisy data \(g^\varepsilon \). The following discrete error measures will be calculated:

$$\begin{aligned}{{{\mathcal {E}}}}({t_m})&= \sqrt{\frac{1}{{N + 1}}\sum \limits _{j = 0}^N {{{\left| {U_j^m - u_j^m} \right| }^2}} },\\ {{\mathcal {R}}}_{{{\mathcal {E}}}}({t_m})&= {{\sqrt{\sum \limits _{j = 0}^N {{{\left| {{U}_j^m - u_j^m} \right| }^2}} } } \bigg /{\sqrt{\sum \limits _{j = 0}^N {{{\left| {u_j^m} \right| }^2}} } }}. \end{aligned}$$

Here, \({{\mathcal {E}}}(t_m)\) and \({{\mathcal {R}}}_{{\mathcal {E}}}(t_m)\) denote the discrete root mean square error and the relative root mean square error at time \(t_m\), respectively (a short code sketch of these measures is given below). Fixing \(L = 10\), \(T = 1\), \(N = 100\), \(M = 100\), and \(\lambda (t)=2t+1\), let us consider the following examples.
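Before turning to the examples, these error measures can be computed directly from the grid values; a minimal sketch (assuming NumPy arrays U and u of the regularized and reference values at a fixed time level; the function name is our own) reads:

import numpy as np

def error_measures(U, u):
    """Discrete RMSE E(t_m) and relative RMSE R_E(t_m) at a fixed time level."""
    diff2 = np.abs(U - u) ** 2
    rmse = np.sqrt(np.mean(diff2))            # (1/(N+1)) * sum over j = 0..N
    rel = np.sqrt(np.sum(diff2)) / np.sqrt(np.sum(np.abs(u) ** 2))
    return rmse, rel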

Example 1

In this example, we work with smooth initial data. For \(\alpha = 0.6\), the initial data and source term in Example 1 are given by

$$\begin{aligned} \varphi (x) = {e^{ - \frac{{{x^4}}}{{{L^2}}}}},\ell (x,t) = {e^{{t^2} - \frac{{{x^4}}}{{{L^2}}}}}. \end{aligned}$$
(4.6)

Example 2

In this example, we work with non-smooth initial data. For \(\alpha = 0.9\), the initial data and source term in Example 2 are given by

$$\begin{aligned} \varphi (x) = \left\{ \begin{aligned}&2,&\left| x \right| \le 0.5L,\\&1,&0.5L < \left| x \right| \le L, \end{aligned} \right. \end{aligned}$$
(4.7)
$$\begin{aligned} \ell (x,t) = \left\{ \begin{aligned}&2t,&\left| x \right| \le 0.5L,\\&t,&0.5L < \left| x \right| \le L. \end{aligned} \right. \end{aligned}$$
(4.8)

With the data as in these examples, we obtain Figs. 1 and 2 and Tables 1 and 2.

Fig. 1

Example 1 with \(\alpha =0.6\): The exact solution (solid line) and regularized solution with \(\varepsilon =10^{-1}\) (asterisk line) and \(\varepsilon =10^{-2}\) (circle line) at various points of time

Fig. 2

Example 2 with \(\alpha =0.9\): The exact solution (solid line) and regularized solution with \(\varepsilon =10^{-1}\) (asterisk line) and \(\varepsilon =10^{-2}\) (circle line) at various points of time

Table 1 The table of error in Example 1
Table 2 The table of error in Example 2

5 Conclusion

In this paper, the forward and backward problems associated with the space-fractional diffusion equation are investigated. In particular, we derived some regularity results for the forward problem. Next, we provided a detailed proof of the ill-posedness of the backward problem and proposed a regularization method that achieves a Hölder-type approximation of the exact solution. As potential future work, we wish to study a more general model in which the diffusivity is a more general function, for example \(\lambda := \lambda (x,t)\) or \(\lambda := \lambda (x,t,u)\).