1 Introduction

In [13] and [20] the authors studied the following equation,

$$\begin{aligned} \frac{\partial u }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C} u = 0 \qquad (x,t)\in \varOmega \times (0,\infty ), \end{aligned}$$
(1.1)

where \(\varOmega \) is a bounded interval, i.e. \(\varOmega = (0,l),\) augmented with boundary data and initial conditions. Here \(D^\alpha _{x_C}\) is the fractional Caputo derivative with respect to the spatial variable x. The interest in such problems stems from the fact that eq. (1.1) appears in the Green-Ampt infiltration models of subsurface flows, see [22]. In fact, (1.1) is a simplification of a free boundary problem, which was studied in [21].

Interestingly, problems involving the Caputo derivative can be derived as limits of the classical diffusion. More information about this can be found in [1].

In this note we derive in Theorem 1 a formula for \({{\mathscr {E}}}\), a self-similar solution to (1.1) considered on \( {\mathbb {R}}_+\times {\mathbb {R}}_+\) (here we write \({\mathbb {R}}_+\) for \((0,\infty )\)). Studying self-similar solutions or travelling fronts is important when we wish to gain insight into the structure of solutions and in particular their long time behavior.

This function \({{\mathscr {E}}}\) is smooth and it is a classical solution to (1.1). We show that

$$\begin{aligned} {{\mathscr {E}}}(x,t) = \frac{a_0}{t^{\frac{1}{1+\alpha }}} E_{\alpha , 1+1/\alpha , 1/\alpha } \left( - \frac{x^{1+\alpha }}{(1+\alpha )t}\right) , \end{aligned}$$

where \(E_{\alpha , 1+1/\alpha , 1/\alpha }\) is the 3-parameter generalized Mittag-Leffler function, see (2.8). An extensive study of the Mittag-Leffler functions is presented in [5].

We show that \({{\mathscr {E}}}\) is positive and integrable over \({\mathbb {R}}_+\) for fixed \(t>0\). The role of the prefactor in the definition of \({{\mathscr {E}}}\) is to guarantee that the integral of \({{\mathscr {E}}}(\cdot , t)\) does not depend on \(t>0\). In fact, \({{\mathscr {E}}}\) is a fundamental solution of (1.1), namely solutions to the Dirichlet and fixed slope problems for \(\varOmega = {\mathbb {R}}_+\) may be expressed by means of a convolution of the initial condition with \({{\mathscr {E}}}\); this is the content of Theorem 2. Here, we call the boundary condition \(u_x(0,t) = f(t)\) a fixed slope problem. We do so in order to avoid confusion with the situation when we prescribe the flux. For the problem we study, the flux is not the normal derivative of u.

The problem of existence of a fundamental solution to various versions of time-fractional problems has already been addressed in the literature. We name just a few papers dealing with this issue, see [4, 9, 12, 17, 18]. The tools used there are different from ours. However, it is not surprising that generalized Mittag-Leffler functions play a role. We could justify the positivity of \({{\mathscr {E}}}\) on the grounds of the theory of Mittag-Leffler functions (actually, we do this in the Appendix). However, we would like to stress that our proof of positivity of \({{\mathscr {E}}}\) is based entirely on a PDE tool, the maximum principle; we use this idea after [19].

Let us mention that the discussion of fundamental solutions comprises just a part of the large field of studies related to the existence of solutions of equations involving fractional differential operators. There are a number of monographs devoted to this topic, among them [3, 15, 16]. Of course, we do not pretend that this list is exhaustive.

Let us stress that the justification of formulas for solutions based on the convolution requires establishing a number of properties of \({{\mathscr {E}}}(\cdot , 1)\). In particular, we show that \({{\mathscr {E}}}(\cdot , 1)\) is monotone. For this purpose we use a PDE tool, which is the maximum principle. We also show some sort of decay, namely \(x{{\mathscr {E}}}(x,1)\) is uniformly bounded, see Lemma 8. These properties of \({{\mathscr {E}}}\) seem to be of independent interest.

Once we have a convolution formula for solutions to (1.1), we may study their properties when \(\varOmega = {\mathbb {R}}_+\). A pressing question is that of uniqueness of solutions given in this way. We show that if the initial conditions are sufficiently regular, i.e. they are absolutely continuous with compact support, then solutions enjoy sufficient regularity for employing the method of testing the equation with the solution itself, see Proposition 2. This technique immediately yields uniqueness and decay of solutions.

The statement in the previous paragraph takes integrability of \(\frac{\partial u}{\partial t}\), \(\frac{\partial u}{\partial x}\), \(D^\alpha _{x_C} u\) for granted. This is indeed true for the integer derivatives if the initial condition is in \(W^{1,1}\), but \(D^\alpha _{x_C} u\) is only in \(L^\infty \), as is shown in Lemma 9. This result is not automatic due to the poor integrability of \({{\mathscr {E}}}\). This is distinctively different from the behavior of solutions to the heat equation.

Eq. (1.1) contains a parameter \(\alpha \), so does \({{\mathscr {E}}}\). Due to analyticity of \(E_{\alpha , 1+1/\alpha , 1/\alpha }\) we deduce that if \(u^\alpha \) is a solution to (1.1) on \({\mathbb {R}}_+\), then \(u^\alpha \rightarrow u^{\alpha _0}\) in \(L^1({\mathbb {R}}_+)\), when \(\alpha \) goes to \(\alpha _0\in (0,1]\). We explicitly exclude \(\alpha _0 =0\), which is due to the fact that \(E_{\alpha , 1+1/\alpha , 1/\alpha }\) has no limit on \({\mathbb {R}}_+\) when \(\alpha \rightarrow 0\). However, this case is covered by [13, Theorem 6.1] for viscosity solutions by a different method.

With this observation we may address here the issue of the speed of the signal propagation. This is a bit puzzling because for \(\alpha =1\), eq. (1.1) becomes the heat equation with the infinite speed of propagation, while for \(\alpha =0\) problem (1.1) is the transport equation for which the speed is finite.

Our numerical experiments presented in [14] show that an initial pulse moves to the left with a finite speed. The same conclusions are drawn on the basis of numerical simulations by the authors of [11] who dealt with the time-fractional diffusion-wave equation. However, our Proposition 5 stated for (1.1) with \(\varOmega ={\mathbb {R}}_+\) and the fixed slope boundary condition shows that actually the speed of the signal is infinite, i.e. the support of the solution instantly becomes equal to \([0,\infty )\). This is shown with the help of the explicit formulas employing the fundamental solution, \({{\mathscr {E}}}\) constructed here. Thus, regarding the speed of propagation, the solutions share properties of the transport and heat equations.

Having presented the content, we describe the organization of this article. In Section 2 we recall the fundamentals of the fractional calculus. Section 3 is devoted to the derivation of a formula for a self-similar solution \({{\mathscr {E}}}\); here we also study its properties collected in Theorem 1. In Section 4 we derive the formulas for the integral representation of unique solutions. In the Appendix we present the derivation of \({{\mathscr {E}}}\) which is based on the properties of the generalized Mittag-Leffler function \(E_{\beta , m, l}\) and not on series manipulation. We also show a short proof of positivity of \({{\mathscr {E}}}\), which follows from the theory of the generalized Mittag-Leffler function.

2 Preliminaries

We recall the definitions of the Caputo and Riemann-Liouville fractional derivatives. For a function \(f\in L^1(0,l)\) and \(\alpha \in (0,1)\) we introduce the fractional integration operator by

$$\begin{aligned} (I^\alpha f)(x) = \frac{1}{\varGamma (\alpha )}\int _0^x (x-z)^{\alpha -1}f(z)\,dz. \end{aligned}$$
(2.1)

For an absolutely continuous function \(u\in AC[0,L]\) we define the Caputo fractional derivative of order \(\alpha \in (0,1)\) by the following formula

$$\begin{aligned} D^\alpha _C u (x)= (I^{1-\alpha }u')(x) = \frac{1}{\varGamma (1-\alpha )}\int _0^x \frac{u'(s)}{(x-s)^\alpha }\,ds, \end{aligned}$$
(2.2)

while the Riemann-Liouville fractional derivative has the form

$$\begin{aligned} D^\alpha _{RL} u = \frac{d}{dx} (I^{1-\alpha }u) . \end{aligned}$$
(2.3)

Later, we will consistently write \(D^\alpha _C \) to denote the fractional Caputo derivative of a single variable function, while \(D^\alpha _{x_C} \) will mean a partial derivative with respect to the variable x.

We notice that if u is absolutely continuous and \(u(0)=0,\) then we have

$$\begin{aligned} \frac{d}{d x}\left( I^{1-\alpha } u\right) = I^{1-\alpha } \frac{d}{dx}u \qquad \hbox {i.e.}\qquad D^\alpha _{C}u = D^\alpha _{{RL}}u. \end{aligned}$$
(2.4)

The following formula explains the relationship between the two types of derivatives for a general function \(u\in AC[0,L]\),

$$\begin{aligned} D^\alpha _{x_{RL}}u = D^\alpha _{x_C}u + \frac{x^{-\alpha }}{\varGamma (1-\alpha )} u(0). \end{aligned}$$
(2.5)

The fractional integration is the inverse of the Caputo derivative up to a constant,

$$\begin{aligned} I^\alpha D^\alpha _{x_C}u(x) = u(x) - u(0). \end{aligned}$$
(2.6)
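To make these definitions concrete, the following minimal Python sketch (our illustration, not part of the analysis) approximates (2.2) by a midpoint rule and compares the result with the closed form \(D^\alpha _C u(x) = x^{1-\alpha }/\varGamma (2-\alpha )\) valid for \(u(x)=x\); the quadrature rule, the grid size and the test function are ad hoc choices.

```python
import math
import numpy as np

def caputo(u_prime, x, alpha, n=4000):
    """Midpoint-rule approximation of the Caputo derivative (2.2); the midpoints
    keep the weakly singular factor (x - s)**(-alpha) finite."""
    s = (np.arange(n) + 0.5) * (x / n)
    ds = x / n
    return ds * np.sum(u_prime(s) * (x - s) ** (-alpha)) / math.gamma(1 - alpha)

alpha, x = 0.5, 2.0
# For u(x) = x formula (2.2) gives D^alpha_C u(x) = x^(1-alpha)/Gamma(2-alpha).
approx = caputo(lambda s: np.ones_like(s), x, alpha)
exact = x ** (1 - alpha) / math.gamma(2 - alpha)
print(approx, exact)   # the crude rule recovers roughly two digits
```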

In our analysis we will need a more convenient representation of the operator \((D^\alpha _{x_C} u)_x\). For this purpose we need to recall:

Lemma 1

(see [13, Proposition 2.1]) Let \(u:[0,l)\rightarrow {\mathbb {R}}\) be such that \(u\in C^2(0,l)\cap C[0,l)\) and \(u'\in L^1(0,l)\). Then, \((D^\alpha _{x_C} u)_x\) exists everywhere in (0, l) and

$$\begin{aligned} \begin{aligned} (D^\alpha _{x_C} u)_x(x)&= \frac{1}{\varGamma (1-\alpha )}\left( \frac{\alpha (u(0)-u(x))+(\alpha +1)u'(x)x}{x^{\alpha +1}}\right. \\&\quad \left. +\, \alpha (\alpha +1)\int _0^x[u(x-z)-u(x)+u'(x)z]\frac{dz}{z^{\alpha +2}}\right) \end{aligned} \end{aligned}$$
(2.7)

for \(x\in (0,l)\).

With the help of this lemma we will evaluate the action of \((D^\alpha _{x_C} u)_x\) on scaled functions:

Corollary 1

Let \(u:[0,l)\rightarrow {\mathbb {R}}\) be such that \(u\in C^2(0,l)\cap C[0,l)\) and \(u'\in L^1(0,l)\). If \(\lambda >0\) and we set \(v_\lambda (x) = u(\lambda ^{\frac{1}{1+\alpha }} x)\), then \((D^\alpha _{x_C} v_\lambda )_x(x) = \lambda (D^\alpha _{y_C} u)_y(\lambda ^{\frac{1}{1+\alpha }} x)\).

Proof We use Lemma 1 to calculate \((D^\alpha _{x_C} v_\lambda )_x\),

$$\begin{aligned}&(D^\alpha _{x_C} v_\lambda )_x(x)\\&=\frac{1}{\varGamma (1-\alpha )}\left( \frac{\alpha (v_\lambda (0)-v_\lambda (x))+(\alpha +1)(v_\lambda )_x(x)x}{x^{\alpha +1}}\right. \\&\quad \left. +\, \alpha (\alpha +1)\int _0^x[v_\lambda (x-z)-v_\lambda (x)+(v_\lambda )_x(x)z]\frac{dz}{z^{\alpha +2}} \right) \\&=\frac{1}{\varGamma (1-\alpha )} \left( \frac{\alpha (u(0) - u(\lambda ^{\frac{1}{1+\alpha }}x))+(\alpha +1)\lambda ^{\frac{1}{1+\alpha }}u_y(\lambda ^{\frac{1}{1+\alpha }}x)x}{x^{\alpha +1}}\right. \\&\quad \left. + \,\alpha (\alpha +1)\int _0^x[u(\lambda ^{\frac{1}{1+\alpha }}(x-z))- u(\lambda ^{\frac{1}{1+\alpha }}x)+\lambda ^{\frac{1}{1+\alpha }}u_y(\lambda ^{\frac{1}{1+\alpha }}x)z]\frac{dz}{z^{\alpha +2}}\right) \\&=\frac{1}{\varGamma (1-\alpha )}\left( \frac{\alpha (u(0)-u(\lambda ^{\frac{1}{1+\alpha }}x))+(\alpha +1)u_y(\lambda ^{\frac{1}{1+\alpha }}x)(\lambda ^{\frac{1}{1+\alpha }}x)}{\lambda ^{-1}(\lambda ^{\frac{1}{1+\alpha }}x)^{\alpha +1}}\right. \\&\quad \left. + \,\alpha (\alpha +1)\int _0^x[u(\lambda ^{\frac{1}{1+\alpha }}(x-z))-u(\lambda ^{\frac{1}{1+\alpha }}x)+u_y(\lambda ^{\frac{1}{1+\alpha }}x)(\lambda ^{\frac{1}{1+\alpha }}z)]\frac{dz}{z^{\alpha +2}}\right) . \end{aligned}$$

Changing the variable of integration by \(\lambda ^{\frac{1}{1+\alpha }}z=\xi \) we see that \(dz/z^{\alpha +2}=\lambda d\xi /\xi ^{2+\alpha }\). Hence the last term of the right-hand side (RHS) is equal to

$$\begin{aligned} \frac{\alpha (\alpha +1)\lambda }{\varGamma (1-\alpha )} \int _0^{\lambda ^{\frac{1}{1+\alpha }}x}[u(\lambda ^{\frac{1}{1+\alpha }}x-\xi )-u(\lambda ^{\frac{1}{1+\alpha }}x)+u_y(\lambda ^{\frac{1}{1+\alpha }}x)\xi ]\frac{d\xi }{\xi ^{\alpha +2}}. \end{aligned}$$

Therefore applying Lemma 1 again yields

$$\begin{aligned} (D^\alpha _{x_C} v_\lambda )_x(x)=\lambda ^{1}(D_{y_C}^\alpha u)_y(\lambda ^{\frac{1}{1+\alpha }}x). \end{aligned}$$

\(\square \)
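For instance, for \(u(x) = x^2\) we have \(D^\alpha _C x^2 = \frac{2}{\varGamma (3-\alpha )} x^{2-\alpha }\), hence \((D^\alpha _{x_C} u)_x(x) = \frac{2(2-\alpha )}{\varGamma (3-\alpha )} x^{1-\alpha }\), and Corollary 1 can be checked directly:

$$\begin{aligned} (D^\alpha _{x_C} v_\lambda )_x(x) = \lambda ^{\frac{2}{1+\alpha }}\, \frac{2(2-\alpha )}{\varGamma (3-\alpha )}\, x^{1-\alpha } = \lambda \, \frac{2(2-\alpha )}{\varGamma (3-\alpha )}\, \bigl (\lambda ^{\frac{1}{1+\alpha }} x\bigr )^{1-\alpha } = \lambda (D^\alpha _{y_C} u)_y(\lambda ^{\frac{1}{1+\alpha }} x). \end{aligned}$$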

In our construction of the self-similar solution we will use a three-parameter generalized Mittag-Leffler function \(E_{\beta ,m,l}\). It is defined by the following series for \(z\in {\mathbb {C}}\) (see [8, formula (1.9.19)])

$$\begin{aligned} E_{\beta ,m,l}(z) = \sum _{n=0}^\infty c_n^{\beta ml} z^n, \quad \Re \beta >0, \ m \in (0,+\infty ),\ l\in {\mathbb {C}}, \ -\beta (k m + l)\not \in {\mathbb {N}}\setminus \{0\} \ \hbox {for } k = 0,1,2,\ldots , \end{aligned}$$
(2.8)

where \( \Re \beta \) denotes the real part of a complex number \(\beta \) and

$$\begin{aligned} c_n^{\beta ml} =\prod _{k=0}^{n-1} \frac{\varGamma (\beta ( k m +l)+1)}{\varGamma (\beta (k m+l+1)+1)}. \end{aligned}$$

It is worth recalling that due to [6, Theorem 1], see also [8, page 48], we know that \(E_{\beta ,m,l}\) is an entire function of order \((\Re \beta )^{-1}\) and type \(m^{-1}\), i.e. for any \(\epsilon >0\),

$$\begin{aligned} |E_{\beta ,m,l}(z)|< \exp ( (m^{-1}+ \epsilon ) |z|^{1/\beta }) \end{aligned}$$
(2.9)

holds for \(|z| \ge r_0(\epsilon ),\) where \(r_0(\epsilon )\) is sufficiently large.
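For readers who wish to experiment with \(E_{\beta ,m,l}\), here is a short Python sketch of the truncated series (2.8); the helper names and the truncation length are our ad hoc choices, and the plain floating-point summation is reliable only for moderate arguments of the alternating series.

```python
import math

def coeff(n, beta, m, l):
    """Coefficient c_n^{beta,m,l} of (2.8), via log-Gamma for numerical stability."""
    s = 0.0
    for k in range(n):
        s += math.lgamma(beta * (k * m + l) + 1) - math.lgamma(beta * (k * m + l + 1) + 1)
    return math.exp(s)

def ml3(z, beta, m, l, terms=80):
    """Truncated power series (2.8) for the generalized Mittag-Leffler function."""
    return sum(coeff(n, beta, m, l) * z ** n for n in range(terms))

alpha = 0.5
def Phi(x):
    """Phi of (2.10): E_{alpha,1+1/alpha,1/alpha}(-x^(1+alpha)/(1+alpha))."""
    return ml3(-x ** (1 + alpha) / (1 + alpha), alpha, 1 + 1 / alpha, 1 / alpha)

print(Phi(0.0))  # 1.0, since c_0 = 1 (an empty product)
print(Phi(1.0))  # a value in (0, 1); cf. Lemma 4 and (3.11) below
```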

In several places we will use the following observation concerning the Mittag-Leffler functions. Let us set,

$$\begin{aligned} \varPhi (x) = E_{\alpha , 1+ 1/\alpha , 1/\alpha } \left( - \frac{x^{1+\alpha }}{1+\alpha }\right) . \end{aligned}$$
(2.10)

We recall:

Proposition 1

(see [8, Example 4.11], [7, Theorem 4]) The function \(\varPhi \) defined in (2.10) satisfies the following fractional ordinary differential equation,

$$\begin{aligned} D_{y_C}^\alpha v(y)+\frac{1}{1+\alpha }yv(y)=0, \quad y>0, \qquad v(0) = 1. \end{aligned}$$
(2.11)

3 The fundamental solution and its properties

We will derive a self-similar solution, \({{\mathscr {E}}},\) to the following equation,

$$\begin{aligned} \frac{\partial u }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C} u = 0 \qquad (x,t)\in (0,\infty )^2. \end{aligned}$$
(3.1)

This is done in the theorem below, where we also present the basic properties of \({{\mathscr {E}}}\).

Theorem 1

There is a function \({{\mathscr {E}}}: (0,\infty )^2 \rightarrow {\mathbb {R}}\) with the following properties:

(1) \({{\mathscr {E}}}\in C^2((0,\infty )^2)\) and \({{\mathscr {E}}}\) is a self-similar solution to (3.1), i.e. it is invariant under the transformation \((x,t) \mapsto (\lambda ^{\frac{1}{1+\alpha }} x, \lambda t)\) for \(\lambda >0\).

(2) \({{\mathscr {E}}}\) is positive for all \(x,t >0\).

(3) For all \(t>0\) function \({{\mathscr {E}}}(\cdot ,t)\) is decreasing.

(4) For all \(t>0\) function \({{\mathscr {E}}}(\cdot ,t)\) is in \(L^1(0, \infty )\) and \(\int _0^\infty {{\mathscr {E}}}(x,t)\,dx=\frac{1}{2}\).

Moreover, function \({{\mathscr {E}}}\) is given by the following formula,

$$\begin{aligned} {{\mathscr {E}}}(x,t) = a_0 t^{-\frac{1}{1+\alpha }} E_{\alpha , 1+1/\alpha , 1/\alpha }\left( -\frac{x^{1+\alpha }}{ (1+\alpha )t}\right) , \end{aligned}$$
(3.2)

where

$$\begin{aligned} \frac{1}{a_0} = 2\int _0^\infty E_{\alpha , 1+1/\alpha , 1/\alpha }( -x^{1+\alpha }/ (1+\alpha ))\,dx. \end{aligned}$$
(3.3)

Remark 1

Using \(\varPhi \) defined in (2.10) we can write \({{\mathscr {E}}}\) shortly as follows,

$$\begin{aligned} {{\mathscr {E}}}(x,t) = a_0 t^{-\frac{1}{1+\alpha }} \varPhi \left( \frac{x}{t^{\frac{1}{1+\alpha }}}\right) . \end{aligned}$$

The way we stated this theorem suggests that it is sufficient to plug the formula for \({{\mathscr {E}}}\) into the equation. To some extent, this is what we do in the Appendix, where we use Proposition 1 to prove Lemma 3 below. However, we think it is more instructive to go through the process of derivation of (3.2).

Properties of \({{\mathscr {E}}}\), among them positivity or monotonicity on \((0,\infty )\), are not easy to check by inspection of the formula given by a series. This is why we employ methods typical for PDE, like the maximum principle.

Our first task, however, is to construct a self-similar solution to (1.1). We do this in a couple of steps. Here is the first observation:

Lemma 2

Let us suppose that \(u\in C^2((0,\infty )^2 )\). Then, u is a solution to (3.1) if and only if \(u_\lambda \) given by

$$\begin{aligned} u_\lambda (x,t) = u(\lambda ^{\frac{1}{1+\alpha }}x, \lambda t) \end{aligned}$$

is a solution to (3.1).

Proof

Now, it is straightforward to see that

$$\begin{aligned} (u_\lambda )_t(x,t)=\lambda ^{1}u_s(\lambda ^{\frac{1}{1+\alpha }}x,\lambda t). \end{aligned}$$

For the purpose of computing the fractional derivative \((D^\alpha _{x_C} u_\lambda )_x\) of a scaled function \(u_\lambda \) we will use Corollary 1, which applies to single variable functions. This is why we introduce \(v_\lambda \) defined as \(v_\lambda (x) = u(\lambda ^{\frac{1}{1+\alpha }}x, \lambda t)\), where t is just a fixed parameter. This leads us to the identity

$$\begin{aligned} (D^\alpha _{x_C} u_\lambda )_x(x,t) = (D^\alpha _{x_C} v_\lambda )_x (x)= \lambda ^1(D^\alpha _{y_C} u)_y(\lambda ^{\frac{1}{1+\alpha }}x, \lambda t). \end{aligned}$$

Thus, we see that u satisfies (3.1) if and only if \(u_\lambda \) fulfills

$$\begin{aligned} (u_\lambda )_t(x,t)-(D^\alpha _{x_C} u_\lambda )_x(x,t)=0 \end{aligned}$$

for all \(x>0,\ t>0\). \(\square \)

The above lemma tells us that self-similar solutions depend only on \(\xi = x t^{-\frac{1}{1+\alpha }}\). However, if we want to obtain a solution whose integral over \({\mathbb {R}}_+\) is independent of time, then we must consider \({{\mathscr {E}}}\) of the form

$$\begin{aligned} {{\mathscr {E}}}(x,t) = a_0 t^{\gamma } v( x^{1+\alpha }/ t), \end{aligned}$$
(3.4)

where v is integrable. It is easy to see that for

$$\begin{aligned} \gamma = - \frac{1}{1+\alpha } \end{aligned}$$

and \(t_2\ne t_1>0\) we have

$$\begin{aligned} \int _0^\infty {{\mathscr {E}}}(x,t_2)\,dx = \int _0^\infty {{\mathscr {E}}}(x,t_1)\,dx. \end{aligned}$$
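Indeed, substituting \(x = t^{\frac{1}{1+\alpha }}\xi \) gives

$$\begin{aligned} \int _0^\infty {{\mathscr {E}}}(x,t)\,dx = a_0 t^{\gamma }\int _0^\infty v( x^{1+\alpha }/ t)\,dx = a_0 t^{\gamma + \frac{1}{1+\alpha }}\int _0^\infty v( \xi ^{1+\alpha })\,d\xi , \end{aligned}$$

and the exponent of t vanishes precisely for the above choice of \(\gamma \).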

We are now ready to derive the form of \({{\mathscr {E}}}\). We make an informed guess that v appearing in (3.4) should be analytic, since this is the case for the classical heat equation. It turns out that this guess is correct.

Lemma 3

Let us assume that v appearing in (3.4) is analytic and \(v(0) =1\). Then,

$$\begin{aligned} v(z) = E_{\alpha , 1+1/\alpha , 1/\alpha } \left( - \frac{z}{1+\alpha }\right) , \end{aligned}$$

where \(E_{\beta ,m,l}\) is the generalized Mittag-Leffler function defined in (2.8).

Proof

Let us suppose that

$$\begin{aligned} v(z) = \sum _{n=0}^\infty c_n z^n. \end{aligned}$$

Then, inserting \({{\mathscr {E}}}\) defined by (3.4) into (3.1) yields

$$\begin{aligned} \sum _{n=0}^\infty c_n (\gamma -n) \frac{x^{(1+\alpha )n}}{t^{n-\gamma +1}} = \sum _{n=1}^\infty c_n \frac{\varGamma ((1+\alpha )n +1)[(1+\alpha )(n-1) +1]}{\varGamma ((1+\alpha )(n-1)+2)} \frac{x^{(1+\alpha )(n-1)}}{t^{n-\gamma }} , \end{aligned}$$

where we took into account that \(D^\alpha _{C} x^\beta = \frac{\varGamma (\beta +1)}{\varGamma (\beta +1-\alpha )} x^{\beta -\alpha }.\) Hence, we find the formula for \(c_n\),

$$\begin{aligned} c_{n} = \frac{(-1)^{n}}{(1+\alpha )^{n}} b_n,\qquad n\ge 1, \end{aligned}$$

where we set \(c_0=1\) and

$$\begin{aligned} b_n = \prod _{i=0}^{n-1} \frac{\varGamma (\alpha i + i +2)}{\varGamma (\alpha (i+1) + i +2)}. \end{aligned}$$
(3.5)

Hence,

$$\begin{aligned} v(z) = \sum _{n=0}^\infty (-1)^n b_n \left( \frac{z}{1+\alpha }\right) ^n. \end{aligned}$$

If we take into account the form of \(b_n\)’s, then we realize that

$$\begin{aligned} v(z) = E_{\alpha , 1+1/\alpha , 1/\alpha }\left( - \frac{z}{1+\alpha }\right) , \end{aligned}$$

where \(E_{\beta ,m,l}\) is a generalized Mittag-Leffler function defined in (2.8). Thus, we reach \({{\mathscr {E}}}\) of the form (3.2), where the multiplicative constant \(a_0\) has to be determined by other means. \(\square \)
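The identification of the \(b_n\)'s with the coefficients \(c_n^{\beta ml}\) of (2.8) for \(\beta =\alpha \), \(m= 1+1/\alpha \), \(l = 1/\alpha \) can also be cross-checked numerically; the following Python snippet (an illustration under our choice of \(\alpha \)) compares the two products.

```python
import math

alpha = 0.7

def b(n):
    """b_n of (3.5)."""
    s = 0.0
    for i in range(n):
        s += math.lgamma(alpha * i + i + 2) - math.lgamma(alpha * (i + 1) + i + 2)
    return math.exp(s)

def c(n):
    """c_n^{beta,m,l} of (2.8) with beta = alpha, m = 1 + 1/alpha, l = 1/alpha."""
    m, l = 1 + 1 / alpha, 1 / alpha
    s = 0.0
    for k in range(n):
        s += math.lgamma(alpha * (k * m + l) + 1) - math.lgamma(alpha * (k * m + l + 1) + 1)
    return math.exp(s)

for n in range(6):
    assert abs(b(n) - c(n)) <= 1e-12 * max(1.0, b(n)), n
print("b_n coincides with the Mittag-Leffler coefficients for n = 0..5")
```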

Remark 2

The argument above is based on the series manipulation. In the Appendix, we present the proof of this lemma, which is based only on the theory of the generalized Mittag-Leffler function \(E_{\beta , m, l}\). It is more elegant, but less informative.

An important step in our analysis is checking that \({{\mathscr {E}}}\) is indeed positive. The argument is based on the maximum principle for (1.1). This idea was first used in the proof of a similar result in [19]. We provide our own and extended version of the argument. We find this type of argument very important for building a toolbox for analysing properties of solutions to (1.1). However, in the Appendix we present a short proof of the same result, which is of independent interest. It is based on the theory of the three-parameter Mittag-Leffler function.

Lemma 4

The function \(\varPhi \) defined in (2.10) is positive for all \(x>0\).

Proof

Let us set

$$\begin{aligned} u(x,t) := \int _0^{xt^{-\frac{1}{1+\alpha }}} \varPhi (z) \, dz. \end{aligned}$$

We will see that

$$\begin{aligned} \left( \frac{\partial }{\partial t} - \frac{\partial }{\partial x}D^\alpha _{x_C} \right) u<0. \end{aligned}$$
(3.6)

Indeed, \(\frac{\partial u}{\partial t}(x,t) = - \frac{1}{t(1+\alpha )}y \varPhi (y)\), where \(y = x {t^{-\frac{1}{1+\alpha }}}\) and by Corollary 1 we see that \(\frac{\partial }{\partial x}D^\alpha _{x_C}u(x,t) = \frac{1}{t} \frac{\partial }{\partial y}D^\alpha _{y_C}u(y,t)\). Thus, due to

$$\begin{aligned} \frac{\partial }{\partial x}D^\alpha _{x_C} u = D^\alpha _{x_{RL}} \frac{\partial }{\partial x}u \end{aligned}$$

and (2.5) we obtain

$$\begin{aligned} \left( \frac{\partial }{\partial t} - \frac{\partial }{\partial x}D^\alpha _{x_C} \right) u(x,t)= & {} - \frac{1}{t} \left( \frac{y\varPhi (y)}{1+\alpha }+ D^\alpha _{y_{RL}}\varPhi (y)\right) \\= & {} - \frac{1}{t} \left( \frac{y\varPhi (y)}{1+\alpha }+ D^\alpha _{y_{C}}\varPhi (y)\right) - \frac{y^{-\alpha }}{t\varGamma (1-\alpha )} . \end{aligned}$$

Now, we invoke Proposition 1 to conclude that the expression in parentheses vanishes. Thus, (3.6) follows.

Let us suppose our claim is not valid and the set

$$\begin{aligned} A =\{x>0: \varPhi (x) <0\} \end{aligned}$$

is not empty. Since \(\varPhi (0) =1\), we see that \(\inf A =: x_0 >0\). Due to the continuity of \(\varPhi \) there is \(x_1> x_0\) such that

$$\begin{aligned} u(x, 1) >0\qquad \hbox {for } x\in (0,x_1). \end{aligned}$$

Let us set

$$\begin{aligned} \varOmega =\{(x,t)\in (0,\infty )^2:\ x\in (0 , x_1 t^{1/(1+\alpha )}),\ t\in (1, 2)\} . \end{aligned}$$
(3.7)

By the weak maximum principle for (3.6) in non-cylindrical regions, see [21, Lemma 8], we have

$$\begin{aligned} \sup \{ u(x,t): (x,t) \in \varOmega \} = \max \{ u(x,t): (x,t)\in \partial \varOmega \setminus ({\mathbb {R}}_+\times \{2\})\} =: M. \end{aligned}$$

However, since \(u(0, t) =0\) and \(u(x_1 t^{1/(1+\alpha )}, t)= u(x_1,1)< u(x_0, 1) \) for \(t\in [1,2]\), we see that

$$\begin{aligned} M = u(x_0, 1). \end{aligned}$$

If we had the strong maximum principle at our disposal, then we would have reached a contradiction. Since we do not, we have to continue our argument. For a positive \(\epsilon \) and \((x,t) \in {\mathbb {R}}_+ \times (1,\infty )\) we set,

$$\begin{aligned} v_\epsilon (x,t) = \epsilon c_\alpha x^{1+\alpha } + \epsilon (t -1), \end{aligned}$$

where \(c_\alpha = \frac{1}{(1+\alpha )\varGamma (1+\alpha )}.\) Due to the choice of \(c_\alpha \) we see that \(\frac{\partial v_\epsilon }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C} v_\epsilon = 0\); indeed, \(D^\alpha _{x_C} x^{1+\alpha } = \varGamma (2+\alpha ) x\), so \(\frac{\partial }{\partial x} D^\alpha _{x_C} v_\epsilon = \epsilon c_\alpha \varGamma (2+\alpha ) = \epsilon = \frac{\partial v_\epsilon }{\partial t}\). Thus, the sum \(u + v_\epsilon \) satisfies (3.6) and we may apply the weak maximum principle, see [21, Lemma 8], which yields,

$$\begin{aligned} M_\epsilon:= & {} \sup \{ (u+ v_\epsilon )(x,t): (x,t) \in \varOmega \} \nonumber \\= & {} \max \{ (u+ v_\epsilon )(x,t): (x,t)\in \partial \varOmega \setminus ({\mathbb {R}}_+\times \{2\})\}. \end{aligned}$$
(3.8)

We want to select \(\epsilon \) such that \(M_\epsilon = \max (u+v_\epsilon )(x,1)\). We notice that \((u+ v_\epsilon )(0,t)= \epsilon (t-1)\le \epsilon \) and we may restrict \(\epsilon \) so that \(\epsilon < u(x_1, 1)\). We take an even smaller \(\epsilon \) so that

$$\begin{aligned} \frac{\partial }{\partial x}(u+ v_\epsilon )(x_1,1)<0. \end{aligned}$$

Hence, there is \(x_2\in (x_0, x_1)\) such that

$$\begin{aligned} \max \{ (u+ v_\epsilon )(x,1): x \in (0, x_1)\} = (u+ v_\epsilon )(x_2,1). \end{aligned}$$

We want to guarantee that \(M_\epsilon \) equals \((u+ v_\epsilon )(x_2,1) = u(x_2, 1) + \epsilon c_\alpha x_2^{1+\alpha }.\)

We look at \(u+v_\epsilon \) on the last part of the parabolic boundary of \(\varOmega \),

$$\begin{aligned} \gamma = \{ (x_1 t^{1/(1+\alpha )},t ):\ t\in (1, 2)\}. \end{aligned}$$

For \((x,t)\in \gamma \) we have

$$\begin{aligned} (u+v_\epsilon )(x_1 t^{1/(1+\alpha )},t)= & {} u(x_1, 1) + \epsilon (c_\alpha x_1^{1+\alpha }t + t -1) \\= & {} u(x_1, 1) + \epsilon c_\alpha x_1^{1+\alpha } +(t -1)\epsilon (c_\alpha x_1^{1+\alpha } + 1). \end{aligned}$$

We see that the RHS above is greater than \((u+v_\epsilon )(x_1, 1)\) for \(t\in (1,2)\). At the same time, the left-hand side attains its maximum for \(t=2\) and we may take \(\epsilon \) so small that

$$\begin{aligned} \epsilon (c_\alpha x_1^{1+\alpha } + 1) + (u+v_\epsilon )(x_1, 1) < (u+ v_\epsilon )(x_2,1) = M_\epsilon . \end{aligned}$$

Now, we shall see that \(\sup _\varOmega (u+ v_\epsilon ) > M_\epsilon \). Let us consider points (x,t) of the following form, \((x_2 t^{1/(1+\alpha )}, t)\), \(t\in (1,2).\) Now, we compute the values of \((u+ v_\epsilon )\) there for \(t>1\). We obtain,

$$\begin{aligned} (u+ v_\epsilon )(x_2 t^{1/(1+\alpha )}, t)= & {} u(x_2, 1) + \epsilon c_\alpha x_2^{1+\alpha }t + \epsilon (t-1) \\= & {} u(x_2, 1) + \epsilon c_\alpha x_2^{1+\alpha } + \epsilon (t-1) (c_\alpha x_2^{1+\alpha } + 1) \\> & {} (u+ v_\epsilon )(x_2, 1)= M_\epsilon . \end{aligned}$$

But this inequality violates (3.8). Thus, our claim follows. \(\square \)

In fact, in the course of the proof of Lemma 4, we established the following facts:

Lemma 5

Let \(U\in C^{1+\alpha }([0,\infty ))\) and set \(u(x,t) = U(x t^{-\frac{1}{1+\alpha }})\).

(1) If u satisfies inequality (3.6) in \(\varOmega \) defined in (3.7), where \(x_1>0\) is now arbitrary, then \(u(x,1) \equiv U(x)\) cannot attain a maximum inside \((0, x_1)\).

(2) If u satisfies the inequality

$$\begin{aligned} \left( \frac{\partial }{\partial t} - \frac{\partial }{\partial x}D^\alpha _{x_C} \right) u>0 \end{aligned}$$
(3.9)

in \(\varOmega \) defined in (3.7), where \(x_1>0\) is now arbitrary, then \(u(x,1) \equiv U(x)\) cannot attain a minimum inside \((0, x_1)\).

These observations immediately imply part (3) of Theorem 1.

Lemma 6

Let us fix any \(t>0\), then the function \({\mathbb {R}}_+ \ni x\mapsto {{\mathscr {E}}}(x,t)\in {\mathbb {R}}\) is decreasing.

Proof

Since \({{\mathscr {E}}}\) is a self-similar solution, we may restrict our attention to \(t=1.\) Let us suppose our claim is false and \({{\mathscr {E}}}(\cdot ,1)\) attains a minimum at \(x_0\). We can find \(x_1>x_0\) such that \({{\mathscr {E}}}(x_1,1)> {{\mathscr {E}}}(x_0,1)\). We define \(v_\epsilon \) by the formula,

$$\begin{aligned} v_\epsilon (x,t) = {{\mathscr {E}}}(x,t) - \epsilon x t^{-\frac{1}{1+\alpha }}, \qquad (x,t)\in D= \{(x,t): t\in [1,2],\ x\in (0, x_1 t^{\frac{1}{1+\alpha }})\}. \end{aligned}$$

Now, we choose \(\epsilon >0\) sufficiently small so that \(v_\epsilon (\cdot , 1)\) attains its minimum at \(x_2\in ( x_0, x_1)\), hence \(v_\epsilon (x_2, 1) < v_\epsilon (x_1, 1).\) Moreover, for all \(\epsilon >0\) inequality (3.9) is satisfied. As a result we may apply Lemma 5 part (2) to \(v_\epsilon \) to deduce that \(v_\epsilon (\cdot , 1)\) cannot attain any minimum in \((0,x_1)\).

We observe that there is no positive \(x_0\) such that the function \(\varPhi (\cdot ) = {{\mathscr {E}}}(\cdot ,1)\) is increasing on \((0, x_0).\) If such a point existed, then \(D^\alpha _C \varPhi \ge 0\) on \((0, x_0)\), but this contradicts (2.11) due to the positivity of \(\varPhi .\) Hence, \(v_\epsilon \) cannot attain any maximum in \((0,\infty )\).

These observations imply that \(v_\epsilon (\cdot , 1)\) is decreasing. Indeed, since \(\varPhi \) is defined by a series, its inspection tells us that \(\varPhi '(x)<0\) for small \(x>0\). Hence, there is \(x_3>0\) such that \(v_\epsilon (\cdot , 1)\) is decreasing on \((0, x_3)\). If there were \(0<x_3 <x_4\) with \(v_\epsilon (x_3, 1) < v_\epsilon ( x_4, 1)\), then, taking into account that \(v_\epsilon (0, 1)> v_\epsilon (x_3,1)\), we would deduce that \(v_\epsilon (\cdot , 1)\) must attain a minimum in the interval \((0,x_4)\), but this is impossible. Hence, the claim follows.

Finally, we notice that \({{\mathscr {E}}}(x,1) = \displaystyle {\lim _{\epsilon \rightarrow 0^+}} v_\epsilon (x, 1)\), which implies monotonicity of \({{\mathscr {E}}}(\cdot , 1).\) \(\square \)

Now, we will show that \({{\mathscr {E}}}\) is integrable over the positive half-line.

Lemma 7

The function \(\varPhi \) defined in (2.10) is bounded with a bound uniform in \(\alpha \) and it is integrable over \((0,\infty )\) for each \(\alpha \in (0,1).\)

Proof

We will first show the boundedness of \(\varPhi \) by a method which is useful for further considerations. Due to Proposition 1 and Lemma 4, we notice that

$$\begin{aligned} D_{C}^\alpha \varPhi (x) = - \frac{x}{1+\alpha } \varPhi (x) <0. \end{aligned}$$

We may apply the fractional integration operator \(I^\alpha \) to both sides of the above inequality. Due to (2.6) we obtain,

$$\begin{aligned} \varPhi (x) - \varPhi (0) = I^\alpha D_C^\alpha \varPhi (x) <0. \end{aligned}$$
(3.10)

Hence,

$$\begin{aligned} \varPhi (x) \equiv E_{\alpha , 1+1/\alpha , 1/\alpha }(- \frac{x^{1+\alpha }}{1+\alpha }) < \varPhi (0) = 1. \end{aligned}$$
(3.11)

Let us stress that the estimate (3.11) is uniform in \(\alpha .\)

Now, we shall see that boundedness of \(\varPhi \) implies its integrability. For this purpose we rewrite (2.11) using (2.5) as follows,

$$\begin{aligned} \frac{1}{1+\alpha }\varPhi (x) = \frac{x^{-1-\alpha }}{\varGamma (1-\alpha )} - \frac{x^{-1}}{\varGamma (1-\alpha )}\frac{d}{dx}\int _0^x\frac{\varPhi (t)}{(x-t)^{1-\alpha }}\,dt. \end{aligned}$$

We integrate it over [1, R] and we reach,

$$\begin{aligned} \frac{\varGamma (1-\alpha )}{1+\alpha } \int _1^R \varPhi (s)\,ds \le \int _1^R \frac{dx}{x^{1+\alpha }} + \left| \int _1^R x^{-1} \frac{d}{dx}\int _0^x\frac{\varPhi (t)}{(x-t)^{1-\alpha }}\,dtdx \right| = J_1 + |J_2|. \end{aligned}$$

In the second term we integrate by parts. This yields,

$$\begin{aligned} J_2 = \int _1^R x^{-2} \int _0^x\frac{\varPhi (t)}{(x-t)^{1-\alpha }}\,dt dx + \left. x^{-1} \int _0^x\frac{\varPhi (t)}{(x-t)^{1-\alpha }}\,dt \right| _{x=1}^{x=R}. \end{aligned}$$

Now, we use (3.11) and positivity of \(\varPhi \) to see that

$$\begin{aligned} J_2 \le \frac{1 - R^{\alpha -1}}{\alpha (1-\alpha )} + \frac{R^\alpha }{\alpha R} < \frac{1}{\alpha (1-\alpha )}. \end{aligned}$$

If we combine it with an easy estimate on \(J_1\) we arrive at

$$\begin{aligned} \int _1^R \varPhi (s)\,ds \le \frac{(2-\alpha )(1+\alpha )}{\alpha (1-\alpha )\varGamma (1-\alpha )}. \end{aligned}$$

Our claim follows. \(\square \)

We notice that the estimate for the integral of \(\varPhi \) blows up at \(\alpha =0\).

We are now ready to finish the proof of Theorem 1. The derivation is performed in Lemmas 2 and 3. The properties of \(\varPhi \) were established in Lemmas 4, 6 and 7. In particular, they guarantee that the integral

$$\begin{aligned} \frac{1}{a_0} = 2\int _0^\infty E_{\alpha , 1+1/\alpha , 1/\alpha }( -x^{1+\alpha }/ (1+\alpha ))\,dx \end{aligned}$$

is finite and positive. Hence, the definition of \(a_0\) given in (3.3) is correct and \({{\mathscr {E}}}\) is well-defined with the properties we stated. \(\square \)

Remark 3

Actually, our proof shows that \({{\mathscr {E}}}(\cdot , t)\in C^{1+\alpha }([0,\infty ))\) for all \(t>0\), and this regularity is optimal. This is indeed the case, as we see if we inspect the Taylor expansion of \(E_{\alpha , 1+1/\alpha , 1/\alpha }\), see (2.8). We notice that \(c_1^{\alpha , 1+1/\alpha , 1/\alpha } \ne 0\). As a result the space regularity of \({{\mathscr {E}}}\) is the same as the space regularity of \(x^{1+\alpha }\).

4 Integral representation and properties of solutions

We constructed \({{\mathscr {E}}}\) on \({\mathbb {R}}_+^2\). In order to discuss its properties leading to a justification of the name ’fundamental solution’ we have to extend \({{\mathscr {E}}}\) to \({\mathbb {R}}\times {\mathbb {R}}_+\), without changing the notation. Actually, the function \(x^{1+\alpha }\) is naturally defined for a negative argument as \(|x|^{1+\alpha }\). We also set \({{\mathscr {E}}}\) equal to zero on \({\mathbb {R}}\times (-\infty , 0]\), so finally

$$\begin{aligned} {{\mathscr {E}}}(x,t) = {{\mathscr {E}}}(|x|,t) \chi _{{\mathbb {R}}_+}(t). \end{aligned}$$

We do not want to discuss the action of \(D^\alpha _{x_C}\) on \({\mathscr {D}}'({\mathbb {R}})\), so we will not try to show that \((\frac{\partial }{\partial t} - \frac{\partial }{\partial x}D^\alpha _{x_C}) {{\mathscr {E}}}= \delta _0\). Indeed, this requires extra consideration, because the convolution kernel in the definition of the Caputo derivative is not symmetric by itself. This topic is interesting, but outside the scope of the present paper.

Instead we will justify the representation formulas for solutions to (3.1) augmented with the initial and boundary data. In fact, we will reuse the well-known formulas derived with the help of the reflection principle for solutions to the heat equation on the half line.

Theorem 2

Let us suppose that \(g\in L^p(0,\infty ),\) where \(p\in [1,\infty )\), and g has compact support (resp. \(g\in C^0_c([0,\infty ))\)), and we set

$$\begin{aligned} {{\mathscr {E}}}_t(x) = {{\mathscr {E}}}(x,t). \end{aligned}$$

We define functions \(w_1\), \(w_2\) by the following formulas,

$$\begin{aligned} w_1(x,t)= & {} \int _0^\infty ( {{\mathscr {E}}}_t(x-y) - {{\mathscr {E}}}_t(x+y))g(y)\,dy, \end{aligned}$$
(4.1)
$$\begin{aligned} w_2(x,t)= & {} \int _0^\infty ( {{\mathscr {E}}}_t(x-y) + {{\mathscr {E}}}_t(x+y))g(y)\,dy. \end{aligned}$$
(4.2)

Then,

(a) For all \(t, R>0\) the functions \(w_1(\cdot ,t)\) and \(w_2(\cdot ,t)\) belong to \(C^{1+\alpha }([0,R])\) and they are classical solutions to

$$\begin{aligned} \begin{array}{ll}\displaystyle { \frac{\partial w }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C} w = 0} &{} x,t>0, \\ ~ &{} ~\\ w(x,0) = g(x) &{} x>0. \end{array} \end{aligned}$$
(4.3)

In addition \(w_1\) (resp. \(w_2\)) satisfies the Dirichlet (resp. fixed slope) boundary condition,

$$\begin{aligned} w(0,t) = 0,\qquad (\hbox {resp. } w_x(0,t)=0)\qquad \hbox {for }t>0. \end{aligned}$$
(4.4)

(b) The functions \(w_1\) and \(w_2\) belong to \(L^\infty ({\mathbb {R}}_+; L^p({\mathbb {R}}_+))\) if \(p<\infty \) (resp. \(w_1, w_2 \in L^\infty ({\mathbb {R}}_+; C({\mathbb {R}}_+))\), when \(g\in C^0_c([0,\infty ))\)). The initial condition is satisfied in the sense below. However, when g is continuous, we require \(g(0) =0\) in case of the Dirichlet data,

$$\begin{aligned} \lim _{t\rightarrow 0}\Vert w_1(\cdot , t) - g\Vert _{L^p} =0, \qquad (\hbox {resp. } \lim _{t\rightarrow 0}\Vert w_2(\cdot , t) - g\Vert _{L^\infty } =0). \end{aligned}$$
(4.5)

Proof

We will first check that \(w_1\) and \(w_2\) are well-defined; for this reason we begin with the first part of (b). We will rewrite \(w_1\) and \(w_2\) as a convolution of the data on \({\mathbb {R}}_+\) with \({{\mathscr {E}}}_t,\)

$$\begin{aligned} w_1(x,t) = \int _{{\mathbb {R}}} {{\mathscr {E}}}_t(x-y) {\tilde{g}}(y)\,dy, \qquad w_2(x,t) = \int _{{\mathbb {R}}} {{\mathscr {E}}}_t(x-y) \bar{g}(y)\,dy , \end{aligned}$$
(4.6)

where \({\tilde{g}}\) (resp. \({\bar{g}}\)) is an odd extension, i.e. \(\tilde{g}(-y) = - g(y)\) (resp. even extension, i.e. \({\bar{g}}(-y) = g(y)\)) for \(y>0.\) Since \({{\mathscr {E}}}_t \in L^1({\mathbb {R}})\) and \(\int _{\mathbb {R}}{{\mathscr {E}}}_t \,dx =1\), the Young inequality for convolutions implies that \({{\mathscr {E}}}_t * {\tilde{g}}, {{\mathscr {E}}}_t * {\bar{g}}\in L^p({\mathbb {R}})\), when \(p<\infty \) and

$$\begin{aligned} \Vert {{\mathscr {E}}}_t * {\tilde{g}}\Vert _{L^p({\mathbb {R}})} \le 2 \Vert g\Vert _{L^p({\mathbb {R}}_+)}, \qquad \Vert {{\mathscr {E}}}_t * {\bar{g}}\Vert _{L^p({\mathbb {R}})} \le 2\Vert g\Vert _{L^p({\mathbb {R}}_+)}. \end{aligned}$$

When g is bounded, then

$$\begin{aligned} \Vert {{\mathscr {E}}}_t * {\tilde{g}}\Vert _{L^\infty } \le \Vert g\Vert _{L^\infty }, \qquad \Vert {{\mathscr {E}}}_t * {\bar{g}}\Vert _{L^\infty } \le \Vert g\Vert _{L^\infty }. \end{aligned}$$

We conclude that \(w_1\) and \(w_2\) are well-defined.

Let us argue that \(w_1\), \(w_2\) are solutions to (4.3). We will provide some details for \(w_1\) (the proof for \(w_2\) is the same). We first notice that the kernel \({{\mathscr {E}}}_t\) is a composition of an analytic function with \(x\mapsto |x|^{1+\alpha }\), hence the convolution appearing in the definition of \(w_i\), \(i=1,2\), shares this kind of smoothness. Since \(w_1\) is a \(C^{1+\alpha }\)-function we may apply \(D^\alpha _{x_C}\) to it. We obtain,

$$\begin{aligned} D^\alpha _{x_C} w_1(x,t) = \frac{1}{\varGamma (1-\alpha )}\int _0^x \frac{dy}{(x-y)^\alpha } \int _{-\infty }^\infty \frac{\partial }{\partial y} {{\mathscr {E}}}_t (y+ s) {\tilde{g}}(s)\,ds, \end{aligned}$$

where we could interchange the integral over \({\mathbb {R}}\) with the differentiation due to the integrability of \(\frac{\partial }{\partial y} {{\mathscr {E}}}_t (y+ s) {\tilde{g}}(s)\) with respect to s for all \(y\in {\mathbb {R}}\). Now, we notice that we may invoke the Fubini Theorem to interchange the order of integrals. Thus, we conclude that the fractional integration operator \(I^{1-\alpha }\) and the integration over \({\mathbb {R}}_+\) with respect to y commute and we see,

$$\begin{aligned} D^\alpha _{x_C} w_1(x,t) = \int _{-\infty }^\infty D^\alpha _{x_C} {{\mathscr {E}}}_t (x+ y) {\tilde{g}}(y)\,dy. \end{aligned}$$

Here, a comment on the integrand is in order. Since \({{\mathscr {E}}}_t\) is defined over \({\mathbb {R}}\), its argument may be negative. However, in accordance with the definition of the Caputo derivative with respect to x, the argument x is always positive.

Now, due to regularity of \({{\mathscr {E}}}_t\) we see that

$$\begin{aligned} \frac{\partial }{\partial x} D^\alpha _{x_C} w_1= \int _{-\infty }^\infty \frac{\partial }{\partial x}D^\alpha _{x_C} {{\mathscr {E}}}_t (x+ y){\tilde{g}}(y)\,dy. \end{aligned}$$

Thus, we conclude that \(( \frac{\partial }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C}) w_1 = 0\).

Now, we check the boundary conditions. Since \(w_1(\cdot ,t)\) is continuous up to \(x=0\) for \(t>0\) we see that

$$\begin{aligned} w_1(0,t) = \int _0^\infty ( {{\mathscr {E}}}_t(-y) - {{\mathscr {E}}}_t(y))g(y)\,dy =0. \end{aligned}$$

The RHS vanishes because \({{\mathscr {E}}}_t \) is even.

The argument for \(w_2\) is similar. We use that \(\frac{\partial w_2}{\partial x}(\cdot , t)\) is continuous up to \(\{x =0\}\) for \(t>0\). For positive x we have

$$\begin{aligned}{} & {} -\frac{1}{1+\alpha }\frac{\partial w_2}{\partial x}(x,t)\\{} & {} = \int _0^\infty \left( \frac{\partial {{\mathscr {E}}}_t}{\partial x}(x-y) \hbox {sgn}\,(x-y)|x-y|^\alpha + \frac{\partial {{\mathscr {E}}}_t}{\partial x}(x+y) \hbox {sgn}\,(x+y)|x+y|^\alpha \right) g(y)\,dy. \end{aligned}$$

Thus,

$$\begin{aligned} \lim _{x\rightarrow 0^+} \frac{\partial w_2}{\partial x}(x,t)= -(1+\alpha ) \int _0^\infty \left( - \frac{\partial {{\mathscr {E}}}_t}{\partial x}(-y) y^\alpha + \frac{\partial {{\mathscr {E}}}_t}{\partial x}(y) y^\alpha \right) g(y)\,dy =0. \end{aligned}$$

Now, we turn our attention to the initial condition. We recall that (4.5) follows from the standard properties of convolution with a kernel whose integral is one. \(\square \)
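To illustrate how the reflection formulas (4.1)–(4.2) act, here is a small Python sketch evaluating them by the rectangle rule; the function names, grids and the pulse g are our ad hoc choices, and for a concrete kernel we take \(\alpha =1\), for which (3.1) is the heat equation (cf. Proposition 4 below), so the Gaussian heat kernel may serve as \({{\mathscr {E}}}_t\).

```python
import numpy as np

def reflection_solutions(E_t, g, x, y):
    """Rectangle-rule version of (4.1)-(4.2); E_t is a vectorized even kernel
    with unit integral over R, g is sampled on the uniform grid y."""
    dy = y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    K_minus = E_t(X - Y)                 # E_t(x - y)
    K_plus = E_t(X + Y)                  # E_t(x + y)
    w1 = dy * (K_minus - K_plus) @ g     # Dirichlet: w1(0, t) = 0
    w2 = dy * (K_minus + K_plus) @ g     # fixed slope: (w2)_x(0, t) = 0
    return w1, w2

# For alpha = 1 the fundamental solution is the Gaussian heat kernel.
t = 0.1
E_t = lambda z: np.exp(-z ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
y = np.linspace(0.0, 10.0, 801)
g = np.exp(-10.0 * (y - 2.0) ** 2)       # a localized pulse
w1, w2 = reflection_solutions(E_t, g, y, y)
print(w1[0])                              # 0 by the odd reflection
```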

We present here convolution formulas to solve the non-homogeneous problem. After establishing them we may say that indeed \({{\mathscr {E}}}\) is a fundamental solution, because it behaves like one.

Corollary 2

Let us suppose that \(f\in C^1([0,\infty )^2)\) and f has a compact support. We set (in case of \(w_3\) below we require that \(f(0,t) =0\) for all \(t>0\)),

$$\begin{aligned} w_3 (x,t)= & {} \int _{0}^t \int _0^\infty ({{\mathscr {E}}}(x-y, t-s) - {{\mathscr {E}}}(x+y, t-s))f(y,s)\,dyds, \\ w_4 (x,t)= & {} \int _{0}^t \int _0^\infty ({{\mathscr {E}}}(x-y, t-s) + {{\mathscr {E}}}(x+y, t-s))f(y,s)\,dyds. \end{aligned}$$

Then, for all \(R>0\) we have \(w_j\in C^{1+\alpha } ([0,R]\times (0,R])\) and \(w_j(x,\cdot )\in C^\infty (0,\infty )\) for all \(x>0\), \(j=3,4\). Moreover, \(w_3, w_4\) are solutions (in the sense explained below) to

$$\begin{aligned} \begin{array}{ll}\displaystyle { \frac{\partial w }{\partial t} - \frac{\partial }{\partial x} D^\alpha _{x_C} w = f} &{} x,t>0, \\ ~ &{} ~\\ w(x,0) = 0 &{} x>0. \end{array} \end{aligned}$$
(4.7)

In addition, \(w_3\) (resp. \(w_4\)) satisfies the Dirichlet (resp. fixed slope) boundary conditions.

Proof

We will present an argument for \(w_3\). The proof for \(w_4\) goes along the same lines.

We extend f by odd reflection, \({\tilde{f}}(-y,t) = - f(y,t)\) for \(y>0\). Then, \(w_3\) takes the following form, where we use the commutativity of the convolution,

$$\begin{aligned} w_3(x,t) = \int _{0}^t \int _{{\mathbb {R}}} {{\mathscr {E}}}(x-y, t-s) {\tilde{f}}(y,s) \,dyds= \int _{0}^t \int _{{\mathbb {R}}} {{\mathscr {E}}}(y,s) {\tilde{f}} (x-y, t-s)\,dyds. \end{aligned}$$

Due to the compactness of the support of \({\tilde{f}}\), we will be able to show that the weak derivative \(\frac{\partial w_3}{\partial t}\) exists and for all \(R>0\) it belongs to \(L^1({\mathbb {R}}_+ \times (0,R)) =: Y_R\). For any \(h>0\) we set

$$\begin{aligned} w_3^h (x,t) = \int _{0}^{t-h} \int _{{\mathbb {R}}} {{\mathscr {E}}}(y,s) {\tilde{f}} (x-y, t-s)\,dyds. \end{aligned}$$

We notice that for any \(R>0\) the function \(w_3^h\) converges to \(w_3\) in \(Y_R\). Due to the regularity of f we may compute \(\frac{\partial w_3}{\partial t} \); we have

$$\begin{aligned} \frac{\partial w_3}{\partial t} (x,t)&= \int _{{\mathbb {R}}} {{\mathscr {E}}}(y,t) {\tilde{f}} (x-y, 0)\,dy + \int _{0}^{t} \int _{{\mathbb {R}}} {{\mathscr {E}}}(y,s) \frac{\partial {\tilde{f}}}{\partial t} (x-y, t-s)\,dyds\\&= \int _{{\mathbb {R}}} {{\mathscr {E}}}(y,t) {\tilde{f}} (x-y, 0)\,dy - \int _{0}^{t} \int _{{\mathbb {R}}} {{\mathscr {E}}}(x-y,s) \frac{\partial {\tilde{f}}}{\partial s} (y,t- s)\,dyds. \end{aligned}$$

Again using the compactness of the support of f and the regularity of \({{\mathscr {E}}}\), we may interchange \(D^\alpha _{x_C}\) and the integration in the definition of \(w_3^h\), as we saw in the course of the proof of Theorem 2,

$$\begin{aligned} D^\alpha _{x_C}w^h_3(x,t) = \int _0^{t-h}\int _{\mathbb {R}}D^\alpha _{x_C} {{\mathscr {E}}}(x-y, t-s) {\tilde{f}}(y,s) \,dyds. \end{aligned}$$

Subsequently we may also apply the differential operator \(\frac{\partial }{\partial x}\) to both sides of the equality above to reach,

$$\begin{aligned} \frac{\partial }{\partial x} D^\alpha _{x_C} w^h_3(x,t)= & {} \int _0^{t-h}\int _{\mathbb {R}}\frac{\partial }{\partial x} D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y) {\tilde{f}}(y,s) \,dyds\\= & {} - \int _0^{t-h}\int _{\mathbb {R}}\frac{\partial }{\partial y} D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y) \tilde{f}(y,s) \,dyds .\nonumber \end{aligned}$$
(4.8)

Here and further on we write \({{\mathscr {E}}}_t(x)\) instead of \({{\mathscr {E}}}(x,t)\) to emphasize that t is a parameter of a function whose argument is x. This remark is particularly important when we compute the action of the fractional derivative of \(w_3\).

The last equality follows from the fact that \(\frac{\partial }{\partial x}g(x-y) = - \frac{\partial }{\partial y}g(x-y)\). We may apply integration by parts to the RHS of (4.8). In this way we obtain,

$$\begin{aligned} \frac{\partial }{\partial x} D^\alpha _{x_C} w^h_3(x,t) = \int _0^{t-h}\int _{\mathbb {R}}D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y) \frac{\partial }{\partial y}{\tilde{f}}(y,s) \,dyds . \end{aligned}$$

Passing with h to zero in the \(Y_R\)-norm we see that the weak spatial derivative of \(w_3\) exists and

$$\begin{aligned} \frac{\partial }{\partial x} D^\alpha _{x_C} w_3(x,t) = \int _0^{t}\int _{\mathbb {R}}D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y) \frac{\partial }{\partial y}{\tilde{f}}(y,s) \,dyds . \end{aligned}$$

Now, we compute \((\frac{\partial }{\partial t} -\frac{\partial }{\partial x} D^\alpha _{x_C}) w_3\). We obtain

$$\begin{aligned} \left( \frac{\partial }{\partial t} -\frac{\partial }{\partial x} D^\alpha _{x_C}\right) w_3 = \int _{{\mathbb {R}}} {{\mathscr {E}}}_t(y) {\tilde{f}} (x-y, 0)\,dy -\lim _{h \rightarrow 0^+} RHS(h), \end{aligned}$$

where

$$\begin{aligned} RHS(h)= & {} \int _{h}^{t} \int _{{\mathbb {R}}} {{\mathscr {E}}}_s(x-y) \frac{\partial {\tilde{f}}}{\partial s} (y, t-s)\,dyds \\{} & {} \quad -\int _0^{t-h}\int _{\mathbb {R}}D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y) \frac{\partial }{\partial y}{\tilde{f}}(y,s) \,dyds . \end{aligned}$$

Then, we integrate by parts to see that

$$\begin{aligned} RHS(h) =&- \int _{h}^{t} \int _{{\mathbb {R}}} \frac{\partial }{\partial s}{{\mathscr {E}}}_s(x-y) {\tilde{f}}(y, t-s)\,dyds + \int _{{\mathbb {R}}} {{\mathscr {E}}}_s(x-y) {\tilde{f}}(y,t-s) \left| _{s=h}^{s=t}\right. \\&+ \int _0^{t-h}\int _{\mathbb {R}}\frac{\partial }{\partial y} D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y){\tilde{f}}(y,s) \,dyds \\ =&\int _{{\mathbb {R}}} {{\mathscr {E}}}_t(x-y) {\tilde{f}}(y, 0)\,dy - \int _{{\mathbb {R}}} {{\mathscr {E}}}_h(x-y) {\tilde{f}}(y,t- h)\,dy \\&- \int _0^{t-h}\int _{\mathbb {R}}\frac{\partial }{\partial x} D^\alpha _{x_C} {{\mathscr {E}}}_{t-s}(x-y){\tilde{f}}(y,s) \,dyds\\&+ \int _{0}^{t-h} \int _{{\mathbb {R}}} \frac{\partial }{\partial s}{{\mathscr {E}}}_{t-s}(x-y) {\tilde{f}}(y, s)\,dyds. \end{aligned}$$

Combining these computations gives us

$$\begin{aligned} \left( \frac{\partial }{\partial t} -\frac{\partial }{\partial x} D^\alpha _{x_C}\right) w_3(x,t) = \lim _{h \rightarrow 0^+} \int _{{\mathbb {R}}} {{\mathscr {E}}}_h(x-y) {\tilde{f}}(y,t- h)\,dy = {\tilde{f}}(x,t) . \end{aligned}$$

Hence, \(w_3\) is a solution to (4.7). It is such that \(D^\alpha _{x_C} w_3\) exists in the classical sense, while the weak derivatives \(\frac{\partial w_3}{\partial t}\), \(\frac{\partial }{\partial x}D^\alpha _{x_C} w_3\) exist and belong to \(L^1({\mathbb {R}}_+\times (0,R))\) for all \(R>0\).

An argument used in the proof of Theorem 2 shows that \(w_3\) satisfies the Dirichlet boundary condition. Now, we shall investigate the initial condition. Due to the boundedness of f we see that

$$\begin{aligned} |w_3(x,t)| \le \int _0^t\int _{\mathbb {R}}{{\mathscr {E}}}_s(y) \Vert f\Vert _{L^\infty }\,dyds \le t \Vert f\Vert _{L^\infty }. \end{aligned}$$

Our claims follow. \(\square \)

We constructed solutions to (3.1) with the help of the convolution of the fundamental solution with the data. Since this case is covered neither in [13] nor in [20], we have to show uniqueness separately. The difficulty with the classical method of testing the equation with the solution is that the integrability of the derivatives (fractional and integer) of \({{\mathscr {E}}}\) is different from what one might expect. Here, we present an observation which turns out to be very useful.

Lemma 8

Let us suppose that \(\varPhi \) is given by (2.10). Then, for all \(x>0\) we have,

$$\begin{aligned} 0< x \,\varPhi (x)\le 4. \end{aligned}$$

Proof

We combine (2.11) and (3.10) to obtain

$$\begin{aligned} \varPhi (0) - \varPhi (x) = \frac{1}{\varGamma (\alpha )(1+\alpha )} \int _0^x \frac{z \varPhi (z)\, dz}{(x-z)^{1-\alpha }}. \end{aligned}$$

Taking into account the positivity of \(\varPhi \), we obtain for \(x>1\) that

$$\begin{aligned} 1 = \varPhi (0) \ge \frac{1}{\varGamma (\alpha )(1+\alpha )} \int _{x-1}^x \frac{z \varPhi (z)\, dz}{(x-z)^{1-\alpha }}. \end{aligned}$$

Since Theorem 1 (3) guarantees the monotonicity of \(\varPhi \), we obtain the estimate

$$\begin{aligned} \varGamma (\alpha )(1+\alpha )\ge (x-1)\varPhi (x) \int _{x-1}^x \frac{dz}{(x-z)^{1-\alpha }} = \frac{1}{\alpha }(x-1)\varPhi (x). \end{aligned}$$

Hence, for \(x\in [0,2]\) we have \(x\varPhi (x) \le 2\). For \(x>2\) we obtain

$$\begin{aligned} x\varPhi (x) \le \varGamma (2+\alpha ) \frac{x}{x-1} \le 4. \end{aligned}$$

\(\square \)
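A numerical spot-check of this bound is immediate with the truncated series (2.8); the snippet below (our illustration, with an ad hoc truncation) verifies \(0< x\varPhi (x)\le 4\) on a few sample points.

```python
import math

alpha = 0.5

def Phi(x, terms=60):
    """Phi of (2.10) via the truncated series (2.8); fine for moderate x."""
    z, s, c = -x ** (1 + alpha) / (1 + alpha), 0.0, 1.0
    for n in range(terms):
        s += c * z ** n
        a = (1 + alpha) * n + 2          # = alpha*(n*m + l) + 1 for our (m, l)
        c *= math.exp(math.lgamma(a) - math.lgamma(a + alpha))
    return s

# Lemma 8 predicts 0 < x * Phi(x) <= 4; spot-check on a few points.
for x in [0.5, 1.0, 2.0, 3.0]:
    v = x * Phi(x)
    assert 0.0 < v <= 4.0, (x, v)
print("bound verified on the sample grid")
```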

We will use the above lemma to show limited integrability of the derivatives of solutions constructed in Theorem 2.

Lemma 9

Let us suppose that \(g\in W^{1,1}(0,\infty )\), \(w_1\) is given by (4.1) and \(w_2\) is given by (4.2). We also assume that g in the definition of \(w_1\) satisfies \(g(0) =0\). Then,

(1) \(\frac{\partial }{\partial t} w_i (\cdot , t) \in L^1(0,\infty )\) for all \(t>0\) and \(i=1,2\);

(2) \(D^\alpha _{x_C} w_i (\cdot , t) \in L^\infty (0,\infty )\) for all \(t>0\) and \(i=1,2\);

(3) \(\frac{\partial }{\partial x} w_i (\cdot , t) \in L^1(0,\infty )\) for all \(t>0\) and \(i=1,2\).

Proof

We will use the representation formulas (4.6). The argument is conducted simultaneously for \({\tilde{g}}\) and \({\bar{g}}\). For the sake of simplicity of notation we will write here g for both \({\tilde{g}}\) and \({\bar{g}}\) and also w will denote \(w_1\) and \(w_2\).

We will check that (1) holds. Let us compute \(\frac{\partial }{\partial t} w\); we will express it in terms of \(\varPhi \) introduced in (2.10),

$$\begin{aligned} \frac{\partial }{\partial t} w(x,t)= & {} - \frac{a_0}{(1+\alpha ) t^{1+ \frac{1}{1+\alpha }}} \int _{-\infty }^\infty \varPhi \left( \frac{x-y}{t^{\frac{1}{1+\alpha }}}\right) g(y)\,dy\\{} & {} - \frac{a_0}{(1+\alpha ) t^{1+\frac{2}{1+\alpha }}} \int _{-\infty }^\infty \frac{d\varPhi }{d\xi } \left( \frac{x-y}{t^{\frac{1}{1+\alpha }}}\right) g(y)\,dy. \end{aligned}$$

The RHS above is well-defined because g has a compact support and \(\varPhi \) is a \(C^{1+\alpha }\)-function. Since g belongs to \(W^{1,1}({\mathbb {R}}_+)\) we may integrate the last term by parts. Here, in the case of the odd extension of g we use \(g(0)=0\).

Finally, we reach

$$\begin{aligned} \frac{\partial }{\partial t} w(x,t) = - \frac{1}{(1+\alpha )t} \int _0^\infty {{\mathscr {E}}}_t(x-y)g(y)\,dy - \frac{1}{(1+\alpha ) t^{1+\frac{1}{1+\alpha }}} \int _0^\infty {{\mathscr {E}}}_t(x-y) g'(y)\,dy. \end{aligned}$$

The integrability of \({{\mathscr {E}}}\), g and \(g'\) implies claim (1) for w.

We are going to establish part (2). We have already seen that \(D^\alpha _{x_C}\) commutes with integration, so we have,

$$\begin{aligned} D^\alpha _{x_C} w(x,t) = \int _{-\infty }^\infty D^\alpha _{x_C} {{\mathscr {E}}}_t(x-y) g(y)\, dy. \end{aligned}$$

Since \({{\mathscr {E}}}_t(x) = \frac{a_0}{t^{\frac{1}{1+\alpha }}} \varPhi (x t^{-\frac{1}{1+\alpha }})\), we have to calculate the Caputo derivative of a scaled function. Let us suppose that f is absolutely continuous on \([0,\infty )\). For \(\lambda >0\) we set \(f_\lambda (x) = f(\lambda x)\). We compute \(D^\alpha _{x_C}f_\lambda \),

$$\begin{aligned} D^\alpha _{x_C}f_\lambda (x) = \frac{1}{\varGamma (1-\alpha )}\int _0^x \frac{df_\lambda }{ds}(s)(x-s)^{-\alpha }\,ds = \frac{1}{\varGamma (1-\alpha )}\int _0^x \lambda \frac{f'(\lambda s)}{(x-s)^\alpha }\,ds. \end{aligned}$$

After changing the variables \(\lambda s = z\) we obtain,

$$\begin{aligned} D^\alpha _{x_C}f_\lambda (x) = \frac{\lambda ^\alpha }{\varGamma (1-\alpha )} \int _0^{\lambda x} \frac{ f'(z)}{(\lambda x - z)^\alpha }\, dz = \lambda ^\alpha D^\alpha _{y_C}f(\lambda x). \end{aligned}$$
(4.9)
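As a quick sanity check of (4.9), take \(f(x) = x\), for which \(D^\alpha _{y_C} f(y) = y^{1-\alpha }/\varGamma (2-\alpha )\); then

$$\begin{aligned} D^\alpha _{x_C}f_\lambda (x) = \lambda \frac{x^{1-\alpha }}{\varGamma (2-\alpha )} = \lambda ^\alpha \frac{(\lambda x)^{1-\alpha }}{\varGamma (2-\alpha )} = \lambda ^\alpha (D^\alpha _{y_C}f)(\lambda x). \end{aligned}$$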

Taking into account (4.9) yields,

$$\begin{aligned} D^\alpha _{x_C} w(x,t) = \frac{a_0 }{t^{\frac{2}{1+\alpha }}} \int _{-\infty }^\infty D^\alpha _{\xi _C} \varPhi ((x-y)t^{-\frac{1}{1+\alpha }}) g(y)\, dy. \end{aligned}$$

Now, due to (2.11) and Lemma 8 we conclude that

$$\begin{aligned} |D^\alpha _{x_C} w(x,t)| = \left| a_0 t^{ - \frac{1}{1+\alpha }} \int _{-\infty }^\infty \frac{(x - y)}{t^{\frac{1}{1+\alpha }}} \varPhi \left( \frac{(x - y)}{t^{\frac{1}{1+\alpha }}}\right) g(y)\, dy \right| \le 4 a_0 t^{ - \frac{1}{1+\alpha }} \Vert g\Vert _{L^1({\mathbb {R}})}. \end{aligned}$$

Part (2) follows.

Part (3) is established along the lines of the proof of part (1). Computing the spatial derivative of w, we see

$$\begin{aligned} \frac{\partial }{\partial x} w(x,t) = \int _{-\infty }^\infty \frac{\partial }{\partial x} {{\mathscr {E}}}_t(x+y) g(y)\, dy = - \int _{-\infty }^\infty {{\mathscr {E}}}_t(x+y) g'(y)\, dy, \end{aligned}$$

where we used the boundedness of the support of g. \(\square \)

We would like to state our uniqueness result. For this purpose we define a class of functions, which we find suitable,

$$\begin{aligned} X =&\,C([0,\infty )^2)\cap C^{1+\alpha }([0,\infty )\times (0,\infty )) \cap \{ u\in L^\infty ({\mathbb {R}}_+; L^2({\mathbb {R}}_+)):\\&\hbox {for a.e. } t>0\ u_t(\cdot , t), u_x(\cdot , t)\in L^1({\mathbb {R}}_+), \ u(\cdot , t), \ D^\alpha _{x_C}u(\cdot , t) \in L^\infty ({\mathbb {R}}_+)\}. \end{aligned}$$

The virtue of this definition is that Lemma 9 guarantees that solutions constructed with the help of the convolution formula will belong to this class.

Proposition 2

If w is a solution to (4.3) with either Dirichlet or fixed slope boundary data (4.4) and \(w\in X\), then w is unique.

Proof

We take the difference w of two solutions \(u_1\) and \(u_2\) from X, multiply the equation satisfied by w by w itself and integrate over \({\mathbb {R}}_+\). The definition of the class X permits us to write

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \Vert w(t)\Vert ^2_{L^2} =\langle w_t, w \rangle \equiv \int _0^\infty \frac{\partial }{\partial x} D^\alpha _{x_C}w(x,t) w(x,t)\,dx. \end{aligned}$$

We want to integrate by parts; however, performing this operation requires some preparation. Since the integrand on the RHS is integrable, for any increasing sequence \(R_n\) converging to infinity we have,

$$\begin{aligned} \int _0^\infty \frac{\partial }{\partial x} D^\alpha _{x_C}w(x,t) w(x,t)\,dx = \lim _{n\rightarrow \infty } \int _0^{R_n} \frac{\partial }{\partial x} D^\alpha _{x_C}w(x,t) w(x,t)\,dx. \end{aligned}$$

Since \(w(\cdot ,t)\) is continuous on \({\mathbb {R}}_+\) and belongs to \(L^2\), there exists a sequence \(R_n\) converging to infinity, such that \(w(R_n,t) D^\alpha _{x_C} w(R_n,t) \) goes to zero, when \(n\rightarrow \infty .\) Thus, after integration by parts over \([0,R_n]\) the RHS above takes the following form,

$$\begin{aligned} \lim _{n\rightarrow \infty } \int _0^{R_n} \frac{\partial }{\partial x} D^\alpha _{x_C}w(x,t) w(x,t)\,dx = -\lim _{n\rightarrow \infty } \int _0^{R_n} D^\alpha _{x_C}w(x,t)\frac{\partial }{\partial x} w(x,t)\,dx, \end{aligned}$$

where we also take into account the zero Dirichlet data at \(x=0\) or the vanishing of \(D^\alpha _{x_C} w(0)\). In order to estimate the RHS we recall that [10, Proposition 6.10] implies that

$$\begin{aligned} \int _0^{R_n} D^\alpha _{x_C}w(x,t)\frac{\partial }{\partial x} w(x,t)\,dx \ge 0. \end{aligned}$$

See formula (5) in the proof of [20, Theorem 1] for more details. As a result, we conclude that

$$\begin{aligned} \frac{d}{dt} \Vert w(t)\Vert ^2_{L^2} \le 0. \end{aligned}$$

Hence, for all \(t>0\) we have \(\Vert w(t)\Vert _{L^2}^2 \le \Vert w(0)\Vert _{L^2}^2 =0\). \(\square \)

It is interesting to check when a solution belongs to the class X. We do not offer a full answer; however, Lemma 9 gives us a hint. We note:

Corollary 3

Let us suppose that f is in \(C_c({\mathbb {R}}_+^2)\). Then,

(1) If g is in \(W^{1,1}({\mathbb {R}}_+)\) with bounded support and \(g(0) =0\), then \(w_1 + w_3\) is the unique solution to (4.7\(_1\)) with initial condition (4.3\({}_2\)) and boundary condition (4.4\({}_1\)).

(2) If g is in \(W^{1,1}({\mathbb {R}}_+)\) with bounded support, then \(w_2 + w_4\) is the unique solution to (4.7\(_1\)) with initial condition (4.3\({}_2\)) and boundary condition (4.4\({}_2\)).

Proof

Lemma 9 shows that indeed \(w_1\) and \(w_2\) are in class X. The calculations we performed in the course of proof of Theorem 2 show that \(w_3\), \(w_4\) also belong to X. Then, we use Proposition 2 to finish the proof. \(\square \)

One may ask about the possibility of using the maximum principle to deduce uniqueness of regular, i.e. sufficiently differentiable, solutions. This is indeed possible for the Dirichlet problem, see Corollary 4 below. However, in the case of fixed slope data such an argument does not seem to be known. The argument applicable to solutions of the heat equation, see [2, Section 5.2], does not work directly for fractional equations.

Corollary 4

If \(u_1\), \(u_2 \in C([0,\infty )^2)\cap C^2((0,\infty )^2)\cap L^\infty ((0,\infty )^2)\) are solutions to (4.3) and (4.4)\(_1\), then \(u_1 = u_2.\)

Proof

Step 1. We first show that if \(u= u_1\) or \(u= u_2\), then

$$\begin{aligned} \sup _{{\mathbb {R}}_+\times (0,T)} u = \sup _{{\mathbb {R}}_+} u(x,0) . \end{aligned}$$
(4.10)

For this purpose we define \(\varOmega _T = \{(x,t):\ t\in (0,T),\ x\in (0, x_0+t)\}.\) For a given \(\epsilon >0,\) we set \(x_0 = (\frac{\varGamma (2+\alpha )}{\epsilon }\Vert g\Vert _{L^\infty })^{1/(1+\alpha )}\). Now, the weak maximum principle, see [21, Lemma 8], for

$$\begin{aligned} v_\epsilon (x,t) = u(x,t) - \epsilon (\varGamma (2+\alpha ) t + x^{1+\alpha }) \end{aligned}$$

implies that

$$\begin{aligned} \sup _{\varOmega _T} v_\epsilon (x,t) = \max \left\{ \max _{t\in [0,T]} v_\epsilon (0,t), \max _{x\in [0,x_0]} v_\epsilon (x,0), \max _{t\in [0,T]} v_\epsilon (x_0+t, t)\right\} . \end{aligned}$$

However, by the choice of \(x_0\), the function \(v_\epsilon \) attains on \(\{(x_0+t, t):\ t\in [0,T]\}\) values smaller than \(\sup _{x\in [0,x_0]} v_\epsilon (x,0)\) and \(\sup _{{\mathbb {R}}_+\times (0,T)} u - \frac{1}{2} \Vert u\Vert _{L^\infty }\). As a result, since \(g(0) =0 \), we see that

$$\begin{aligned} \sup _{\varOmega _T} v_\epsilon (x,t) = \max _{x\in [0,x_0]} v_\epsilon (x,0). \end{aligned}$$

Now, we pass to the limit with \(\epsilon \rightarrow 0.\) Hence, (4.10) follows.

Step 2. Replacing u (resp. g) with \(-u\) (resp. \(-g\)) we infer from Step 1 that

$$\begin{aligned} \inf _{{\mathbb {R}}_+\times (0,T)} u = \inf _{{\mathbb {R}}_+} u(x,0) . \end{aligned}$$

Step 3. Let us now set \(u= u_2 - u_1\). Steps 1 and 2 applied to u yield that \(u=0.\) \(\square \)

Having established an integral representation of solutions, we may draw conclusions about their asymptotic behavior. Here we note a decay property.

Proposition 3

Let us suppose that the assumptions of the uniqueness theorem, Proposition 2, hold and \(g\in W^{1,1}({\mathbb {R}}_+)\) has bounded support. If u is the unique solution to (4.3) corresponding to g, then

$$\begin{aligned} \sup _{x\in {\mathbb {R}}_+} |u(x, t)| \le C t^{-1/(1+\alpha )} \Vert g\Vert _{L^1} \end{aligned}$$

Proof

We use the representation formulas (4.6) and the boundedness of \({{\mathscr {E}}}\): since \(0<\varPhi \le 1\), we have \(\Vert {{\mathscr {E}}}_t\Vert _{L^\infty } \le a_0 t^{-1/(1+\alpha )}\), hence \(|u(x, t)| \le 2 a_0 t^{-1/(1+\alpha )} \Vert g\Vert _{L^1}\). \(\square \)

We proved in [13] that viscosity solutions depend continuously upon \(\alpha \in [0,1].\) We can use the representation formula to establish the same result. Indeed, we can show:

Proposition 4

If \(w^\alpha _i\), \(i=1,2\), denote the functions given by (4.1) and (4.2) for the fundamental solution corresponding to the order \(\alpha \), then for any \(\alpha _0\in (0,1]\)

$$\begin{aligned} \lim _{\alpha \rightarrow \alpha _0} w^\alpha _i = w^{\alpha _0}_i, \qquad i =1,2. \end{aligned}$$

In particular, when \(\alpha _0=1\), then \(w^{\alpha _0}_i\) is the solution to the heat equation.

Proof

We notice that due to (2.9) for any \(R>0\) the family of the Mittag-Leffler functions \(E_{\alpha , 1+1/\alpha , 1/\alpha }\) is bounded on the ball \(\{z\in {\mathbb {C}}: |z|\le R\}\). Hence, due to the Montel Theorem, for each compact set \(K\subset {\mathbb {C}}\) this family converges uniformly after extracting a subsequence. However, the Cauchy integral formula for the Taylor coefficients, combined with the continuity of the coefficients \(c_n^{\beta m l}\) with respect to \(\alpha \), implies that the limit must be equal to \(E_{\alpha _0,1+1/\alpha _0,1/\alpha _0}\); hence the full family converges to \(E_{\alpha _0,1+1/\alpha _0,1/\alpha _0}\).

We may pass to the limit under the integral sign due to the compact support of g in formulas (4.1) and (4.2). \(\square \)

Remark 4

This proof breaks down if we try to pass to the limit as \(\alpha \) goes to 0. In this case \(E_{\alpha , 1+1/\alpha , 1/\alpha }\) converges to \(\frac{1}{1-z}\) for \(|z|<1.\) In addition, the convergence result of [13] does not apply here, because it was established on a finite interval; we have not proved that the solutions we constructed are viscosity solutions either.

The observation made in the proposition above has somewhat surprising consequences. It suggests that for small \(\alpha \) we should see phenomena typical of hyperbolic problems.

We could relate this observation to the behavior of the discretization scheme. We presented in [14] the \(\frac{1}{2}\)-shifted Grünwald approximation. We write out the scheme, for the Dirichlet data, in terms of the Grünwald weights. The approximation of \(u(i\varDelta x, k\varDelta t)\) takes the following form,

$$\begin{aligned} \begin{aligned}&u_0^{k+1}=u_0^k, \qquad u_n^{k+1}=u_n^k, \\&u_i^{k+1}=\sum _{j=0}^{i-1}{\beta (g_{i+1-j}-g_{i-j}) u_{j}^k} + \left( 1+\beta (g_1-g_0) \right) u_i^k +\beta g_0 u_{i+1}^k, \end{aligned} \end{aligned}$$
(4.11)

where \(\beta \) and the Grünwald weights are given by

$$\begin{aligned} \beta = \frac{\varDelta t}{(\varDelta x)^{1+\alpha }},\quad g_0 =1 , \quad g_i =\frac{i-1-\alpha }{i}g_{i-1},\quad i =1,2, \ldots . \end{aligned}$$
(4.12)

We see that when \(\alpha \rightarrow 0\), then (4.11)–(4.12) converge to an explicit finite difference scheme for the transport equation. This suggests a finite speed of propagation. This is supported by our simulations presented there, which showed an initial pulse moving to the left.

These are two sides of the same coin. For \(\alpha =1\), eq. (3.1) becomes the heat equation, for which the speed of propagation is infinite. It means that if the initial perturbation is non-negative and has a compact support, then solutions to (3.1) will be positive everywhere for \(t>0\). Interestingly, when \(\alpha \rightarrow 1\), then (4.11)–(4.12) converge to an explicit finite difference scheme for the heat equation. We also noticed the smearing-out effect, see Fig. 2 in [14].
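To illustrate how (4.11)–(4.12) operate, we include a minimal Python sketch (our own illustration, not the code used in [14]); the grid, the initial pulse and the value of \(\beta \) are ad hoc choices, and the stability of the explicit scheme is not analysed here.

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Grünwald weights (4.12): g_0 = 1, g_i = ((i - 1 - alpha)/i) g_{i-1}."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for i in range(1, n + 1):
        g[i] = (i - 1 - alpha) / i * g[i - 1]
    return g

def step(u, alpha, beta):
    """One explicit step of (4.11); u[0], u[-1] stay fixed, beta = dt/dx**(1+alpha)."""
    n = len(u) - 1
    g = grunwald_weights(alpha, n)
    new = u.copy()
    for i in range(1, n):
        s = sum((g[i + 1 - j] - g[i - j]) * u[j] for j in range(i))
        new[i] = beta * s + (1.0 + beta * (g[1] - g[0])) * u[i] + beta * g[0] * u[i + 1]
    return new

alpha, dx, beta = 0.3, 0.05, 0.4          # beta kept small for stability
x = np.arange(0.0, 10.0 + dx, dx)
u = np.exp(-10.0 * (x - 5.0) ** 2)        # an initial pulse
for _ in range(200):
    u = step(u, alpha, beta)
# For alpha -> 0 only g_0 survives and the update degenerates to the upwind
# scheme (1 - beta) u_i + beta u_{i+1} for the transport equation, while for
# alpha -> 1 it becomes the classical explicit scheme for the heat equation.
print(x[np.argmax(u)])                    # the maximum has moved to the left
```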

Actually, we can show:

Proposition 5

For all \(\alpha \in (0,1]\) the speed of signal propagation for solutions to (3.1) with initial condition \(u(x,0)= f(x)\) and the zero fixed slope data is infinite, i.e. if \(f\ge 0\), \(f\ne 0\) and \(\hbox {supp}\,{f}\subset (0,R)\), where \(R>0\), then for all \(x,t>0\) we have

$$\begin{aligned} w_2(x,t)>0. \end{aligned}$$

Proof

This is a consequence of the definition of \(w_2\) and positivity of \({{\mathscr {E}}}\) and f. \(\square \)

We have chosen the fixed slope data because of the simplicity of the formula for a solution.