1 Introduction

The occurrence of singularities in nonlinear evolution problems is a source of intriguing questions, deeply related to both mathematical and physical issues. The basic example of a PDE evolution leading to shock formation is given by the so-called Burgers’ equation [actually introduced by Airy (1845)], which represents a simple model for studying the interaction between nonlinear and dissipative phenomena. Moreover, this equation exhibits the basic nonlinear mechanism shared by the more involved nonlinearities inherent to the Euler and Navier–Stokes equations (Kiselev and Šverák 2014). In exploring a possible scenario for singularity formation in nonlocal evolution problems, and continuing a line of research pursued in Coclite et al. (2019) from a different perspective, here we investigate the effect of a nonlocal-in-time modification of Burgers’ equation on singularity creation.

Besides their interest from the purely mathematical point of view, nonlocal operators in the time variable find a number of concrete applications in many emerging fields of research such as, for instance, anomalous transport problems (see Li and Wang 2003), heat flow through ramified media (see Arkhincheev and Baskin 1991) and the theory of viscoelastic fluids [see Section 10.2 in Podlubny (1999) and the references therein]. Specifically, we will also present here a concrete model from job market analysis which naturally leads to a fractional Burgers’ equation. See also Chapter 1 in Carbotti et al. (2019) for several explicit motivations for fractional derivative problems.

Focusing on the case of inviscid fluid mechanics, we recall that in the classical Burgers’ equation explicit examples show the possible formation of singularities in finite time, see Bressan (2000). In particular, an initial condition with unitary slope leads to a singularity at a unitary time.

The goal of this paper is to study whether a similar phenomenon persists in nonlocal Burgers’ equations with a time-fractional derivative. That is, we investigate how a memory effect in the equation affects the singularity formation.

Our main results are the following:

  • The memory effect does not prevent singularity formation.

  • For initial data with unitary slopes, the blowup time can be explicitly estimated from above, in a way that is uniform with respect to the memory effect (namely, it is not possible to slow down indefinitely the singularity formation using only memory effects).

  • Explicit bounds from below of the blowup times are also possible.

The precise mathematical setting in which we work is the following. First of all, to describe memory effects, we make use of the left Caputo derivative of order \(\alpha \in (0,1)\) with initial time \(t_0\) for \(t\in (t_0,+\infty )\), defined by

$$\begin{aligned} {}^C \! D^\alpha _{t_0,+} f(t):= \frac{1}{\Gamma (1-\alpha )} \int _{t_0}^t \frac{\dot{f}(\tau )}{(t-\tau )^\alpha }\,\mathrm{d}\tau , \end{aligned}$$
(1.1)

where \(\Gamma \) is the Euler Gamma function.
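For readers who wish to experiment numerically with (1.1), the Caputo derivative can be approximated by the classical L1 scheme, in which \(\dot f\) is replaced by difference quotients and the weakly singular kernel is integrated exactly on each mesh interval. The following sketch (an illustration only, not part of the arguments of this paper) also notes that for \(f(t)=t\) one has \({}^C\!D^\alpha_{0,+}t=t^{1-\alpha}/\Gamma(2-\alpha)\), on which the scheme is exact up to rounding:

```python
import math

def caputo_l1(f, alpha, t, n=400):
    # L1 approximation of the left Caputo derivative (1.1) with t_0 = 0:
    # f is interpolated piecewise linearly on a uniform mesh and the
    # weakly singular kernel (t - tau)^(-alpha) is integrated exactly.
    h = t / n
    total = 0.0
    for k in range(n):
        df = f((k + 1) * h) - f(k * h)  # increment of f on the interval
        w = (t - k * h) ** (1 - alpha) - (t - (k + 1) * h) ** (1 - alpha)
        total += (df / h) * w
    return total / ((1 - alpha) * math.gamma(1 - alpha))  # = total / Gamma(2 - alpha)
```

The mesh size `n` is an arbitrary illustrative choice; for linear `f` the weights telescope and the answer is exact.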

In this framework, we consider the time-fractional Burgers’ equation driven by the left Caputo derivative, given by

$$\begin{aligned} {\left\{ \begin{array}{ll} {}^C \! D^\alpha _{0,+} u(x,t) + u(x,t)\,\partial _x u(x,t)=0 &{} \text{ for } \text{ all } x\in {\mathbb {R}}\hbox { and } t\in (0,T_\star ),\\ u(x,0)=u_0(x). \end{array}\right. } \end{aligned}$$
(1.2)

In the recent literature, various types of fractional versions of the classical Burgers’ equation were taken into account from different perspectives, see, e.g., Miškinis (2002), Inc (2008), Harris and Garra (2013), Wu and Baleanu (2013), Esen and Tasbozan (2016), Saad et al. (2017), Yokuş and Kaya (2017) and the references therein. (In this paper, we also propose a simple motivation for Eq. (1.2) in Sect. 5.) When \(\alpha =1\), Eq. (1.2) reduces to the classical inviscid Burgers’ equation

$$\begin{aligned} \partial _t u(x,t) + u(x,t)\,\partial _x u(x,t)=0 . \end{aligned}$$
(1.3)

Remark 1.1

We notice that examples of solutions to the classical Burgers’ equation exhibiting instantaneous and spontaneous formation of singularities work well also in the present case. Indeed, the aim of our study lies in understanding, through quantitative estimates, how the finite-time creation of singularities is affected by the presence of a fractional-in-time derivative.

We prove that the time-fractional Burgers’ equation driven by the left Caputo derivative may develop singularities in finite time, according to the following result:

Theorem 1.2

There exist a time \(T_\star >0\), a function \(u_0\in C^\infty ({\mathbb {R}})\), a solution \(u:{\mathbb {R}}\times [0,T_\star )\rightarrow {\mathbb {R}}\) of the time-fractional Burgers’ equation in (1.2) and a sequence \(t_n\nearrow T_\star \) as \(n\rightarrow +\infty \) such that

$$\begin{aligned} \lim _{n\rightarrow +\infty } u(x,t_n)={\left\{ \begin{array}{ll} +\infty &{} {\hbox { if }} x\in (-\infty ,0),\\ -\infty &{} {\hbox { if }} x\in (0,+\infty ).\end{array}\right. } \end{aligned}$$

Remark 1.3

The function u in Theorem 1.2 will be constructed using the separation of variables method, by taking

$$\begin{aligned} u(x,t):=-x\,v(t), \end{aligned}$$
(1.4)

where v is the solution of the time-fractional equation

$$\begin{aligned} {\left\{ \begin{array}{ll} {}^C \! D^\alpha _{0,+} v(t) = v^2(t) &{} { \text{ for } }t\in (0,T_\star ),\\ v(0)=1. \end{array}\right. } \end{aligned}$$
(1.5)

Interestingly, this strategy is compatible with the classical case \(\alpha =1\). Indeed, for \(\alpha =1\), (1.5) boils down to

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{v}(t) = v^2(t) &{} { \text{ for } }t\in (0,T_\star ),\\ v(0)=1, \end{array}\right. } \end{aligned}$$
(1.6)

and it can be checked directly that (1.6) possesses the explicit singular solution

$$\begin{aligned} v(t)=\frac{1}{1-t}. \end{aligned}$$
(1.7)

When \(\alpha =1\), the function in (1.7) can be used, by separation of variables, to construct a singular solution of the classical Burgers’ Equation (1.3), by considering the function

$$\begin{aligned} u(x,t)=-\frac{x}{1-t}. \end{aligned}$$
(1.8)

Indeed, the function in (1.8) solves (1.3) and diverges at time 1. In this sense, both the fractional and the classical cases share the common feature of allowing the construction of singular solutions by multiplying by x a singular solution in t (and the singular solution in t corresponds to Eq. (1.6) in the classical case, and to Eq. (1.5) in the fractional case).
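As a quick numerical double check (purely illustrative, not needed for the arguments above), one can verify by central finite differences that the function in (1.8) makes the left-hand side of (1.3) vanish:

```python
def u(x, t):
    # Explicit singular solution (1.8) of the classical Burgers' equation
    return -x / (1.0 - t)

def burgers_residual(x, t, h=1e-6):
    # Central-difference approximation of u_t + u u_x from (1.3)
    u_t = (u(x, t + h) - u(x, t - h)) / (2.0 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2.0 * h)
    return u_t + u(x, t) * u_x
```

The residual is zero up to discretization error at any \((x,t)\) with \(t<1\), while the solution itself diverges as \(t\nearrow 1\).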

We also point out that in the classical case the blowup time \(T_\star \) of (1.8) is exactly 1: In this sense, we will see that our fractional construction in Theorem 1.2 recovers the classical case in the limit \(\alpha \nearrow 1\) also in terms of blowup time, as will be discussed in Remarks 1.5 and 1.7.

We also observe that it is possible to give an explicit upper bound on the blowup time for the fractional solution (1.4) in Theorem 1.2, as detailed in the following result:

Theorem 1.4

If \(T_\star \) is the blowup time found in Theorem 1.2, we have that

$$\begin{aligned} T_\star \leqslant \left( \frac{1}{\Gamma (2-\alpha )}\right) ^{1/\alpha }. \end{aligned}$$
(1.9)

In particular, for all \(\alpha \in (0,1)\),

$$\begin{aligned} T_\star \leqslant e^{1-\gamma }=1.52620511\dots , \end{aligned}$$
(1.10)

where \(\gamma \) is the Euler–Mascheroni constant.

Remark 1.5

One can compare the general estimate in (1.10), valid for all \(\alpha \in (0,1)\), with the blowup time for the classical solution in (1.8), in which \(T_\star =1\). Indeed, we point out that the right-hand side of (1.9) approaches 1 as \(\alpha \nearrow 1\). Hence, in view of Remark 1.3, we have that the bound in (1.9) is optimal when \(\alpha \nearrow 1\).
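As a numerical illustration of Remark 1.5 (a sanity check only, with arbitrarily chosen sample values of \(\alpha\)), one can tabulate the right-hand side of (1.9) and confirm its monotonicity in \(\alpha\) and the uniform bound (1.10):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def b(alpha):
    # Right-hand side of (1.9): the upper bound on the blowup time
    return (1.0 / math.gamma(2.0 - alpha)) ** (1.0 / alpha)

alphas = [0.01, 0.25, 0.5, 0.75, 0.99]
values = [b(a) for a in alphas]
# values decrease in alpha, stay below e^(1 - gamma) = 1.52620511...,
# and approach 1 as alpha tends to 1.
```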

It is also possible to obtain a lower bound on the blowup time involving the right-hand side in (1.9), up to a remainder which is arbitrarily small as \(\alpha \nearrow 1\). Indeed, we have the following result:

Theorem 1.6

If \(T_\star \) is the blowup time found in Theorem 1.2, we have that for any \(\delta >0\) there exists \(c_\delta >0\) such that

$$\begin{aligned} T_\star \geqslant \frac{c_\delta ^{\frac{1-\alpha }{\alpha }}}{1+\delta } \left( \frac{1}{\Gamma (2-\alpha )}\right) ^{1/\alpha }. \end{aligned}$$
(1.11)

Remark 1.7

We observe that the right-hand side of (1.11) approaches \(1/(1+\delta )\) as \(\alpha \nearrow 1\), which, for small \(\delta \), recovers the unitary blowup time of the classical solution in (1.8).

The strategy to prove Theorems 1.4 and 1.6 relies on the construction of appropriate barriers and a comparison result: To this end, a careful choice of structural parameters is needed (and, of course, this choice plays a crucial role in the bounds of the blowup times detected in this note).

Remark 1.8

Of course, the blowup time estimates in Theorems 1.4 and 1.6 are specific for the singular solution in (1.4), and other singular solutions have in general different blowup times. As a matter of fact, by scaling, if u is a solution of (1.2), then so is \(u^{(\lambda )}(x,t):=u(\lambda ^\alpha x,\lambda t)\), for all \(\lambda >0\), with initial datum \(u^{(\lambda )}_0(x):=u_0(\lambda ^\alpha x)\). In particular, if u is the function in (1.4) and \(T_\star \) is its blowup time, then the blowup time of \(u^{(\lambda )}\) is \(T_\star /\lambda \). That is, when \(\lambda \in (1,+\infty )\), the slope of the initial datum increases and accordingly the blowup time becomes smaller. This is the reason for which we choose the setting in (1.4) to normalize the slope of the initial datum to be unitary.

Remark 1.9

It is interesting to observe the specific effect of the Caputo derivative on the solutions in simple and explicit examples. From our perspective, though the Caputo derivative is commonly viewed as encoding a single “memory” effect, the system actually distinguishes between a short-term memory effect, which enhances the role of the forcing terms, and a long-term memory effect, which tends to bring the solution back toward its past configurations.

To understand our point of view on this phenomenon, one can consider, for \(\alpha \in (0,1)\), the solution \(u=u(t)\) of the linear equation

$$\begin{aligned} {\left\{ \begin{array}{ll} {}^C \! D^\alpha _{0,+} u(t)=\displaystyle \sum _{k=1}^N \delta _{p_k}(t),\\ u(0)=0,\end{array}\right. } \end{aligned}$$
(1.12)

where \(0<p_1<\dots <p_N\) and \(\delta _p\) is the Dirac delta at the point \(p\in {\mathbb {R}}\).

When \(\alpha =1\), Eq. (1.12) reduces to the ordinary differential equation with impulsive forcing term given by

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{u}}(t)=\displaystyle \sum _{k=1}^N \delta _{p_k}(t),\\ u(0)=0.\end{array}\right. } \end{aligned}$$
(1.13)

Up to negligible sets, the solution of (1.13) is the step function

$$\begin{aligned} u(t) =\sharp \{ k\in \{1,\dots , N\} { \text{ s.t. } } p_k<t\}= \sum _{\begin{array}{c} 1\leqslant k\leqslant N\\ p_k<t \end{array}} 1. \end{aligned}$$
(1.14)

On the other hand, Eq. (1.12) is a Volterra-type problem whose explicit solution is given by

$$\begin{aligned} u(t) =\frac{1}{\Gamma (\alpha )} \sum _{\begin{array}{c} 1\leqslant k\leqslant N \\ p_k<t \end{array}} (t-p_k)^{\alpha -1}. \end{aligned}$$
(1.15)

Notice that the solution in (1.15) recovers (1.14) as \(\alpha \nearrow 1\). Nevertheless, the sharp geometric difference between the solutions in (1.14) and (1.15) is apparent (see Fig. 1).

Indeed, while the classical solutions experience a unit jump at the times where the impulses take place, the structure of the fractional solutions exhibits a more complicated, and “less monotone,” behavior. More specifically, on the one hand, for fractional solutions, the short-term memory effect of each impulse is to create a singularity toward infinity, and in this sense its impact on the solution is much stronger than in the classical case. On the other hand, the solution in (1.15) approaches zero outside the times in which the impulses occur, thus tending to recover the initial datum in view of a long-term memory effect.
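The behavior just described can be explored numerically from the explicit formulas (1.14) and (1.15); the following sketch (illustrative only) reproduces the qualitative features visible in Fig. 1:

```python
import math

IMPULSES = (1.0, 2.0, 3.0, 4.0)  # the times p_1, ..., p_4 used in Fig. 1

def u_classical(t):
    # Step-function solution (1.14) of the classical problem (1.13)
    return sum(1 for p in IMPULSES if p < t)

def u_fractional(alpha, t):
    # Explicit solution (1.15) of the fractional problem (1.12)
    return sum((t - p) ** (alpha - 1.0) for p in IMPULSES if p < t) / math.gamma(alpha)

# As alpha tends to 1, (1.15) recovers (1.14) away from the impulse times;
# for alpha < 1 each impulse produces a singularity followed by decay.
```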

Fig. 1

Plot of the solutions in (1.14) and (1.15) with the following parameters: \(N=4\), \(p_1=1\), \(p_2=2\), \(p_3=3\) and \(p_4=4\). The different plots correspond to the cases \(\alpha =\frac{1}{10}\), \(\alpha =\frac{1}{4}\), \(\alpha =\frac{1}{2}\), \(\alpha =\frac{3}{4}\), \(\alpha =\frac{7}{8}\), \(\alpha =\frac{9}{10}\), \(\alpha =\frac{99}{100}\) and \(\alpha =1\)

It is interesting to recall that monotonicity, comparison principles and blowup analysis for fractional equations have also been considered in Feng et al. (2018), Feng et al. (2018). See also Li and Liu (2018) for compactness criteria, Allen et al. (2016, 2017) for time-fractional equations of porous medium type and Dipierro et al. (2020) for the analysis of the fundamental solutions of time-fractional equations.

The paper is organized as follows: Sects. 2, 3 and 4 are devoted to the proofs of Theorems 1.2, 1.4 and 1.6, respectively. In Sect. 5, we propose a job market motivation for Eq. (1.2).

2 Proof of Theorem 1.2

The proof of Theorem 1.2 relies on a separation of variables method (as will be apparent in the definition of the solution u in (2.14) at the end of this proof). To make this method work, one needs a careful analysis of the solutions of time-fractional equations, which we now discuss in detail. Having fixed \(M\in {\mathbb {N}}\cap [ 4,+\infty )\), for any \(r\in {\mathbb {R}}\) we define \(f_M(r):=\min \{ r^2,M^2\}\). We let \(v_M\) be the solution of the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} {}^C \! D^\alpha _{0,+} v_M(t) = f_M(v_M(t)) &{} { \text{ for } }t\in (0,+\infty ),\\ v_M(0)=1. \end{array}\right. } \end{aligned}$$
(2.1)

The existence and uniqueness of the solution \(v_M\), which is continuous up to \(t=0\), is warranted by Theorem 2 on page 304 of Kilbas and Marzan (2004). In addition, by Theorem 1 on page 300 of Kilbas and Marzan (2004), we know that this solution can be represented in an integral form by the relation

$$\begin{aligned} v_M(t)=1+\frac{1}{\Gamma (\alpha )}\int _0^t \frac{f_M(v_M(\tau ))}{(t-\tau )^{1-\alpha }}\,\mathrm{d}\tau . \end{aligned}$$
(2.2)
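The integral formulation (2.2) also suggests a direct numerical approximation of \(v_M\) by Picard (fixed-point) iteration on a uniform mesh, with the weakly singular kernel integrated exactly on each interval. The following sketch is purely illustrative, with arbitrarily chosen mesh and iteration parameters:

```python
import math

def solve_truncated(alpha, M=4.0, T=0.3, n=200, iters=60):
    # Picard iteration for the integral equation (2.2):
    #   v(t) = 1 + (1/Gamma(alpha)) * int_0^t f_M(v(s)) (t-s)^(alpha-1) ds,
    # with the truncated nonlinearity f_M(r) = min(r^2, M^2).
    h = T / n
    v = [1.0] * (n + 1)
    for _ in range(iters):
        new = [1.0]
        for i in range(1, n + 1):
            t = i * h
            acc = 0.0
            for k in range(i):
                f = min(v[k] ** 2, M * M)  # f_M evaluated at the left endpoint
                # exact integral of (t - s)^(alpha - 1) over [kh, (k+1)h]
                w = ((t - k * h) ** alpha - (t - (k + 1) * h) ** alpha) / alpha
                acc += f * w
            new.append(1.0 + acc / math.gamma(alpha))
        v = new
    return v  # approximate values of v_M at the mesh points 0, h, ..., T
```

Consistently with the discussion below, the computed profile starts from 1, is nondecreasing, and dominates \(1+t^\alpha/\Gamma(1+\alpha)\), since \(f_M(v_M)\geqslant 1\).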

In particular, since \(f_M\geqslant 0\), we have that \(v_M\geqslant 1\). Also, by continuity at \(t=0\), there exists \(\delta >0\) such that

$$\begin{aligned} v_4(t)\leqslant 2 \text{ for } \text{ all } t\in (0,\delta ). \end{aligned}$$
(2.3)

We claim that

$$\begin{aligned} v_M(t)=v_4(t) \text{ for } \text{ all } t\in (0,\delta ) \text{ and } \text{ all } M\geqslant 4. \end{aligned}$$
(2.4)

Indeed, if \(t\in (0,\delta )\) and \(M\geqslant 4\), we have that

$$\begin{aligned} f_M(v_4(t))=\min \{ v_4^2(t),M^2\}=v_4^2(t)=\min \{ v_4^2(t),4^2\}=f_4(v_4(t)), \end{aligned}$$

thanks to (2.3), and therefore \({}^C \! D^\alpha _{0,+} v_4(t) = f_M(v_4(t))\) for all \(t\in (0,\delta )\). Then, the uniqueness of the solution of the Cauchy problem in (2.1) gives (2.4), as desired.

Furthermore, we observe that if \(M_2\geqslant M_1\), then \(f_{M_2}\geqslant f_{M_1}\) and then

$$\begin{aligned} {}^C \! D^\alpha _{0,+} v_{M_2}(t) = f_{M_2}(v_{M_2}(t)) \geqslant f_{M_1}(v_{M_2}(t)). \end{aligned}$$

Consequently, by the Comparison Principle in Theorem 4.10 on page 2894 in Li and Liu (2018), we conclude that \(v_{M_2}\geqslant v_{M_1}\). Therefore, for every \(t\geqslant 0\), we can define

$$\begin{aligned} v(t):=\lim _{M\rightarrow +\infty } v_M(t)=\sup _{M\in {\mathbb {N}}\cap [4,+\infty )} v_M(t)\in [1,+\infty )\cup \{+\infty \}. \end{aligned}$$
(2.5)

By (2.4), we know that

$$\begin{aligned} v(t)=v_4(t)\leqslant \sup _{[0,\delta ]} v_4<+\infty \qquad \text{ for } \text{ all } t\in (0,\delta ), \end{aligned}$$
(2.6)

and hence we can consider the largest \(T_\star \in (0,+\infty )\cup \{+\infty \}\) such that

$$\begin{aligned} \sup _{t\in [0,T_0]}v(t)<+\infty \qquad \text{ for } \text{ all } T_0\in (0,T_\star ). \end{aligned}$$
(2.7)

By (2.6), we have that \(T_\star \geqslant \delta \). We claim that

$$\begin{aligned} {\left\{ \begin{array}{ll} {}^C \! D^\alpha _{0,+} v(t) = v^2(t) &{} { \text{ for } }t\in (0,T_\star ),\\ v(0)=1. \end{array}\right. } \end{aligned}$$
(2.8)

To prove this, we let \(T_0\in (0,T_\star )\) and we exploit (2.7) to see that

$$\begin{aligned} M_0:=\sup _{t\in [0,T_0]}v(t)<+\infty , \end{aligned}$$

and hence, for every \(t\in (0,T_0)\) and every \(M\geqslant M_0\),

$$\begin{aligned} f_M(v_{M_0}(t))=\min \{v_{M_0}^2(t), M^2\} =v_{M_0}^2(t)=\min \{v_{M_0}^2(t), M_0^2\}=f_{M_0}(v_{M_0}(t)). \end{aligned}$$

This gives that \({}^C \! D^\alpha _{0,+} v_{M_0}(t)=f_{M_0}(v_{M_0}(t)) =f_M(v_{M_0}(t))\) for all \(t\in (0,T_0)\) and \(M\geqslant M_0\), and therefore, by the uniqueness of the solution of the Cauchy problem in (2.1), we find that \(v_M=v_{M_0}\) in \((0,T_0)\). This and (2.5) give that

$$\begin{aligned} M_0\geqslant v(t)=v_{M_0}(t) \qquad {\text{ for } \text{ all } }t\in (0,T_0). \end{aligned}$$

As a consequence, recalling (2.2), we obtain that, for all \(t\in (0,T_0)\), the function v satisfies the integral relation

$$\begin{aligned} v(t)&= v_{M_0}(t)=1+\frac{1}{\Gamma (\alpha )}\int _0^t \frac{f_{M_0}(v_{M_0}(\tau ))}{ (t-\tau )^{1-\alpha }}\,\mathrm{d}\tau \\&=1+\frac{1}{\Gamma (\alpha )}\int _0^t \frac{f_{M_0}(v(\tau ))}{ (t-\tau )^{1-\alpha }}\,\mathrm{d}\tau = 1+\frac{1}{\Gamma (\alpha )}\int _0^t \frac{v^2(\tau )}{ (t-\tau )^{1-\alpha }}\,\mathrm{d}\tau , \end{aligned}$$

and thus, by Theorem 1 in Kilbas and Marzan (2004), we obtain (2.8), as desired.

Now we claim that

$$\begin{aligned} T_\star <+\infty . \end{aligned}$$
(2.9)

To this end, we argue by contradiction and assume that \(T_\star =+\infty \). We let \(\lambda \geqslant 2\), \(T>0\) (which will be taken as large as we wish in what follows), and

$$\begin{aligned} \phi (t):={\left\{ \begin{array}{ll} \left( 1-\displaystyle \frac{t}{T}\right) ^\lambda &{} { \text{ if } }t\in [0,T],\\ 0&{} { \text{ if } }t\in (T,+\infty ). \end{array}\right. } \end{aligned}$$

We know [see Lemmata 1 and 2 in Furati and Kirane (2008)] that

$$\begin{aligned} \begin{aligned}&\int ^T_0 D^\alpha _{T,-}\phi (t)\,\mathrm{d}t=\frac{\lambda \Gamma (\lambda -\alpha )}{ (\lambda -\alpha +1)\,\Gamma (\lambda -2\alpha +1)}\,T^{1-\alpha }, \\&\int ^T_0 \frac{| D^\alpha _{T,-}\phi (t)|^2}{ \phi (t) }\,\mathrm{d}t=\frac{\lambda ^2}{\lambda +1-2\alpha } \left( \frac{\Gamma (\lambda -\alpha )}{\Gamma (\lambda +1-2\alpha )}\right) ^2 T^{1-2\alpha }. \end{aligned} \end{aligned}$$
(2.10)

We also recall the left Riemann–Liouville derivative of order \(\alpha \in (0,1)\) with initial time \(t_0\) for \(t\in (t_0,+\infty )\), given by

$$\begin{aligned} D^\alpha _{t_0,+} f(t):= \frac{1}{\Gamma (1-\alpha )}\frac{\mathrm{d}}{\mathrm{d}t} \int _{t_0}^t \frac{f(\tau )}{(t-\tau )^\alpha }\,\mathrm{d}\tau , \end{aligned}$$

and we point out that

$$\begin{aligned} {}^C \! D^\alpha _{t_0,+} f(t)= D^\alpha _{t_0,+} \big ( f(t)-f(t_0)\big ). \end{aligned}$$

This and (2.8) give that

$$\begin{aligned} v^2(t)= {}^C \! D^\alpha _{0,+} v(t) = D^\alpha _{0,+} \big ( v(t)-v(0)\big )=D^\alpha _{0,+} w(t), \end{aligned}$$
(2.11)

where \(w(t):=v(t)-1\).

It is also useful to consider the right Riemann–Liouville derivative of order \(\alpha \in (0,1)\) with final time \(t_0\) for \(t\in (-\infty ,t_0)\), given by

$$\begin{aligned} D^\alpha _{t_0,-} f(t):= -\frac{1}{\Gamma (1-\alpha )}\frac{\mathrm{d}}{\mathrm{d}t} \int ^{t_0}_t \frac{f(\tau )}{(\tau -t)^\alpha }\,\mathrm{d}\tau . \end{aligned}$$

Integrating by parts [see Corollary 2 on page 46 of Samko et al. (1993), or formula (15) in Kirane and Malik (2010)], and recalling (2.11), we obtain that

$$\begin{aligned} \begin{aligned} \int _0^T \phi (t)\,v^2(t)\,\mathrm{d}t=&\int _0^T \phi (t)\,D^\alpha _{0,+} w(t)\,\mathrm{d}t\\ =&\int _0^T D^\alpha _{T,-} \phi (t)\,w(t)\,\mathrm{d}t= \int _0^T D^\alpha _{T,-} \phi (t)\,\big (v(t)-1\big )\,\mathrm{d}t.\end{aligned} \end{aligned}$$
(2.12)

From this and (2.10), we find that

$$\begin{aligned} \int _0^T \phi (t)\,v^2(t)\,\mathrm{d}t= \int _0^T D^\alpha _{T,-} \phi (t)\,v(t)\,\mathrm{d}t-C_1\,T^{1-\alpha }, \end{aligned}$$

for some \(C_1>0\) independent of T.

Furthermore,

$$\begin{aligned} \int _0^T D^\alpha _{T,-} \phi (t)\,v(t)\,\mathrm{d}t&= \int _0^T \frac{D^\alpha _{T,-} \phi (t)}{\sqrt{\phi (t)}}\,{\sqrt{\phi (t)}}\,v(t)\,\mathrm{d}t\\&\leqslant \frac{1}{2}\int _0^T \frac{|D^\alpha _{T,-} \phi (t)|^2}{{\phi (t)}}\,\mathrm{d}t+ \frac{1}{2}\int _0^T {{\phi (t)}}\,v^2(t)\,\mathrm{d}t\\&= C_2\,T^{1-2\alpha }+ \frac{1}{2}\int _0^T {{\phi (t)}}\,v^2(t)\,\mathrm{d}t, \end{aligned}$$

thanks to (2.10), for some \(C_2>0\) independent of T. As a consequence, recalling (2.12), we conclude that

$$\begin{aligned} \frac{1}{2}\int _0^T {{\phi (t)}}\,v^2(t)\,\mathrm{d}t\leqslant C_2\,T^{1-2\alpha }-C_1\,T^{1-\alpha }. \end{aligned}$$

Therefore, recalling that \(v\geqslant 1\) in view of (2.5),

$$\begin{aligned}&C_2\,T^{1-2\alpha }-C_1\,T^{1-\alpha } \geqslant \frac{1}{2}\int _0^T {{\phi (t)}}\,\mathrm{d}t= \frac{1}{2}\int _0^T\left( 1-\displaystyle \frac{t}{T}\right) ^\lambda \,\mathrm{d}t=\frac{T}{2(1+\lambda )}, \end{aligned}$$

and accordingly

$$\begin{aligned} 0=\lim _{T\rightarrow +\infty } C_2\,T^{-2\alpha }-C_1\,T^{-\alpha }\geqslant \frac{1}{2(1+\lambda )}, \end{aligned}$$

which is a contradiction, thus completing the proof of (2.9).

Then, from (2.7) and (2.9), we obtain that

$$\begin{aligned} \limsup _{t\nearrow T_\star } v(t)=+\infty . \end{aligned}$$

Hence, we consider a sequence \(t_n\nearrow T_\star \) such that

$$\begin{aligned} \lim _{n\rightarrow +\infty } v(t_n)=+\infty , \end{aligned}$$
(2.13)

and we define

$$\begin{aligned} u(x,t):= -x\,v(t). \end{aligned}$$
(2.14)

For every \(t\in (0,T_\star )\), we have that

$$\begin{aligned} {}^C \! D^\alpha _{0,+} u(x,t) + u(x,t)\,\partial _x u(x,t)= -x\,{}^C \! D^\alpha _{0,+} v(t)+x\,v^2(t)=0, \end{aligned}$$

thanks to (2.8), and also \(u(x,0)=-x\,v(0)=-x\). These observations and (2.13) prove Theorem 1.2.

3 Proof of Theorem 1.4

We set

$$\begin{aligned} b=b(\alpha ):=\left( \frac{1}{\Gamma (2-\alpha )}\right) ^{1/\alpha }. \end{aligned}$$
(3.1)

This choice of b is useful to make a suitable barrier satisfy a convenient inequality, with a precise determination of the coefficients involved, allowing us to use an appropriate comparison result, as will be apparent in formula (3.2). This strategy will lead to an upper bound on the blowup time, thus proving the desired claim in (1.9). From this, we will obtain the uniform bound in (1.10) by detecting suitable monotonicity properties of the map \(\alpha \mapsto b(\alpha )\) in light of polygamma functions. The technical details are as follows.

For any \(t\in (0,b)\), let also

$$\begin{aligned} w(t):=\frac{b}{b-t}. \end{aligned}$$

Notice that \(w(0)=1\). Moreover, for any \(t\in (0,b)\) and any \(\tau \in (0,t)\), we have that

$$\begin{aligned} {\dot{w}}(\tau )=\frac{b}{(b-\tau )^2} \leqslant \frac{b}{(b-t)^2} =\frac{ w^2(t) }{b} . \end{aligned}$$

Consequently, by (1.1), for all \(t\in (0,b)\),

$$\begin{aligned} \begin{aligned} {}^C \! D^\alpha _{0,+} w(t)&= \frac{1}{\Gamma (1-\alpha )} \int _{0}^t \frac{\dot{w}(\tau )}{(t-\tau )^\alpha }\,\mathrm{d}\tau \leqslant \frac{w^2(t)}{b\,\Gamma (1-\alpha )} \int _{0}^t \frac{\mathrm{d}\tau }{(t-\tau )^\alpha }\\&= \frac{t^{1-\alpha } \,w^2(t)}{b\,\Gamma (1-\alpha )\,(1-\alpha )}= \frac{t^{1-\alpha } \,w^2(t)}{b\,\Gamma (2-\alpha )}\leqslant \frac{b^{1-\alpha } \,w^2(t)}{b\,\Gamma (2-\alpha )}\\ {}&= \frac{w^2(t)}{b^\alpha \,\Gamma (2-\alpha )}=w^2(t). \end{aligned} \end{aligned}$$
(3.2)

Therefore, using the Comparison Principle in Theorem 4.10 on page 2894 in Li and Liu (2018), if v is as in (2.8), we find that \(v\geqslant w\) in their common domain of definition. This, (2.14), and the fact that w diverges at \(t=b\) yield that

$$\begin{aligned} T_\star \leqslant b=b(\alpha ), \end{aligned}$$
(3.3)

which, together with (3.1), establishes (1.9), as desired.
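The barrier inequality established in (3.2) can also be observed numerically, by discretizing the Caputo derivative of \(w\) with an L1-type quadrature and comparing it with \(w^2\) at sample points; the mesh and sample points below are arbitrary illustrative choices:

```python
import math

def caputo_l1(f, alpha, t, n=4000):
    # L1 quadrature of the left Caputo derivative (1.1) with t_0 = 0
    h = t / n
    total = 0.0
    for k in range(n):
        df = f((k + 1) * h) - f(k * h)
        total += (df / h) * ((t - k * h) ** (1 - alpha) - (t - (k + 1) * h) ** (1 - alpha))
    return total / ((1 - alpha) * math.gamma(1 - alpha))

alpha = 0.5
b = (1.0 / math.gamma(2.0 - alpha)) ** (1.0 / alpha)  # the choice (3.1)
w = lambda t: b / (b - t)                             # the barrier of Section 3
# At any t in (0, b) one observes caputo_l1(w, alpha, t) <= w(t)^2,
# in agreement with (3.2).
```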

Now we prove (1.10). For this, we first show that the map \((0,1)\ni \alpha \mapsto b(\alpha )\) that was introduced in (3.1) is monotone. To this end, we recall the polygamma functions for \(\tau \in (1,2)\) and \(n\in {\mathbb {N}}\) with their integral representations, namely

$$\begin{aligned} \psi _0(\tau ):=\frac{\mathrm{d}}{\mathrm{d}\tau }\log \big (\Gamma (\tau )\big )=-\gamma +\int _{0}^{\infty }\frac{e^{-t}-e^{-t\tau }}{1-e^{-t}}\,\mathrm{d}t \qquad { \text{ and } }\qquad \psi _n(\tau ):=\frac{\mathrm{d}^n \psi _0}{\mathrm{d}\tau ^n}(\tau )=(-1)^{n+1}\int _{0}^{\infty }\frac{t^n e^{-t\tau }}{1-e^{-t}}\,\mathrm{d}t.\end{aligned}$$
We observe, in particular, that, for all \(\tau \in (1,2)\),

$$\begin{aligned} \psi _1(\tau )=\int _{0}^{\infty }{\frac{t e^{-t\tau }}{1-e^{-t}}}\,\mathrm{d}t>0. \end{aligned}$$
(3.4)

Let also, for all \(\tau \in (1,2)\),

$$\begin{aligned} \xi (\tau ):=\log (\Gamma (\tau ))+(2-\tau )\psi _0(\tau ). \end{aligned}$$

We see that

$$\begin{aligned} \xi '(\tau )=\psi _0(\tau )-\psi _0(\tau )+(2-\tau )\psi _1(\tau )>0, \end{aligned}$$

thanks to (3.4) and therefore, for all \(\tau \in (1,2)\),

$$\begin{aligned} 0<\int _\tau ^2\xi '(\sigma )\,d\sigma =\xi (2)-\xi (\tau )= \log (\Gamma (2))-\xi (\tau )=-\xi (\tau ). \end{aligned}$$
(3.5)

Now we define

$$\begin{aligned} \lambda (\tau ):=\frac{\log (\Gamma (\tau ))}{2-\tau }. \end{aligned}$$

We have that, for all \(\tau \in (1,2)\),

$$\begin{aligned} \lambda '(\tau )=\frac{\log (\Gamma (\tau ))}{(2-\tau )^2}+ \frac{\psi _0(\tau )}{2-\tau }=\frac{\xi (\tau )}{(2-\tau )^2}<0, \end{aligned}$$

due to (3.5).

Therefore, the function \((1,2)\ni \tau \mapsto \lambda (\tau )\) is decreasing, and hence so is the function \((1,2)\ni \tau \mapsto e^{\lambda (\tau )}=:\Lambda (\tau )\). Hence, using the substitution \(\tau :=2-\alpha \), with \(\alpha \in (0,1)\), we deduce that the following function is increasing:

$$\begin{aligned} \Lambda (2-\alpha )&= e^{\lambda (2-\alpha )}= \exp \left( \frac{\log (\Gamma (2-\alpha ))}{\alpha }\right) \\ {}&= \exp \left( {\log (\Gamma ^{1/\alpha }(2-\alpha ))}\right) = \Gamma ^{1/\alpha }(2-\alpha )=\frac{1}{b(\alpha )}, \end{aligned}$$

thanks to (3.1).

Consequently, the function \((0,1)\ni \alpha \mapsto b(\alpha )\) is decreasing; hence, its supremum is given by its limit as \(\alpha \searrow 0\). This and (3.3) give that

$$\begin{aligned} T_\star \leqslant \lim _{\alpha \searrow 0} b(\alpha ). \end{aligned}$$
(3.6)

Furthermore, using L’Hôpital’s Rule,

$$\begin{aligned}&\lim _{\alpha \searrow 0}\frac{\log (\Gamma (2-\alpha ))}{\alpha }= -\lim _{\alpha \searrow 0}\psi _0(2-\alpha )=-\psi _0(2)=\gamma -1, \end{aligned}$$

where \(\gamma \) is the Euler–Mascheroni constant, and therefore

$$\begin{aligned} \lim _{\alpha \searrow 0}b(\alpha )= \lim _{\alpha \searrow 0} \exp \left( -\frac{\log (\Gamma (2-\alpha ))}{\alpha }\right) =e^{1-\gamma }. \end{aligned}$$

This and (3.6) give the desired result in (1.10).

4 Proof of Theorem 1.6

To establish Theorem 1.6, we will introduce a suitable barrier [defined in formula (4.6)] that we exploit in combination with a comparison result to obtain lower bounds on the blowup time and prove the desired claim in (1.11). The construction of this auxiliary barrier relies on choosing some parameters in an algebraically convenient way [for instance, so as to satisfy the inequality in formula (4.8)]. The computational details of this proof are as follows.

We let \(\delta >0\) be as in the statement of Theorem 1.6, and

$$\begin{aligned} \kappa :=\sqrt{1+\delta }-1>0. \end{aligned}$$
(4.1)

We define

$$\begin{aligned} \eta :=\frac{(1+\kappa )^2}{\kappa ^2},&\quad d:=\left( \frac{1}{\Gamma (2-\alpha )\,\kappa \,\eta \,(1+\eta )}\right) ^{\frac{1}{\alpha }},\\ a:=\frac{\Gamma (2-\alpha )}{d^{1-\alpha }},&\quad b:=(1+\kappa )a. \end{aligned}$$

Let also

$$\begin{aligned} T:= \frac{1}{b}-(1+\eta )d. \end{aligned}$$
(4.2)

In light of (4.2), we remark that

$$\begin{aligned} T\,&=\frac{1}{(1+\kappa )a}-(1+\eta )d\nonumber \\&=\frac{d^{1-\alpha }}{(1+\kappa )\,\Gamma (2-\alpha )}-(1+\eta )d\nonumber \\&=(1+\eta )\,d\,\left( \frac{d^{-\alpha }}{(1+\kappa )\,\Gamma (2-\alpha )\,(1+\eta )}-1 \right) \nonumber \\&=(1+\eta )\,d\,\left( \frac{\kappa \eta }{1+\kappa }-1\right) \nonumber \\&=(1+\eta )\,d\,\left( \frac{1+\kappa }{\kappa }-1\right) \\&=\frac{(1+\eta )\,d}{\kappa }\nonumber \\&=\frac{1+\eta }{\kappa }\; \left( \frac{1}{\Gamma (2-\alpha )\,\kappa \,\eta \,(1+\eta )}\right) ^{\frac{1}{\alpha }} \nonumber \\&= \frac{1}{\big ( \Gamma (2-\alpha )\big )^{\frac{1}{\alpha }}\,\kappa ^{\frac{1+\alpha }{\alpha }}\,\eta ^{\frac{1}{\alpha }}\,(1+\eta )^{\frac{1-\alpha }{\alpha }}} \nonumber \\&= \frac{\kappa ^{\frac{3(1-\alpha )}{\alpha }}}{ \big ( \Gamma (2-\alpha )\big )^{\frac{1}{\alpha }}\,(1+\kappa )^{\frac{2}{\alpha }}\,(1+2\kappa +2\kappa ^2)^{\frac{1-\alpha }{\alpha }}}.\nonumber \end{aligned}$$
(4.3)

Recalling (4.1), we can also define

$$\begin{aligned} c_\delta := \frac{\kappa ^{3}}{(1+\kappa )^{2}\,(1+2\kappa +2\kappa ^2)}, \end{aligned}$$

and then (4.3) becomes

$$\begin{aligned} T=\frac{c_\delta ^{\frac{1-\alpha }{\alpha }}}{ \big ( \Gamma (2-\alpha )\big )^{\frac{1}{\alpha }}\,(1+\kappa )^{2}}= \frac{c_\delta ^{\frac{1-\alpha }{\alpha }}}{ \big ( \Gamma (2-\alpha )\big )^{\frac{1}{\alpha }}\,(1+\delta )}, \end{aligned}$$
(4.4)

which coincides with the right-hand side of (1.11).
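The chain of identities leading from (4.2) to (4.4) can be sanity-checked numerically, by computing \(T\) in both ways for sample values of \(\alpha\) and \(\delta\) (the sample values below are arbitrary); this sketch only double-checks the algebra above:

```python
import math

def T_from_definition(alpha, delta):
    # T as defined in (4.2), with kappa, eta, d, a, b as in Section 4
    kappa = math.sqrt(1.0 + delta) - 1.0  # (4.1)
    eta = (1.0 + kappa) ** 2 / kappa ** 2
    G = math.gamma(2.0 - alpha)
    d = (1.0 / (G * kappa * eta * (1.0 + eta))) ** (1.0 / alpha)
    a = G / d ** (1.0 - alpha)
    b = (1.0 + kappa) * a
    return 1.0 / b - (1.0 + eta) * d

def T_closed_form(alpha, delta):
    # T as rewritten in (4.4), i.e. the right-hand side of (1.11)
    kappa = math.sqrt(1.0 + delta) - 1.0
    c_delta = kappa ** 3 / ((1.0 + kappa) ** 2 * (1.0 + 2.0 * kappa + 2.0 * kappa ** 2))
    G = math.gamma(2.0 - alpha)
    return c_delta ** ((1.0 - alpha) / alpha) / (G ** (1.0 / alpha) * (1.0 + delta))
```

The two computations agree to machine precision, and the resulting lower bound never exceeds the upper bound in (1.9).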

Therefore, to complete the proof of Theorem 1.6, it is enough to show that

$$\begin{aligned} T_\star \geqslant T. \end{aligned}$$
(4.5)

To this end, for all \(t\in (0,T)\), we define

$$\begin{aligned} z(t):=\frac{b}{a(1-bt)}+1-\frac{b}{a}. \end{aligned}$$
(4.6)

Notice that \(z(0)=1\). Moreover, for all \(t\in (0,T)\) and \(\tau \in (0,t)\) we have that

$$\begin{aligned} \begin{aligned} \frac{b}{a(1-b\tau )}-z(\tau +d)\,&= \frac{b}{a(1-b\tau )}-\frac{b}{a(1-bd-b\tau )}-1+\frac{b}{a}\\&= \frac{1+\kappa }{ 1-b\tau }-\frac{1+\kappa }{1-bd-b\tau }+\kappa \\&= -\frac{(1+\kappa )\,b\,d}{ (1-b\tau )(1-bd-b\tau ) }+\kappa \\&\geqslant -\frac{(1+\kappa )\,b\,d}{ (1-bT)(1-bd-bT) }+\kappa \\&= -\frac{(1+\kappa )}{ (1+\eta )\,\eta \,b\,d }+\kappa . \end{aligned} \end{aligned}$$
(4.7)

Hence, since

$$\begin{aligned} bd=(1+\kappa )ad= (1+\kappa )\,\Gamma (2-\alpha )\,d^{\alpha }= \frac{(1+\kappa )}{\kappa \,\eta \,(1+\eta )}, \end{aligned}$$

we see from (4.7) that

$$\begin{aligned} \frac{b}{a(1-b\tau )}-z(\tau +d)\geqslant -\kappa +\kappa =0, \end{aligned}$$

and therefore

$$\begin{aligned} \frac{b^2}{a^2(1-b\tau )^2}\geqslant z^2(\tau +d). \end{aligned}$$

As a consequence, we conclude that

$$\begin{aligned} \dot{z}(\tau )=\frac{b^2}{a(1-b\tau )^2}\geqslant a\,z^2(\tau +d). \end{aligned}$$

Accordingly, by (1.1), for all \(t\in (0,T)\),

$$\begin{aligned} {}^C \! D^\alpha _{0,+} z(t)= & {} \frac{1}{\Gamma (1-\alpha )} \int _{0}^t \frac{\dot{z}(\tau )}{(t-\tau )^\alpha }\,\mathrm{d}\tau \\\geqslant & {} \frac{a}{\Gamma (1-\alpha )} \int _{t-d}^t \frac{ z^2(\tau +d)}{(t-\tau )^\alpha }\,\mathrm{d}\tau . \end{aligned}$$

Consequently, using the fact that z is increasing,

$$\begin{aligned} {}^C \! D^\alpha _{0,+} z(t)\geqslant \frac{a}{\Gamma (1-\alpha )} \int _{t-d}^t \frac{ z^2(t)}{(t-\tau )^\alpha }\,\mathrm{d}\tau = \frac{a\,d^{1-\alpha }\,z^2(t)}{\Gamma (2-\alpha )}=z^2(t). \end{aligned}$$
(4.8)

Then, recalling (2.8) and exploiting the Comparison Principle in Theorem 4.10 on page 2894 in Li and Liu (2018), we obtain that \(v\leqslant z\) in their common domain of definition. In particular, this gives (4.5), and so the proof of Theorem 1.6 is complete.

5 A Motivation for (1.2) from the Job Market

In this section, we give a simple, but concrete, motivation for the time-fractional Burgers’ equation in (1.2), by building a model of an ideal job market from a few basic principles. The discussion that we present here is a modification of classical models proposed for fluid dynamics and for traffic flow on a highway.

We fix parameters \(\delta \), \(\varepsilon >0\), and we use the real line to describe the positions available in a company, in which workers can decide to work. More specifically, the working levels in the company are denoted by \(x\in \varepsilon {\mathbb {Z}}\), and the higher the value of x, the higher and more appealing the position is (e.g., \(x=\varepsilon \) corresponds to Brigadier, \(x=2\varepsilon \) to Major, \(x=3\varepsilon \) to Lieutenant, \(x=4\varepsilon \) to General, etc.).

We suppose that the main motivation for a worker to join the company by taking the position \(x\in \varepsilon {\mathbb {Z}}\) at time \(t\in \delta {\mathbb {N}}\) is provided by the possibility of career progression toward the successive level. If we denote by \(\rho \) the number of people employed in a given position at a given time, and by v the velocity of career progression relative to a given position at a given time, the “group velocity” of career progression for a given position at a given time is obtained by the product \(p:=\rho v\).

We suppose that the potential worker who is possibly entering the company at the level \(x\in \varepsilon {\mathbb {Z}}\) will look at the value of p for her or his prospective position and compare it with the value of p relative to the subsequent level \(x+\varepsilon \); in this model, this comparison constitutes the main drive for the worker to join the company. At time \(t\in \delta {\mathbb {N}}\), this driving force is therefore quantified by

$$\begin{aligned} {{\mathcal {D}}}(x,t):= {\hat{c}}\big (p(x+\varepsilon ,t)-p(x,t)\big )={\hat{c}}\Big (\rho (x+\varepsilon ,t)\,v(x+\varepsilon ,t)- \rho (x,t)\,v(x,t)\Big ), \end{aligned}$$
(5.1)

for a normalizing constant \({\hat{c}}>0\). Then, we assume that the potential worker bases her or his decision not only on the driving force at the present time, but also on the past history of the company. Past events are weighted by a kernel \({{\mathcal {K}}}\), which makes the information coming from remote times less important than that relative to the contemporary situation. For concreteness, we suppose that the information coming from the time \(t-\tau \), with \(t=\delta N\), \(N\in {\mathbb {N}}\), and \(\tau \in \{\delta ,2\delta ,\dots , \delta N\}\), is weighted by the kernel

$$\begin{aligned} {{\mathcal {K}}}(\tau ):=\frac{\delta }{\tau ^\beta },\quad { \text{ for } \text{ some } }\beta \in (0,1). \end{aligned}$$
(5.2)

If all the potential workers argue in this way, the number of workers at time \(t=\delta N\) in the working position \(x\in \varepsilon {\mathbb {Z}}\) of the company is given by the initial number of workers, incremented by the cumulative effect of the driving force over the history of the company, weighted according to the memory effect that we have described, that is

$$\begin{aligned} \rho (x,t)=\rho (x,\delta N)=\rho (x,0)+c\, \sum _{j=1}^N {{\mathcal {D}}}(x,t-\delta j) \,{{\mathcal {K}}}(\delta j), \end{aligned}$$

for some normalizing constant \(c>0\). Hence, exploiting (5.2),

$$\begin{aligned} \rho (x,t)=\rho (x,0)+c\, \sum _{j=1}^N {{\mathcal {D}}}(x,t-\delta j) \, \frac{\delta }{(\delta j)^\beta }. \end{aligned}$$
(5.3)

Using the Riemann sum approximation of an integral, for small \(\delta \) we can replace the summation on the right-hand side of (5.3) with an integral; with this asymptotic procedure, (5.3) becomes

$$\begin{aligned} \rho (x,t)=\rho (x,0)+c\, \int _0^t {{\mathcal {D}}}(x,t-\tau ) \, \frac{\mathrm{d}\tau }{\tau ^\beta }. \end{aligned}$$
(5.4)
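The passage from the discrete memory sum in (5.3) to the integral in (5.4) can be illustrated numerically: as \(\delta \rightarrow 0\), successive refinements of the weighted sum should agree. In the sketch below the driving force is replaced by a smooth stand-in \(D(s)=\cos s\), and the values of \(\beta \) and t are assumptions of this check:

```python
import math

beta, t = 0.5, 1.0          # beta in (0,1); illustrative values
D = math.cos                # a smooth stand-in for the driving force D(x, .)

def memory_sum(delta):
    """Discrete weighted history from (5.3): sum_j D(t - delta j) * delta/(delta j)^beta."""
    N = round(t / delta)
    return sum(D(t - delta * j) * delta / (delta * j) ** beta
               for j in range(1, N + 1))

coarse, fine = memory_sum(1e-3), memory_sum(1e-4)
# As delta -> 0 the sum converges to the integral in (5.4); refinements
# should agree, slowly (roughly O(delta^(1-beta))) because of the
# integrable kernel singularity at tau = 0.
assert abs(coarse - fine) / abs(fine) < 0.05
```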

Then, we define \(\alpha :=1-\beta \in (0,1)\) and, up to a timescale, we choose \(c:=1/\Gamma (\alpha )\). In this way, we can write (5.4) as

$$\begin{aligned} \rho (x,t)&=\rho (x,0)+\frac{1}{\Gamma (\alpha )}\, \int _0^t {{\mathcal {D}}}(x,t-\tau ) \, \frac{\mathrm{d}\tau }{\tau ^{1-\alpha }}\\&=\rho (x,0)+\frac{1}{\Gamma (\alpha )}\, \int _0^t {{\mathcal {D}}}(x,\sigma ) \, \frac{\mathrm{d}\sigma }{(t-\sigma )^{1-\alpha }}, \end{aligned}$$

or, equivalently [see, e.g., Theorem 1 on page 300 of Kilbas and Marzan (2004)],

$$\begin{aligned} {}^C \! D^\alpha _{0,+} \rho (x,t)={{\mathcal {D}}}(x,t). \end{aligned}$$
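This equivalence between the integral formulation and the Caputo equation can be spot-checked on a concrete example: if \({{\mathcal {D}}}(t)=t\), the fractional integral of order \(\alpha \) gives \(\rho (t)=\rho (0)+t^{1+\alpha }/\Gamma (2+\alpha )\), and applying the Caputo derivative in its integral form should return \({{\mathcal {D}}}\). The quadrature below is a crude midpoint rule and the parameter values are illustrative:

```python
import math

alpha, t = 0.6, 1.0   # illustrative values, assumptions of this check

# rho(t) = rho(0) + t^(1+alpha)/Gamma(2+alpha) is the order-alpha fractional
# integral of D(t) = t; the Caputo derivative
#   (1/Gamma(1-alpha)) int_0^t rho'(tau) (t-tau)^(-alpha) d tau
# should then return D(t) = t.
def drho(tau):
    # d/dtau of tau^(1+alpha)/Gamma(2+alpha); the constant rho(0) drops out
    return (1 + alpha) * tau ** alpha / math.gamma(2 + alpha)

n = 100_000
h = t / n
caputo_num = sum(h * drho((k + 0.5) * h) / (t - (k + 0.5) * h) ** alpha
                 for k in range(n)) / math.gamma(1 - alpha)
# expect D(t) = t = 1, up to the slow midpoint-rule error at the weak singularity
assert abs(caputo_num - t) < 0.02
```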

Thus, recalling (5.1) (and using the normalization \({\hat{c}}:=1/\varepsilon \)),

$$\begin{aligned} {}^C \! D^\alpha _{0,+} \rho (x,t)= \frac{\rho (x+\varepsilon ,t)\,v(x+\varepsilon ,t)- \rho (x,t)\,v(x,t)}{\varepsilon }, \end{aligned}$$

and then, in the approximation of \(\varepsilon \) small,

$$\begin{aligned} {}^C \! D^\alpha _{0,+} \rho (x,t)=\partial _x\Big ( \rho (x,t)\,v(x,t)\Big ). \end{aligned}$$
(5.5)

Now we make the ansatz that the career velocity is mainly influenced by the number of people in a given position, namely this velocity is proportional to the “vacancies” in a given working level. If \(\rho _{\max }\in (0,+\infty )\) is the maximal number of workers that the market allows in any given position, we therefore assume that

$$\begin{aligned} v={\tilde{c}} (\rho _{\max }-\rho ), \end{aligned}$$
(5.6)

for a normalizing constant \({\tilde{c}}>0\). Of course, in more complicated models one can allow \(\rho _{\max }\) and \({\tilde{c}}\) to vary in space and time, but we take them to be constant to address the simplest possible case; up to scalings, we normalize \(\rho _{\max }=1\) and \({\tilde{c}}=1\).

Then, plugging (5.6) into (5.5), we obtain

$$\begin{aligned} {}^C \! D^\alpha _{0,+} \rho (x,t)= \partial _x\Big ( \rho (x,t)\,\big (1-\rho (x,t)\big )\Big ). \end{aligned}$$
(5.7)
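As an aside, the qualitative behavior of (5.7) can be explored with a simple explicit time stepping based on the standard L1 discretization of the Caputo derivative. This is a sketch under our own discretization choices (central differences in space on a periodic grid), not a scheme taken from the paper. A basic consistency check is that constant data at the level \(\rho =1/2\), for which the flux gradient vanishes, is left unchanged:

```python
import math

def step_l1(history, alpha, dt, dx):
    """Advance (5.7) one step; `history` is the list [rho^0, ..., rho^{n-1}].

    L1 formula: ^C D^alpha rho(t_n) ~ (Gamma(2-alpha) dt^alpha)^(-1) *
    sum_{k=0}^{n-1} b_{n-1-k} (rho^{k+1} - rho^k),  b_j = (j+1)^(1-alpha) - j^(1-alpha).
    Since b_0 = 1, we can solve explicitly for rho^n with the flux lagged.
    """
    g = math.gamma(2 - alpha)
    n, rho = len(history), history[-1]
    m = len(rho)
    # central difference of the flux rho * (1 - rho), periodic in x
    flux = [r * (1 - r) for r in rho]
    F = [(flux[(i + 1) % m] - flux[(i - 1) % m]) / (2 * dx) for i in range(m)]
    b = lambda j: (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    new = []
    for i in range(m):
        # memory term: sum_{k=0}^{n-2} b_{n-1-k} (rho^{k+1}_i - rho^k_i)
        mem = sum(b(n - 1 - k) * (history[k + 1][i] - history[k][i])
                  for k in range(n - 1))
        new.append(rho[i] - mem + g * dt ** alpha * F[i])
    return new

# Constant data at the critical level rho = 1/2 has zero flux gradient,
# so the scheme must leave it unchanged.
alpha, dt, dx, m = 0.6, 0.01, 0.1, 20
hist = [[0.5] * m]
for _ in range(10):
    hist.append(step_l1(hist, alpha, dt, dx))
assert all(abs(r - 0.5) < 1e-12 for r in hist[-1])
```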

Now we perform the substitution

$$\begin{aligned} u(x,t):=2\rho (x,t)-1, \end{aligned}$$
(5.8)

and we thereby conclude that

$$\begin{aligned} {}^C \! D^\alpha _{0,+}u(x,t)&= 2{}^C \! D^\alpha _{0,+} \rho (x,t)\\&=2\partial _x\Big ( \rho (x,t)\,\big (1-\rho (x,t)\big )\Big )\\&=2\partial _x\left( \frac{u(x,t)+1}{2}\,\left( 1-\frac{u(x,t)+1}{2}\right) \right) \\&=2\partial _x\left( \frac{u(x,t)+1}{2}-\frac{u^2(x,t)+2u(x,t)+1}{4}\right) \\&=2\partial _x\left( \frac{1}{4}-\frac{u^2(x,t)}{4}\right) \\&=-u(x,t)\,\partial _x u(x,t), \end{aligned}$$

which corresponds to (1.2).
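The algebraic identity behind the chain of equalities above, namely \(2\,\partial _x\big (\rho (1-\rho )\big )=-u\,\partial _x u\) with \(u=2\rho -1\), can be spot-checked on an arbitrary smooth profile (here \(\rho (x)=\sin x\), an illustrative choice):

```python
import math

# Spot-check of the substitution u = 2*rho - 1 behind the computation above:
#   2 * d/dx [ rho (1 - rho) ]  =  -u * du/dx,
# on the arbitrary smooth profile rho(x) = sin(x), with analytic derivatives.
rho, drho = math.sin, math.cos

def mismatch(x):
    lhs = 2 * (1 - 2 * rho(x)) * drho(x)   # chain rule on 2*(rho - rho^2)
    u, du = 2 * rho(x) - 1, 2 * drho(x)
    return abs(lhs - (-u * du))

worst = max(mismatch(0.1 * k) for k in range(-30, 31))
assert worst < 1e-12
```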

We remark that, in the model above, one can interpret \(\rho \in {\mathbb {R}}\) also when it takes negative values, e.g., as a position vacancy. As a matter of fact, since the driving force of Eq. (5.7) can be written as \(\partial _x\rho \,(1-2\rho )\), we observe that such a drive becomes “stronger” for negative values of \(\rho \) (that is, vacancies in the job market tend to increase the number of filled positions).

It is also interesting to interpret the result in Theorem 1.2 in light of the motivation discussed here, recalling the setting in (5.8). Indeed, the value 1/2 for the working force \(\rho \) plays a special role in our framework, since it corresponds not only to the average between the null working force and the maximal one allowed by the market, but also, and most importantly, to the critical point of the concave function \(\rho (1-\rho )\), whose derivative is the driving force of Eq. (5.7).

In this spirit, recalling (1.4), we have that the solution found in Theorem 1.2 takes the form

$$\begin{aligned} \rho (x,t)=\frac{1-xv(t)}{2}, \end{aligned}$$
(5.9)

for a function v which diverges in finite time. The expression in (5.9) says that the role corresponding to the job position \(x=0\) has, at the initial time, exactly the critical working force \(\rho =1/2\). Given the linear structure in x of the solution in (5.9), the job position corresponding to \(x=0\) maintains its critical value \(\rho =1/2\) for all times, while higher level job roles experience a dramatic loss in the number of available positions (and, correspondingly, lower level job roles a dramatic increase). Though it is of course unrealistic that the job market really attains an (either positive or negative) infinite value in finite time, and the model presented in Eq. (5.7) must necessarily “break” for too large values of \(\rho \) (which, in practice, cannot exceed the total working population), we think that solutions such as (5.9) may represent a concrete case in which the market would in principle allow arbitrarily high-level job positions, but in practice (almost) all the workers end up obtaining a position level below a certain threshold (in this case normalized to \(x=0\)). This threshold constitutes a “de facto” optimal role allowed by the evolution of special preexisting conditions.
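Since the function v of Theorem 1.2 is not explicit here, the following illustration uses the classical (\(\alpha =1\)) blow-up profile \(v(t)=1/(1-t)\) as a hedged stand-in to visualize (5.9): the position \(x=0\) keeps the critical value 1/2 for all times, while \(\rho \) diverges at every \(x\ne 0\) as t approaches the blow-up time.

```python
# Illustration only: the fractional v in Theorem 1.2 is not explicit, so we
# use the classical blow-up profile v(t) = 1/(1 - t) (an assumption of this
# sketch) to evaluate the profile rho(x, t) = (1 - x v(t)) / 2 from (5.9).
v = lambda t: 1.0 / (1.0 - t)
rho = lambda x, t: (1 - x * v(t)) / 2

for t in (0.9, 0.99, 0.999):
    assert rho(0.0, t) == 0.5   # the critical level at x = 0 is preserved
assert rho(1.0, 0.999) < -400   # high positions collapse as t -> 1
assert rho(-1.0, 0.999) > 400   # low positions swell correspondingly
```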