1 Introduction

In recent years, fractional differential equations (FDEs) have received increasing attention because of their widespread applications in science and engineering [2, 3, 5, 11, 14, 15, 18, 19]. It is well known that analytic solutions of FDEs, particularly nonlinear FDEs, are difficult to derive directly. For this reason, it is important to develop reliable numerical algorithms for FDEs; see [13] and the references therein.

In this paper, we study the following initial value problem (IVP) with Caputo–Hadamard derivative:

$$\begin{aligned} \left\{ \begin{aligned}&_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)} = f(t,y),\quad {0<a<t},\\&y(a) = y_{a}, \end{aligned}\right. \end{aligned}$$
(1.1)

where \(0<\alpha <1\), \(f(t,y)\) is a nonlinear function on a domain \(\Omega \), and the initial value \(y_{a}\) is given. Here, \(_{CH}\mathrm{{D}}^{\nu }_{{a, t}}{y(t)}\) is the Caputo–Hadamard derivative of order \(\nu \) [9]:

$$\begin{aligned} _{CH}\mathrm{{D}}^{\nu }_{{a, t}}{y(t)} = \frac{1}{\Gamma (n-\nu )} \int _{a}^{t} \left( \log \frac{t}{s}\right) ^{n-\nu -1}\delta ^{n}y(s) \frac{\mathrm{d}s}{s},\quad {0<a<t}, \end{aligned}$$

with \(\delta ^{n}y(s)=\left( s\frac{\mathrm{{d}}}{\mathrm{{d}}s}\right) ^n y(s), n-1<\nu <n\in {\mathbb {Z}}^+.\)

If \(f(t,y)\) is continuous in a given domain \(\Omega \), then (1.1) has at least one solution in a subset of that domain. This is just the Peano existence theorem for the case of the Caputo–Hadamard derivative. To guarantee the uniqueness of solutions to FDEs, we often suppose for convenience that the nonlinear function \(f(t,y)\) satisfies a Lipschitz condition with respect to y, that is, \(|f(t,y_{1})-f(t,y_{2})|\le L|y_{1}-y_{2}|\) with \(L>0\) being the Lipschitz constant. For continuous \(f(t,y)\), IVP (1.1) is equivalent to the following Volterra integral equation [1]:

$$\begin{aligned} y(t)=y_{a}+\frac{1}{\Gamma (\alpha )}\int _{a}^{t} \left( \log \frac{t}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s}. \end{aligned}$$
(1.2)

Although the Hadamard fractional derivative was proposed as early as 1892 [8], studies of this derivative remain scarce apart from [2, 6, 7, 10, 16, 17]. In particular, Gohar et al. [6] studied the existence and uniqueness of the solution to the Caputo–Hadamard FDE and the corresponding continuation theorem, where the Euler and predictor–corrector methods were also investigated. Recently, the Hadamard derivative and Hadamard-type fractional differential equations have been found useful in practical problems related to mechanics and engineering, e.g., fracture analysis and both planar and three-dimensional elasticity [2]. In this paper, we first investigate smoothness properties of the solution of problem (1.1) under the assumption that the function f in equation (1.1) fulfills suitable conditions. Borrowing the ideas of [12], the fractional rectangular, \({{L}}_{\mathrm{log},1}\) interpolation, and modified predictor–corrector methods are developed for the Volterra integral equation (1.2), which is equivalent to IVP (1.1). With the help of a modified Gronwall inequality, stability and error estimates for these three methods are proved in detail. Then, we study the initial-boundary value problem with Hadamard-type derivative using the techniques presented in [20]. The Caputo–Hadamard derivative in the temporal direction is approximated by a simple method, which yields a semi-discrete scheme. The proposed semi-discrete scheme is stable in the \(H^{1}\) semi-norm and is of \(O(\tau ^{2-\alpha })\) accuracy in the \(H^{1}\) norm. A fully discrete scheme is obtained by employing the second-order central difference formula in the spatial discretization. Stability and convergence (with accuracy \(O(\tau ^{2-\alpha }+h^{2})\)) of the fully discrete scheme are shown as well.

The remainder of this paper is organized as follows. In Sect. 2, we present smoothness properties of the solution of problem (1.1) and establish numerical schemes for the Volterra integral equation (1.2) using the fractional rectangular, \({{L}}_{\mathrm{log},1}\) interpolation, and modified predictor–corrector methods. Numerical stability and error estimates of the derived schemes are analyzed. We study numerical methods for the FPDE with Caputo–Hadamard derivative in Sect. 3, along with stability and convergence results for the semi-discrete and fully discrete schemes. Numerical examples which support the theoretical analysis are provided in Sect. 4. Conclusions and remarks are included in the last section.

Throughout the paper, C denotes a generic constant that depends on the parameters \(\alpha , L, a\), and T, but is independent of the stepsizes \(\tau \) and h. It can take different values in different situations.

2 FODE with Caputo–Hadamard Derivative

In this section, we aim at investigating numerical methods for fractional ordinary differential equation (FODE) with Caputo–Hadamard derivative. Smoothness properties of the solution to equation (1.1) are first given when the nonlinear function f satisfies certain conditions. Then, by approximating the corresponding equivalent Volterra integral equation (1.2) of IVP (1.1), three kinds of numerical schemes are derived. Stability and convergence are studied as well.

2.1 Smoothness Properties

We start by discussing the relationship between Caputo–Hadamard derivative and Caputo derivative. It follows from the definitions of Caputo–Hadamard and Caputo derivatives with \(0<\alpha <1\) that:

$$\begin{aligned} _{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}= & {} \frac{1}{\Gamma (1-\alpha )} \int _{a}^{t} \left( \log \frac{t}{s}\right) ^{-\alpha }\delta y(s) \frac{\mathrm{d}s}{s} \nonumber \\= & {} \frac{1}{\Gamma (1-\alpha )} \int _{a}^{t} \left( \log \frac{t}{a}-\log \frac{s}{a}\right) ^{-\alpha }\delta y(s) \mathrm{d}\log \frac{s}{a} \nonumber \\= & {} \frac{1}{\Gamma (1-\alpha )} \int _{0}^{\log \frac{t}{a}} \left( \log \frac{t}{a}-\tau \right) ^{-\alpha }\frac{\mathrm{{d}}}{\mathrm{{d}}\tau } y(ae^{\tau }) \mathrm{d}\tau \ \ \left( \tau =\log \frac{s}{a}\right) \nonumber \\= & {} \frac{1}{\Gamma (1-\alpha )} \int _{0}^{t'} \left( t'-\tau \right) ^{-\alpha }\frac{\mathrm{{d}}}{\mathrm{{d}}\tau } y(ae^{\tau }) \mathrm{d}\tau \ \ \left( t'=\log \frac{t}{a}\right) \nonumber \\= & {} \frac{1}{\Gamma (1-\alpha )} \int _{0}^{t'} \left( t'-\tau \right) ^{-\alpha }\frac{\mathrm{{d}}}{\mathrm{{d}}\tau } {\widetilde{y}}_{a}(\tau ) \mathrm{d}\tau \ \ (y(ae^{\tau })={\widetilde{y}}_{a}(\tau ))\nonumber \\= & {} \,_{C}\mathrm{{D}}^{\alpha }_{0,t'}{{\widetilde{y}}_{a}(t')}. \end{aligned}$$
(2.1)

From the relation (2.1), it is evident that the initial value problem with Caputo–Hadamard derivative (1.1) and the Volterra integral equation (1.2) are equivalent to the initial value problem with Caputo derivative:

$$\begin{aligned} \left\{ \begin{aligned}&_{C}\mathrm{{D}}^{\alpha }_{0,t'}{{\widetilde{y}}_{a}(t')} = f({ae}^{t'},{\widetilde{y}}_{a}),\ \ t'>0,\, a>0,\\&{\widetilde{y}}_{a}(0) = y_{a}. \end{aligned}\right. \end{aligned}$$
(2.2)

It immediately follows from (2.2) that:

$$\begin{aligned} {\widetilde{y}}_{a}(t')=y_{a}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t'} \left( t'-\tau \right) ^{\alpha -1}f({ae}^{\tau },{\widetilde{y}}_{a}(\tau ))\mathrm{d}\tau . \end{aligned}$$
(2.3)

Therefore:

$$\begin{aligned} y(t)=y(a e^{\log \frac{t}{a}})={\widetilde{y}}_{a}(t') \end{aligned}$$

solves (1.1).
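As a quick numerical sanity check of relation (2.1), the following Python snippet (a minimal sketch assuming NumPy and SciPy are available; the values of a, \(\alpha \), p, and t are illustrative) evaluates the transformed Caputo integral in (2.1) for \(y(t)=\left( \log \frac{t}{a}\right) ^{p}\) by adaptive quadrature and compares it with the known closed form \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}\left( \log \frac{t}{a}\right) ^{p}=\frac{\Gamma (p+1)}{\Gamma (p+1-\alpha )}\left( \log \frac{t}{a}\right) ^{p-\alpha }\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, alpha, p, t = 1.0, 0.6, 2.0, 2.5   # illustrative sample values
tp = np.log(t / a)                     # t' = log(t/a)

# With y_tilde(tau) = y(a e^tau) = tau**p, its derivative is p * tau**(p-1).
integrand = lambda u: p * u**(p - 1)

# Caputo derivative at t' as in (2.1); the 'alg' weight of quad handles the
# integrable endpoint singularity (t' - u)^{-alpha}.
val, _ = quad(integrand, 0.0, tp, weight='alg', wvar=(0.0, -alpha))
val /= gamma(1 - alpha)

exact = gamma(p + 1) / gamma(p + 1 - alpha) * tp**(p - alpha)
print(val, exact)   # the two values agree to quadrature accuracy
```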

Then, according to Diethelm’s smoothness results [4], we immediately obtain the following analogous theorems. Following [6], there exists a \(T>a>0\) such that the uniquely determined solution y of problem (1.1) exists on the interval [a, T]. Here, the assumption that the function f is continuous and satisfies a Lipschitz condition with respect to y on the considered domain \(\Omega \) is used.

Theorem 2.1

1.

    Suppose that \(0<\alpha <1\) and \(f\in C^{2}(\Omega )\). Then, there exists a function \(\phi \in C^{1}[a,T]\), such that the solution y(t) of the IVP (1.1) can be expressed in the form:

    $$\begin{aligned} y(t)=\phi (t)+\sum _{k=1}^{\lceil 1/\alpha \rceil -1} \lambda _{k}\left( \log \frac{t}{a}\right) ^{\alpha k}, \end{aligned}$$

    where \(\lambda _{k}\in {\mathbb {R}}\,(k=1,2,\ldots ,\lceil 1/\alpha \rceil -1)\) and \(\lceil 1/\alpha \rceil \) denotes the smallest integer greater than or equal to \(1/\alpha \).

2.

    Suppose that \(0<\alpha <1\) and \(f\in C^{3}(\Omega )\). Then, there exists a function \(\varphi \in C^{2}[a,T]\), such that the solution y(t) of the IVP (1.1) can be expressed in the form:

    $$\begin{aligned} y(t)=\varphi (t)+\sum _{k=1}^{\lceil 2/\alpha \rceil -1} \lambda _{k}\left( \log \frac{t}{a}\right) ^{\alpha k} +\sum _{l=1}^{\lceil 1/\alpha \rceil -1} \mu _{l}\left( \log \frac{t}{a}\right) ^{1+\alpha l}, \end{aligned}$$

    where \(\lambda _{k}\in {\mathbb {R}}\,(k=1,2,\ldots ,\lceil 2/\alpha \rceil -1)\) and \(\mu _{l}\in {\mathbb {R}}\,(l=1,2,\ldots ,\lceil 1/\alpha \rceil -1)\).

Theorem 2.2

Assume that \(0<\alpha <n\,(n\in {\mathbb {Z}}^+)\) and \(y\in C^{n}[a, T]\). Then, there exists a certain function \(\psi \in C^{n-\lceil \alpha \rceil }[a, T]\), such that:

$$\begin{aligned} _{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}=\sum _{k=0}^{n-\lceil \alpha \rceil -1} \frac{\delta ^{(k+\lceil \alpha \rceil )}y(a)}{\Gamma (\lceil \alpha \rceil -\alpha +k+1)} \left( \log \frac{t}{a}\right) ^{\lceil \alpha \rceil -\alpha +k}+\psi (t). \end{aligned}$$

Moreover, \(\delta ^{(n-\lceil \alpha \rceil )}\psi \) fulfills a Lipschitz condition of order \(\lceil \alpha \rceil -\alpha \).

Corollary 2.1

Let \(0<\alpha <n\,(n\in {\mathbb {Z}}^+)\) and \(y\in C^{n}[a, T]\). Then, \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}\in C[a, T]\).

2.2 Numerical Schemes

For a given time \(T\,(>a>0)\) and a positive integer N, we divide the interval [a, T] as \(a=t_{0}<t_{1}<\cdots<t_{k}<t_{k+1}<\cdots <t_{N}=T\), with uniform stepsize \(\tau =t_{k+1}-t_{k},\, k=0,1,\ldots ,N-1\). In the following, for convenience, we do not distinguish between \(y_a\) and \(y_0\).

To numerically solve the IVP (1.1), we only need to approximate the integral of formula (1.2), since (1.1) is equivalent to (1.2). Denote:

$$\begin{aligned} I_{k+1}&= \int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} f(s,y(s))\frac{\mathrm{d}s}{s}\\&= \sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s}, \ \ k=0,1,\ldots ,N-1. \end{aligned}$$

Then:

$$\begin{aligned} I_{k+1}\approx \sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} {\widetilde{f}}_{j}(s,y(s))\frac{\mathrm{d}s}{s}, \end{aligned}$$

where \({\widetilde{f}}_{j}(s,y(s))\approx f(s,y(s)), \ s\in [t_{j}, t_{j+1}],\ j=0,1,\ldots ,k\). By adopting different approaches to approximate function f(sy(s)), we get three kinds of numerical schemes as follows. Let \(y_{j}\approx y(t_{j}),\, j=0,1,\ldots ,k,\ k=0,1,\ldots ,N-1\).

(1) The fractional rectangular method

Choosing \({\widetilde{f}}_{j}(s,y(s))=f(t_{j},y_{j}),\, j=0,1,\ldots ,k,\ k=0,1,\ldots ,N-1\), one obtains from (1.2):

$$\begin{aligned} y_{k+1}=y_{a}+\sum _{j=0}^{k}w_{j,k+1}f(t_{j},y_{j}), \end{aligned}$$
(2.4)

where

$$\begin{aligned} w_{j,k+1}&= \frac{1}{\Gamma (\alpha )}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \frac{\mathrm{d}s}{s}\\&= \frac{1}{\Gamma (\alpha +1)}\left[ \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha }- \left( \log \frac{t_{k+1}}{t_{j+1}}\right) ^{\alpha }\right] , \ \ j=0,1,\ldots ,k. \end{aligned}$$
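For the reader's convenience, a minimal Python sketch of scheme (2.4) on a uniform mesh follows; it assumes NumPy/SciPy and a right-hand side f that accepts array arguments, and it is meant as a reference implementation rather than an optimized one:

```python
import numpy as np
from scipy.special import gamma

def frac_rectangular(f, a, T, y_a, alpha, N):
    """Left fractional rectangular scheme (2.4) on a uniform mesh of [a, T].

    f(t, y) is the right-hand side of (1.1), assumed vectorized in (t, y);
    returns the mesh and the array (y_0, ..., y_N).
    """
    t = np.linspace(a, T, N + 1)
    log_t = np.log(t)
    y = np.empty(N + 1)
    y[0] = y_a
    for k in range(N):
        # w_{j,k+1} = [(log(t_{k+1}/t_j))^alpha
        #              - (log(t_{k+1}/t_{j+1}))^alpha] / Gamma(alpha+1)
        w = ((log_t[k + 1] - log_t[:k + 1])**alpha
             - (log_t[k + 1] - log_t[1:k + 2])**alpha) / gamma(alpha + 1)
        y[k + 1] = y_a + np.dot(w, f(t[:k + 1], y[:k + 1]))
    return t, y
```

Since the scheme is explicit, each step costs \(O(k)\) operations and the whole run \(O(N^{2})\), which is typical of history-dependent fractional schemes.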

(2) The fractional \({{L}}_{\mathrm{log},1}\) interpolation method

Choosing \({\widetilde{f}}_{j}(s,y(s))=\frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}f(t_{j},y_{j}) +\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} f(t_{j+1},y_{j+1}),\, j=0,1,\ldots ,k,\ k=0,1,\ldots ,N-1\), which can be regarded as first-order interpolation with respect to the logarithm (hence denoted the \({{L}}_{\mathrm{log},1}\) interpolation for convenience), one obtains from (1.2) that:

$$\begin{aligned} y_{k+1} = y_{a}+\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}f(t_{j},y_{j}), \end{aligned}$$
(2.5)

where

$$\begin{aligned}&{\widetilde{w}}_{j,k+1}=\frac{1}{\Gamma (\alpha +2)} \left\{ \begin{aligned}&\frac{1}{\log \frac{t_{1}}{a}}A_{0}, \ \ j=0,\\&\frac{1}{\log \frac{t_{j+1}}{t_{j}}}A_{j}+\frac{1}{\log \frac{t_{j-1}}{t_{j}}}B_{j}, \ \ j=1,2,\ldots ,k,\\&\left( \log \frac{t_{k+1}}{t_{k}}\right) ^{\alpha }, \ \ j=k+1, \end{aligned}\right. \\ A_{j}= & {} \left( \log \frac{t_{k+1}}{t_{j+1}}\right) ^{\alpha +1} -\left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha +1}\\&+(\alpha +1)\left( \log \frac{t_{j+1}}{t_{j}}\right) \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha }, \ j=0,1,\ldots ,k,\\ B_{j}= & {} \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha +1} -\left( \log \frac{t_{k+1}}{t_{j-1}}\right) ^{\alpha +1}\\&+(\alpha +1)\left( \log \frac{t_{j}}{t_{j-1}}\right) \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha }, \ j=1,2,\ldots ,k. \end{aligned}$$

In fact:

$$\begin{aligned} I_{k+1}\approx&\sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} {\widetilde{f}}_{j}(s,y(s))\frac{\mathrm{d}s}{s}\\ =&\sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left[ \frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}f(t_{j},y_{j}) +\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} f(t_{j+1},y_{j+1})\right] \frac{\mathrm{d}s}{s}\\ =&\sum _{j=0}^{k}\Bigg [\frac{f(t_{j},y_{j})}{\log \frac{t_{j}}{t_{j+1}}} \int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \log \frac{s}{t_{j+1}}\frac{\mathrm{d}s}{s}\\&+\frac{f(t_{j+1},y_{j+1})}{\log \frac{t_{j+1}}{t_{j}}} \int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \log \frac{s}{t_{j}}\frac{\mathrm{d}s}{s}\Bigg ]\\ =&\sum _{j=0}^{k}\Bigg [\frac{f(t_{j},y_{j})}{\log \frac{t_{j}}{t_{j+1}}} \left( -\frac{1}{\alpha }\right) \left( \log \frac{s}{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha }\Big |_{t_{j}}^{t_{j+1}} -\int _{t_{j}}^{t_{j+1}}\left( \log \frac{t_{k+1}}{s}\right) ^{\alpha } \frac{\mathrm{d}s}{s}\right) \\&+\frac{f(t_{j+1},y_{j+1})}{\log \frac{t_{j+1}}{t_{j}}}\left( -\frac{1}{\alpha }\right) \left( \log \frac{s}{t_{j}}\left( \log \frac{t_{k+1}}{s}\right) ^{\alpha }\Big |_{t_{j}}^{t_{j+1}} -\int _{t_{j}}^{t_{j+1}}\left( \log \frac{t_{k+1}}{s}\right) ^{\alpha } \frac{\mathrm{d}s}{s}\right) \Bigg ]\\ =&\frac{1}{\alpha (\alpha +1)}\frac{f(t_{0},y_{0})}{\log \frac{t_{1}}{a}} \Bigg [\left( \log \frac{t_{k+1}}{t_{1}}\right) ^{\alpha +1} -\left( \log \frac{t_{k+1}}{a}\right) ^{\alpha +1}\\&+(\alpha +1)\log \frac{t_{1}}{a}\left( \log \frac{t_{k+1}}{a}\right) ^{\alpha }\Bigg ]\\&+\frac{1}{\alpha (\alpha +1)}\sum _{j=1}^{k}\frac{f(t_{j},y_{j})}{\log \frac{t_{j+1}}{t_{j}}} \Bigg [\left( \log \frac{t_{k+1}}{t_{j+1}}\right) ^{\alpha +1} -\left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha +1}\\&+(\alpha +1)\log \frac{t_{j+1}}{t_{j}} \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha }\Bigg ]\\&+\frac{1}{\alpha (\alpha +1)}\sum _{j=1}^{k}\frac{f(t_{j},y_{j})}{\log \frac{t_{j-1}}{t_{j}}} \Bigg [\left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha +1} -\left( \log \frac{t_{k+1}}{t_{j-1}}\right) ^{\alpha +1}\\&+(\alpha +1)\log \frac{t_{j}}{t_{j-1}}\left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha }\Bigg ] +\frac{1}{\alpha (\alpha +1)}\left( \log \frac{t_{k+1}}{t_{k}}\right) ^{\alpha } f(t_{k+1},y_{k+1}). \end{aligned}$$
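The weights of (2.5) can be assembled directly from \(A_{j}\) and \(B_{j}\). A minimal Python sketch follows (note that \(\log \frac{t_{j-1}}{t_{j}}=-\log \frac{t_{j}}{t_{j-1}}\), which produces the minus sign below); because \(y_{k+1}\) also appears on the right-hand side of (2.5), each step of that scheme additionally requires a nonlinear solve, e.g., a few fixed-point iterations:

```python
import numpy as np
from scipy.special import gamma

def llog1_weights(log_t, k, alpha):
    """Weights w~_{j,k+1}, j = 0, ..., k+1, of the L_{log,1} scheme (2.5).

    log_t[j] = log(t_j); a sketch valid on an arbitrary (not necessarily
    uniform) mesh a = t_0 < t_1 < ... .
    """
    Lk1 = log_t[k + 1]
    c = 1.0 / gamma(alpha + 2)

    def A(j):  # A_j as defined below (2.5)
        return ((Lk1 - log_t[j + 1])**(alpha + 1) - (Lk1 - log_t[j])**(alpha + 1)
                + (alpha + 1) * (log_t[j + 1] - log_t[j]) * (Lk1 - log_t[j])**alpha)

    def B(j):  # B_j as defined below (2.5)
        return ((Lk1 - log_t[j])**(alpha + 1) - (Lk1 - log_t[j - 1])**(alpha + 1)
                + (alpha + 1) * (log_t[j] - log_t[j - 1]) * (Lk1 - log_t[j])**alpha)

    w = np.empty(k + 2)
    w[0] = c * A(0) / (log_t[1] - log_t[0])
    for j in range(1, k + 1):
        w[j] = c * (A(j) / (log_t[j + 1] - log_t[j])
                    - B(j) / (log_t[j] - log_t[j - 1]))
    w[k + 1] = c * (Lk1 - log_t[k])**alpha
    return w
```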

Remark 2.1

Scheme (2.5) is different from that in [6]. In [6], f(sy(s)) in subinterval \([t_j,t_{j+1}]\) is approximated by \({\widetilde{f}}_{j}(s,y(s)) =\frac{1}{2}(f(t_{j},y_{j})+f(t_{j+1},y_{j+1})).\) The former has higher accuracy. Besides, it is not difficult to verify that the coefficients \(w_{j,k+1}\) and \({\widetilde{w}}_{j,k+1}\) in (2.4) and (2.5) satisfy the following properties:

$$\begin{aligned} w_{j,k+1}>0,\ \ j=0,1,\ldots ,k,\ k=0,1,\ldots ,N-1,\\ {\widetilde{w}}_{j,k+1}>0,\ \ j=0,1,\ldots ,k+1,\ k=0,1,\ldots ,N-1. \end{aligned}$$

(3) The modified predictor–corrector method

Based on the left fractional rectangular scheme (2.4) and the \({{L}}_{\mathrm{log},1}\) interpolation scheme (2.5), the modified predictor–corrector scheme is given by:

$$\begin{aligned} \left\{ \begin{aligned}&y_{k+1}^{P}=y_{a}+\sum _{j=0}^{k}w_{j,k+1}f(t_{j},y_{j}),\\&y_{k+1}=y_{a}+\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}f(t_{j},y_{j}) +{\widetilde{w}}_{k+1,k+1}f(t_{k+1},y_{k+1}^{P}), \end{aligned}\right. \end{aligned}$$
(2.6)

with \(k=0,1,\ldots ,N-1\).
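A sketch of one way to realize (2.6) in Python is given below; it reuses the helper llog1_weights from the previous sketch and again assumes a vectorized right-hand side f. Since the corrector evaluates f at the predicted value \(y_{k+1}^{P}\) instead of solving for \(y_{k+1}\), the scheme remains fully explicit:

```python
import numpy as np
from scipy.special import gamma

def predictor_corrector(f, a, T, y_a, alpha, N):
    """Modified predictor-corrector scheme (2.6) on a uniform mesh of [a, T];
    reuses llog1_weights from the previous sketch."""
    t = np.linspace(a, T, N + 1)
    log_t = np.log(t)
    y = np.empty(N + 1)
    y[0] = y_a
    for k in range(N):
        # rectangular weights w_{j,k+1} for the predictor, as in (2.4)
        w = ((log_t[k + 1] - log_t[:k + 1])**alpha
             - (log_t[k + 1] - log_t[1:k + 2])**alpha) / gamma(alpha + 1)
        wt = llog1_weights(log_t, k, alpha)
        F = f(t[:k + 1], y[:k + 1])
        y_pred = y_a + np.dot(w, F)                       # predictor step
        y[k + 1] = (y_a + np.dot(wt[:k + 1], F)
                    + wt[k + 1] * f(t[k + 1], y_pred))    # corrector step
    return t, y
```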

2.3 Several Lemmas

In the following, we present some properties of the coefficients in (2.4) and (2.5) along with several useful lemmas.

Lemma 2.1

Suppose \(0<\alpha <1\), \(k=0,1,\ldots ,N-1\), and N is a positive integer. Then, the coefficients in (2.4) and (2.5) satisfy the following inequalities:

$$\begin{aligned} w_{j,k+1}\le C\log \frac{t_{j+1}}{t_{j}} \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha -1},\ \ j=0,1,\ldots ,k, \end{aligned}$$
(2.7)

and

$$\begin{aligned} {\widetilde{w}}_{j,k+1}\le C \left\{ \begin{aligned}&\log \frac{t_{1}}{a} \left( \log \frac{t_{k+1}}{a}\right) ^{\alpha -1},\ \ j=0,\\&\log \frac{t_{j+1}}{t_{j}} \left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha -1}\\&+\log \frac{t_{j}}{t_{j-1}} \left( \log \frac{t_{k+1}}{t_{j-1}}\right) ^{\alpha -1},\ \ j=1,2,\ldots ,k,\\&\log \frac{t_{k+1}}{t_{k}}\left( \log \frac{t_{k+1}}{t_{k}}\right) ^{\alpha -1},\ \ j=k+1, \end{aligned}\right. \end{aligned}$$
(2.8)

where C is a constant depending on \(\alpha , a,\) and T.

For the proof of this lemma, we refer the reader to that of Lemma 4.1 in [6]. Next, we present the following modified Gronwall inequality (Lemma 4.3 in [6]), which is crucial to the proofs of stability and the error estimates for the numerical methods derived above.

Lemma 2.2

Assume that \(0<\alpha <1,\,T>a>0\), N is a positive integer, and \(b_{j, k} = \left( \log \frac{t_{k}}{t_{j}}\right) ^{\alpha -1}\log \frac{t_{j+1}}{t_{j}}\ ( j=0,1,\ldots ,k-1,\ k=1,2,\ldots ,N )\) with \(a=t_{0}<t_{1}<\cdots<t_{k}<\cdots <t_{N}=T\). Suppose \(\rho _{0}\) is positive and the positive sequence \(\{\sigma _{k}\}\) satisfies:

$$\begin{aligned} \left\{ \begin{aligned}&\sigma _{0}\le \rho _{0},\\&\sigma _{k}\le \sum _{j=0}^{k-1}b_{j, k}\sigma _{j}+\rho _{0}. \end{aligned}\right. \end{aligned}$$
(2.9)

Then:

$$\begin{aligned} \sigma _{k}\le C\rho _{0}, \ \ k=1, 2, \ldots , N, \end{aligned}$$
(2.10)

where C is a positive constant independent of k.

Lemma 2.3

If \(g(t)\in C^{1}[a,T]\), then:

$$\begin{aligned} \left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}g(s)\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k}w_{j,k+1}g(t_{j})\right| \le \frac{C||\delta g||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{T}{a}\right) ^{\alpha }\tau . \end{aligned}$$

Proof

By the mean value theorem, there exists \(\xi _{j}\in (t_{j}, t_{j+1})\), such that:

$$\begin{aligned}&\left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}g(s)\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k}w_{j,k+1}g(t_{j})\right| \\&\quad = \frac{1}{\Gamma (\alpha )} \left| \sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} (g(s)-g(t_{j}))\frac{\mathrm{d}s}{s}\right| \\&\quad \le \frac{1}{\Gamma (\alpha )}\sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left| \log \frac{s}{t_{j}}\cdot \frac{g(s)-g(t_{j})}{\log \frac{s}{t_{j}}}\right| \frac{\mathrm{d}s}{s}\\&\quad = \frac{1}{\Gamma (\alpha )}\sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left| \log \frac{s}{t_{j}}\cdot \delta g(\xi _{j})\right| \frac{\mathrm{d}s}{s}\\&\quad \le \frac{||\delta g||_{\infty }}{\Gamma (\alpha )}\sum _{j=0}^{k}\log \frac{t_{j+1}}{t_{j}} \int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}\frac{\mathrm{d}s}{s}\\&\quad \le \frac{||\delta g||_{\infty }}{\Gamma (\alpha )}\max _{0\le l\le N-1}\log \frac{t_{l+1}}{t_{l}} \int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}\frac{\mathrm{d}s}{s}\\&\quad = \frac{||\delta g||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{t_{k+1}}{a}\right) ^{\alpha } \log \frac{t_{1}}{a}\\&\quad \le \frac{C||\delta g||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{T}{a}\right) ^{\alpha }\tau . \end{aligned}$$

The proof is thus complete. \(\square \)

Lemma 2.4

If \(g(t)\in C^{2}[a,T]\), then:

$$\begin{aligned} \left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}g(s)\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}g(t_{j})\right| \le \frac{C||\delta ^{2} g||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{T}{a}\right) ^{\alpha }\tau ^{2}. \end{aligned}$$

Proof

For \(s\in (t_{j}, t_{j+1})\), there holds:

$$\begin{aligned}&g(s)-\frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}g(t_{j}) -\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}g(t_{j+1})\\&\quad = \frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}(g(s)-g(t_{j})) +\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}(g(s)-g(t_{j+1}))\\&\quad = \frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}\log \frac{s}{t_{j}}\cdot \delta g(\eta _{j}) +\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\log \frac{s}{t_{j+1}}\cdot \delta g(\zeta _{j})\\&\quad = \frac{\log \frac{s}{t_{j+1}}\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \log \frac{\zeta _{j}}{\eta _{j}}\cdot \delta ^{2} g(\xi _{j}), \end{aligned}$$

where \(t_{j}<\eta _{j}<\xi _{j}<\zeta _{j}<t_{j+1}\).

As a result, one obtains:

$$\begin{aligned}&\left| \frac{1}{\Gamma (\alpha )} \int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}g(s) \frac{\mathrm{d}s}{s}-\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}g(t_{j})\right| \\&\quad =\frac{1}{\Gamma (\alpha )} \left| \sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left( g(s)-\frac{\log \frac{s}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}g(t_{j}) -\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}g(t_{j+1})\right) \frac{\mathrm{d}s}{s}\right| \\&\quad \le \frac{1}{\Gamma (\alpha )} \sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left| \frac{\log \frac{s}{t_{j+1}}\log \frac{s}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\log \frac{\zeta _{j}}{\eta _{j}} \cdot \delta ^{2}g(\xi _{j})\right| \frac{\mathrm{d}s}{s}\\&\quad \le \frac{1}{\Gamma (\alpha )}\sum _{j=0}^{k}\int _{t_{j}}^{t_{j+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1} \left| \log \frac{s}{t_{j+1}}\log \frac{s}{t_{j}}\cdot \delta ^{2} g(\xi _{j})\right| \frac{\mathrm{d}s}{s}\\&\quad \le \frac{||\delta ^{2} g||_{\infty }}{\Gamma (\alpha )} \left( \max _{0\le l\le N-1}\log \frac{t_{l+1}}{t_{l}}\right) ^{2} \int _{a}^{t_{k+1}}\left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}\frac{\mathrm{d}s}{s}\\&\quad \le \frac{C||\delta ^{2} g||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{T}{a}\right) ^{\alpha }\tau ^{2}. \end{aligned}$$

This completes the proof. \(\square \)

2.4 Stability Analysis

In this subsection, we establish the stability of the proposed numerical schemes.

Theorem 2.3

Suppose that \(y_{j}\ (j=1,2,\ldots ,k)\) are the solutions of scheme (2.4) and that \(f(t,y)\) is Lipschitz continuous with respect to y with Lipschitz constant L. Then, the fractional rectangular scheme (2.4) is stable.

The proof of this theorem is similar to that in [6] and is left to the interested reader.

Theorem 2.4

Suppose that \(y_{j}\ (j=1,2,\ldots ,k)\) are the solutions of scheme (2.5) and that \(f(t,y)\) satisfies the Lipschitz condition with respect to the second argument y with Lipschitz constant L. Then, the \({{L}}_{\mathrm{log},1}\) interpolation scheme (2.5) is stable.

Proof

Let \({\widetilde{y}}_{a}\) and \({\widetilde{y}}_{j}\ (j=1,\ldots ,k+1)\) be the perturbations of \(y_{a}\) and \(y_{j}\), respectively. Then, the perturbation equation is:

$$\begin{aligned} {\widetilde{y}}_{k+1}={\widetilde{y}}_{a}+\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y_{j}+{\widetilde{y}}_{j})-f(t_{j},y_{j})\right) . \end{aligned}$$

Thus:

$$\begin{aligned} |{\widetilde{y}}_{k+1}|&= \left| {\widetilde{y}}_{a}+\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y_{j}+{\widetilde{y}}_{j})-f(t_{j},y_{j})\right) \right| \\&\le |{\widetilde{y}}_{a}|+L\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}|{\widetilde{y}}_{j}|, \end{aligned}$$

that is:

$$\begin{aligned} (1-L\cdot {\widetilde{w}}_{k+1,k+1})|{\widetilde{y}}_{k+1}| \le |{\widetilde{y}}_{a}|+L\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}|{\widetilde{y}}_{j}|. \end{aligned}$$

For any given positive constant \(\epsilon \in (0,1)\), the inequality \(1-L\cdot {\widetilde{w}}_{k+1,k+1}\ge \epsilon \) holds as long as we choose \(\tau \le a\left[ e^{\left( \frac{\Gamma (2+\alpha )(1-\epsilon )}{L}\right) ^{1/\alpha }}-1\right] \). For this choice, it follows from Lemmas 2.1 and 2.2 that:

$$\begin{aligned} |{\widetilde{y}}_{k+1}|\le C|{\widetilde{y}}_{a}|. \end{aligned}$$

Hence, the proof is completed. \(\square \)

Theorem 2.5

Suppose that \(y_{j}\ (j=1,2,\ldots ,k)\) are the solutions of scheme (2.6) and that \(f(t,y)\) satisfies the Lipschitz condition with respect to the second argument y with Lipschitz constant L. Then, the modified predictor–corrector scheme (2.6) is stable.

Proof

Let \({\widetilde{y}}_{a}\), \({\widetilde{y}}_{j}\ (j=1,\ldots ,k+1)\), and \({\widetilde{y}}_{k+1}^{P}\ (k=0,1,\ldots ,N-1)\) be the perturbations of \(y_{a}\), \(y_{j}\), and \(y_{k+1}^{P}\), respectively. Then:

$$\begin{aligned} \left\{ \begin{aligned}&{\widetilde{y}}_{k+1}^{P}={\widetilde{y}}_{a}+\sum _{j=0}^{k}w_{j,k+1} \left( f(t_{j},y_{j}+{\widetilde{y}}_{j})-f(t_{j},y_{j})\right) ,\\&{\widetilde{y}}_{k+1}={\widetilde{y}}_{a}+\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y_{j}+{\widetilde{y}}_{j})-f(t_{j},y_{j})\right) \\&\quad +{\widetilde{w}}_{k+1,k+1} \left( f(t_{k+1},y_{k+1}^{P}+{\widetilde{y}}_{k+1}^{P})-f(t_{k+1},y_{k+1}^{P})\right) . \end{aligned}\right. \end{aligned}$$

Noticing that

$$\begin{aligned} {\widetilde{w}}_{k+1,k+1}\le \frac{1}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }, \end{aligned}$$

there holds:

$$\begin{aligned} |{\widetilde{y}}_{k+1}| =&\Big |{\widetilde{y}}_{a}+\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y_{j}+{\widetilde{y}}_{j})-f(t_{j},y_{j})\right) \\&+{\widetilde{w}}_{k+1,k+1}\left( f(t_{k+1},y_{k+1}^{P} +{\widetilde{y}}_{k+1}^{P})-f(t_{k+1},y_{k+1}^{P})\right) \Big |\\ \le&|{\widetilde{y}}_{a}|+L\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}|{\widetilde{y}}_{j}| +L{\widetilde{w}}_{k+1,k+1}|{\widetilde{y}}_{k+1}^{P}|\\ \le&|{\widetilde{y}}_{a}|+L\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}|{\widetilde{y}}_{j}| +\frac{L}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }|{\widetilde{y}}_{k+1}^{P}|\\ \le&\left( 1+\frac{L}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }\right) |{\widetilde{y}}_{a}|\\&+L\sum _{j=0}^{k}\left( {\widetilde{w}}_{j,k+1}+\frac{L}{\Gamma (\alpha +2)} \left( \log \frac{T}{a}\right) ^{\alpha }w_{j,k+1}\right) |{\widetilde{y}}_{j}|\\ \le&\left( 1+\frac{L}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }\right) |{\widetilde{y}}_{a}|\\&+L C_{1}\left( 2+\frac{L}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }\right) \sum _{j=0}^{k} \log \frac{t_{j+1}}{t_{j}}\left( \log \frac{t_{k+1}}{t_{j}}\right) ^{\alpha -1}|{\widetilde{y}}_{j}|\\ \le&C_{2}\left( 1+\frac{L}{\Gamma (\alpha +2)}\left( \log \frac{T}{a}\right) ^{\alpha }\right) |{\widetilde{y}}_{a}|\\ \le&C|{\widetilde{y}}_{a}|, \end{aligned}$$

where Lemmas 2.1 and 2.2 are utilized. This completes the proof. \(\square \)

2.5 Error Analysis

Using Lemmas 2.3 and 2.4, it is easy to get Theorems 2.6 and 2.7.

Theorem 2.6

Assume that \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}\in C^{1}[a,T]\). Then, the left rectangular scheme (2.4) for equation (1.2) has the following estimate:

$$\begin{aligned} |y_{k+1}-y(t_{k+1})|\le C\tau , \quad k=0,1,\ldots ,N-1. \end{aligned}$$

Theorem 2.7

Assume that \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}\in C^{2}[a,T]\). Then, the \({{L}}_{\mathrm{log},1}\) interpolation scheme (2.5) for equation (1.2) has the following estimate:

$$\begin{aligned} |y_{k+1}-y(t_{k+1})|\le C\tau ^{2}, \quad \ k=0,1,\ldots ,N-1. \end{aligned}$$

Now, we study error estimate of the modified predictor–corrector scheme (2.6).

Theorem 2.8

Assume that \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}y(t)\in C^{2}[a,T]\). Then, the modified predictor–corrector scheme (2.6) for equation (1.2) has the following estimate:

$$\begin{aligned} |y_{k+1}-y(t_{k+1})|\le C\tau ^{1+\alpha }, \quad \ k=0,1,\ldots ,N-1. \end{aligned}$$

Proof

Since \(f(t,y)=\) \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}{y(t)}\in C^{2}[a,T]\) is bounded, there exists a constant \(M>0\), such that \(|\delta f|\le M\) and \(|\delta ^{2} f|\le M\). Denote \(e_{k+1}=|y(t_{k+1})-y_{k+1}|, \ k=0,1,\ldots ,N-1\), then:

$$\begin{aligned} e_{k+1}=&\Bigg |\frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s}\\&- \left( \sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}f(t_{j},y_{j}) +{\widetilde{w}}_{k+1,k+1}f(t_{k+1},y_{k+1}^{P})\right) \Bigg |\\ \le&\left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}f(t_{j},y(t_{j}))\right| \\&+ \left| \sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}f(t_{j},y(t_{j})) -\left( \sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}f(t_{j},y_{j}) +{\widetilde{w}}_{k+1,k+1}f(t_{k+1},y_{k+1}^{P})\right) \right| \\ \le&\left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k+1}{\widetilde{w}}_{j,k+1}f(t_{j},y(t_{j}))\right| \\&+ \left| \sum _{j=0}^{k}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y(t_{j}))-f(t_{j},y_{j})\right) \right| \\&+ \left| {\widetilde{w}}_{k+1,k+1}\left( f(t_{k+1},y(t_{k+1})) -f(t_{k+1},y_{k+1}^{P})\right) \right| \\ =&I_{1}+I_{2}+I_{3}. \end{aligned}$$

By Lemma 2.4, one gets:

$$\begin{aligned} I_{1}\le C_{1}\tau ^{2}. \end{aligned}$$

For \(I_{2}\), by virtue of the Lipschitz condition of f, it is evident that:

$$\begin{aligned} I_{2}&= \left| \sum _{j=0}^{k}{\widetilde{w}}_{j,k+1} \left( f(t_{j},y(t_{j}))-f(t_{j},y_{j})\right) \right| \\&\le \sum _{j=0}^{k}{\widetilde{w}}_{j,k+1} \left| f(t_{j},y(t_{j}))-f(t_{j},y_{j})\right| \\&\le L\sum _{j=0}^{k}{\widetilde{w}}_{j,k+1}e_{j}. \end{aligned}$$

For \(I_{3}\), since:

$$\begin{aligned} {\widetilde{w}}_{k+1,k+1}=\frac{1}{\Gamma (\alpha +2)} \left( \log \frac{t_{k+1}}{t_{k}}\right) ^{\alpha } \le \frac{C_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)}, \end{aligned}$$

then

$$\begin{aligned} I_{3}=&\left| {\widetilde{w}}_{k+1,k+1} \left( f(t_{k+1},y(t_{k+1})) -f(t_{k+1},y_{k+1}^{P})\right) \right| \\ \le&\frac{C_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \left| f(t_{k+1},y(t_{k+1}))-f(t_{k+1},y_{k+1}^{P})\right| \\ \le&\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \left| y(t_{k+1})-y_{k+1}^{P}\right| \\ =&\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k}w_{j,k+1}f(t_{j},y_{j})\right| \\ \le&\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \left| \frac{1}{\Gamma (\alpha )}\int _{a}^{t_{k+1}} \left( \log \frac{t_{k+1}}{s}\right) ^{\alpha -1}f(s,y(s))\frac{\mathrm{d}s}{s} -\sum _{j=0}^{k}w_{j,k+1}f(t_{j},y(t_{j}))\right| \\&+\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \left| \sum _{j=0}^{k}w_{j,k+1}f(t_{j},y(t_{j})) -\sum _{j=0}^{k}w_{j,k+1}f(t_{j},y_{j})\right| \\ \le&\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \frac{C_{3}||\delta f||_{\infty }}{\Gamma (\alpha +1)} \left( \log \frac{T}{a}\right) ^{\alpha } \tau +\frac{LC_{2}\tau ^{\alpha }}{\Gamma (\alpha +2)} \sum _{j=0}^{k}w_{j,k+1}|f(t_{j},y(t_{j}))-f(t_{j},y_{j})| \\ \le&C_{4}\tau ^{1+\alpha } +\frac{L^{2}C_{2}T^{\alpha }}{\Gamma (\alpha +2)}\sum _{j=0}^{k}w_{j,k+1}e_{j}, \end{aligned}$$

where Lemma 2.3 is utilized and \(C_{4}=\frac{LM C_{2}C_{3}}{\Gamma (\alpha +1) \Gamma (\alpha +2)} \left( \log \frac{T}{a}\right) ^{\alpha }\).

Consequently:

$$\begin{aligned} e_{k+1}&\le I_{1}+I_{2}+I_{3} \\&\le C_{1} \tau ^{2}+C_{4}\tau ^{1+\alpha }+L\sum _{j=0}^{k} \left( {\widetilde{w}}_{j,k+1}+ \frac{C_{2}LT^{\alpha }}{\Gamma (\alpha +2)}w_{j,k+1}\right) e_{j} \\&\le C\tau ^{1+\alpha }. \end{aligned}$$

where Lemma 2.2 is applied in the last step. The proof is then finished.

\(\square \)

3 Extension to FPDE with Caputo–Hadamard Derivative

In this section, we propose a numerical method for fractional partial differential equation (FPDE) with Caputo–Hadamard derivative. We get a discrete scheme of Caputo–Hadamard derivative by linear interpolation, and then show the stability and convergence of a semi-discrete scheme. Finally, by applying the second-order central difference in spatial direction, we obtain the fully discrete scheme along with its stability and convergence analysis.

Consider the following initial-boundary value problem with \(0<\alpha <1\):

$$\begin{aligned} \left\{ \begin{aligned}&_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}u(x,t)-\Delta u(x,t)=f(x,t), \ \ a<t\le T, \ 0<x<1, \\&u(x,a)=u_{a}(x),\ \ 0< x< 1, \\&u(0,t)=u(1,t)=0,\ \ a<t\le T. \end{aligned}\right. \end{aligned}$$
(3.1)

3.1 Numerical Approximation of Caputo–Hadamard Derivative

We first approximate the Caputo–Hadamard derivative. For a given positive number \(T>a\), we divide the interval [a, T] into N subintervals with \(a=t_{0}<t_{1}<\cdots<t_{k-1}<t_{k}<\cdots <t_{N}=T\) and uniform stepsize \(\tau =t_{k}-t_{k-1}, 1\le k\le N\).

At \(t=t_{k}, 1\le k\le N\), there holds:

$$\begin{aligned}&_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}g(t)\Big |_{t=t_{k}} =\frac{1}{\Gamma (1-\alpha )} \int _{a}^{t_{k}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha }\delta g(s) \frac{\mathrm{d}s}{s} \\&\quad =\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha }\delta g(s)\frac{\mathrm{d}s}{s} \\&\quad = \frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k}\frac{g(t_{j}) -g(t_{j-1})}{\log \frac{t_{j}}{t_{j-1}}}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha }\frac{\mathrm{d}s}{s}+R^{k} \\&\quad = \sum _{j=1}^{k}\frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_{j}}{t_{j-1}}} \left( \left( \log \frac{t_{k}}{t_{j-1}}\right) ^{1-\alpha } -\left( \log \frac{t_{k}}{t_{j}}\right) ^{1-\alpha }\right) \left( g(t_{j})-g(t_{j-1})\right) +R^{k} \\&\quad = \sum _{j=1}^{k}a_{j,k}\left( g(t_{j})-g(t_{j-1})\right) +R^{k}, \end{aligned}$$
(3.2)

where

$$\begin{aligned} a_{j,k}=\frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_{j}}{t_{j-1}}} \left( \left( \log \frac{t_{k}}{t_{j-1}}\right) ^{1-\alpha } -\left( \log \frac{t_{k}}{t_{j}}\right) ^{1-\alpha }\right) , \end{aligned}$$
(3.3)

and

$$\begin{aligned} R^{k}=\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{j})-g(t_{j-1})}{\log \frac{t_{j}}{t_{j-1}}}\right) \frac{\mathrm{d}s}{s}. \end{aligned}$$
(3.4)

Remark 3.1

Formula (3.2) can be rewritten as the following convolutional form:

$$\begin{aligned} _{CH}\mathrm{{D}}^{\alpha }_{{a, t}}g(t)\Big |_{t=t_{k}} =&\sum _{j=1}^{k}a_{j,k}(g(t_{j})-g(t_{j-1}))+R^{k}, \\ =&a_{k,k}g(t_{k})+\sum _{j=1}^{k-1}(a_{j,k}-a_{j+1,k}) g(t_{j})-a_{1,k}g(t_{0})+R^{k} \\ =&b_{0,k}g(t_{k})+\sum _{j=1}^{k-1}b_{k-j,k}g(t_{j})+b_{k,k}g(t_{0})+R^{k} \\ =&\sum _{j=0}^{k}b_{k-j,k}g(t_{j})+R^{k}, \end{aligned}$$

where

$$\begin{aligned} b_{k-j,k}= \left\{ \begin{aligned}&-a_{1,k},\ \ j=0,\\&a_{j,k}-a_{j+1,k},\ \ j=1,2,\ldots ,k-1, \\&a_{k,k},\ \ j=k. \end{aligned}\right. \end{aligned}$$

This can be regarded as the L1 scheme for the Caputo–Hadamard derivative.
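For illustration, a short Python sketch of the L1 approximation (3.2) on an arbitrary mesh is given below, with the coefficients \(a_{j,k}\) computed from (3.3); g_vals is assumed to hold the nodal values \(g(t_{j})\):

```python
import numpy as np
from scipy.special import gamma

def ch_l1_derivative(g_vals, t, alpha, k):
    """L1 approximation (3.2) of the Caputo-Hadamard derivative at t = t_k.

    g_vals[j] = g(t_j) on a mesh a = t_0 < t_1 < ... ; the truncation
    error R^k is dropped.
    """
    log_t = np.log(t)
    j = np.arange(1, k + 1)
    # a_{j,k} from (3.3)
    a_jk = ((log_t[k] - log_t[j - 1])**(1 - alpha)
            - (log_t[k] - log_t[j])**(1 - alpha)) \
        / (gamma(2 - alpha) * (log_t[j] - log_t[j - 1]))
    return np.dot(a_jk, g_vals[1:k + 1] - g_vals[0:k])
```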

Lemma 3.1

For \(0<\alpha <1\), the coefficients \(a_{j,k}\ (1\le j\le k, 1\le k\le N)\) in (3.3) satisfy:

$$\begin{aligned} a_{k,k}>a_{k-1,k}>\cdots>a_{j,k}>a_{j-1,k}>\cdots>a_{1,k}>0. \end{aligned}$$

Proof

According to the mean value theorem, for \(j=1,2,\ldots ,k:\)

$$\begin{aligned} a_{j,k}&=\frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_{j}}{t_{j-1}}} \left( \left( \log \frac{t_{k}}{t_{j-1}}\right) ^{1-\alpha } -\left( \log \frac{t_{k}}{t_{j}}\right) ^{1-\alpha }\right) \\&=\frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_{j}}{t_{j-1}}} (1-\alpha )\xi _{j}^{-\alpha }\log \frac{t_{j}}{t_{j-1}} \\&=\frac{1}{\Gamma (1-\alpha )}\xi _{j}^{-\alpha }>0. \end{aligned}$$

Since \(\log \frac{t_{k}}{t_{j}}<\xi _{j}<\log \frac{t_{k}}{t_{j-1}}<\xi _{j-1}<\log \frac{t_{k}}{t_{j-2}}\) and \(x^{-\alpha }\) is monotonically decreasing, we have \(a_{j,k}>a_{j-1,k}\). This completes the proof. \(\square \)

Lemma 3.2

If \(0<\alpha <1\) and \(g(t)\in C^{2}[a,T]\), then the local truncation error \(R^{k}\ (1\le k\le N)\) in (3.4) has the following estimate:

$$\begin{aligned} |R^{k}|\le&\left( \frac{1}{\Gamma (2-\alpha )}\left( \log \frac{t_{k}}{t_{k-1}}\right) ^{2} +\frac{1}{\Gamma (1-\alpha )}\max _{1\le l\le N} \left( \log \frac{t_{l}}{t_{l-1}}\right) ^{2}\right) \\&\times \left( \log \frac{t_{k}}{t_{k-1}}\right) ^{-\alpha } \max _{a\le t\le t_{k}}|\delta ^{2}g(t)|. \end{aligned}$$

Proof

According to (3.4), one has:

$$\begin{aligned} R^{k}=&\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{j})-g(t_{j-1})}{\log \frac{t_{j}}{t_{j-1}}}\right) \frac{\mathrm{d}s}{s} \\ =&\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k-1}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{j})-g(t_{j-1})}{\log \frac{t_{j}}{t_{j-1}}}\right) \frac{\mathrm{d}s}{s} \\&+\frac{1}{\Gamma (1-\alpha )}\int _{t_{k-1}}^{t_{k}}\left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{k})-g(t_{k-1})}{\log \frac{t_{k}}{t_{k-1}}}\right) \frac{\mathrm{d}s}{s} \\ =&R_{1}^{k}+R_{2}^{k}. \end{aligned}$$

On one hand:

$$\begin{aligned} R_{1}^{k}=&\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k-1}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{j})-g(t_{j-1})}{\log \frac{t_{j}}{t_{j-1}}}\right) \frac{\mathrm{d}s}{s} \\ =&\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k-1}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \mathrm{d}\left( g(s)-\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j-1}}{t_{j}}}g(t_{j-1}) -\frac{\log \frac{s}{t_{j-1}}}{\log \frac{t_{j}}{t_{j-1}}}g(t_{j})\right) \\ =&\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{k-1}\left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( g(s)-\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j-1}}{t_{j}}}g(t_{j-1}) -\frac{\log \frac{s}{t_{j-1}}}{\log \frac{t_{j}}{t_{j-1}}}g(t_{j})\right) \Bigg |_{t_{j-1}}^{t_{j}} \\&-\frac{\alpha }{\Gamma (1-\alpha )}\sum _{j=1}^{k-1}\int _{t_{j-1}}^{t_{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha -1} \left( g(s)-\frac{\log \frac{s}{t_{j}}}{\log \frac{t_{j-1}}{t_{j}}}g(t_{j-1}) -\frac{\log \frac{s}{t_{j-1}}}{\log \frac{t_{j}}{t_{j-1}}}g(t_{j})\right) \frac{\mathrm{d}s}{s} \\ =&\frac{\alpha }{\Gamma (1-\alpha )}\sum _{j=1}^{k-1} \int _{t_{j-1}}^{t_{j}}\delta ^{2}g(\xi _{j})\frac{\log \frac{t_{j}}{s} \log \frac{s}{t_{j-1}}}{\log \frac{t_{j}}{t_{j-1}}}\log \frac{\zeta _{j}}{\eta _{j}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha -1}\frac{\mathrm{d}s}{s}, \end{aligned}$$

where \(t_{j-1}<\eta _{j}<\xi _{j}<\zeta _{j}<t_{j}\). Hence:

$$\begin{aligned} |R_{1}^{k}|&\le \frac{\alpha }{\Gamma (1-\alpha )}\max _{a\le t\le t_{k-1}} |\delta ^{2}g(t)|\sum _{j=1}^{k-1} \left( \log \frac{t_{j}}{t_{j-1}}\right) ^{2} \int _{t_{j-1}}^{t_{j}}\left( \log \frac{t_{k}}{s}\right) ^{-\alpha -1}\frac{\mathrm{d}s}{s} \\&\le \frac{\alpha }{\Gamma (1-\alpha )}\max _{1\le l\le N} \left( \log \frac{t_{l}}{t_{l-1}}\right) ^{2}\max _{a\le t\le t_{k-1}}|\delta ^{2}g(t)| \int _{a}^{t_{k-1}}\left( \log \frac{t_{k}}{s}\right) ^{-\alpha -1}\frac{\mathrm{d}s}{s} \\&=\frac{\alpha }{\Gamma (1-\alpha )}\max _{1\le l\le N} \left( \log \frac{t_{l}}{t_{l-1}}\right) ^{2}\max _{a\le t\le t_{k-1}}|\delta ^{2}g(t)| \frac{1}{\alpha }\left( \left( \log \frac{t_{k}}{t_{k-1}}\right) ^{-\alpha } -\left( \log \frac{t_{k}}{a}\right) ^{-\alpha }\right) \\&\le \frac{1}{\Gamma (1-\alpha )}\max _{a\le t\le t_{k-1}}|\delta ^{2}g(t)| \max _{1\le l\le N}\left( \log \frac{t_{l}}{t_{l-1}}\right) ^{2} \left( \log \frac{t_{k}}{t_{k-1}}\right) ^{-\alpha }. \end{aligned}$$

On the other hand:

$$\begin{aligned} |R_{2}^{k}|&=\left| \frac{1}{\Gamma (1-\alpha )} \int _{t_{k-1}}^{t_{k}}\left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left( \delta g(s)-\frac{g(t_{k})-g(t_{k-1})}{\log \frac{t_{k}}{t_{k-1}}}\right) \frac{\mathrm{d}s}{s}\right| \\&\le \frac{1}{\Gamma (1-\alpha )}\int _{t_{k-1}}^{t_{k}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha } \left| \delta g(s)-\frac{g(t_{k})-g(t_{k-1})}{\log \frac{t_{k}}{t_{k-1}}}\right| \frac{\mathrm{d}s}{s} \\&\le \frac{1}{\Gamma (1-\alpha )}\log \frac{t_{k}}{t_{k-1}} \max _{t_{k-1}\le t\le t_{k}}|\delta ^{2}g(t)| \int _{t_{k-1}}^{t_{k}} \left( \log \frac{t_{k}}{s}\right) ^{-\alpha }\frac{\mathrm{d}s}{s} \\&\le \frac{1}{\Gamma (2-\alpha )}\max _{t_{k-1}\le t\le t_{k}}|\delta ^{2}g(t)| \left( \log \frac{t_{k}}{t_{k-1}}\right) ^{2-\alpha }. \end{aligned}$$

Combining the above estimates of \(R_{1}^{k}\) and \(R_{2}^{k}\) yields the desired result. This completes the proof. \(\square \)

Remark 3.2

In the case of uniform mesh, the local truncation error for (3.4) is:

$$\begin{aligned} |R^{k}|\le C\tau ^{2-\alpha }. \end{aligned}$$
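This order can be checked numerically along the following lines (a sketch reusing ch_l1_derivative from Remark 3.1; the test function and parameter values are illustrative). For \(g(t)=\left( \log \frac{t}{a}\right) ^{3}\), one has \(_{CH}\mathrm{{D}}^{\alpha }_{{a, t}}g(t)=\frac{\Gamma (4)}{\Gamma (4-\alpha )}\left( \log \frac{t}{a}\right) ^{3-\alpha }\), so the error at \(t_{N}=T\) can be measured directly:

```python
import numpy as np
from scipy.special import gamma

a, T, alpha = 1.0, 2.0, 0.4            # illustrative sample values
g = lambda t: np.log(t / a)**3
exact = gamma(4) / gamma(4 - alpha) * np.log(T / a)**(3 - alpha)

for N in (32, 64, 128, 256):
    t = np.linspace(a, T, N + 1)
    err = abs(ch_l1_derivative(g(t), t, alpha, N) - exact)
    print(N, err)   # errors should decay roughly like tau**(2-alpha)
```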

For equation (3.1), at the time level \(t_{k}, 1\le k\le N\), we have:

$$\begin{aligned} \left\{ \begin{aligned}&\sum _{j=1}^{k}a_{j,k}\left( u(x,t_{j})-u(x,t_{j-1})\right) -\Delta u(x,t_{k})=f(x,t_{k})+R^{k}(x), \\&u(x,a)=u_{a}(x), \ \ 0< x< 1, \\&u(0,t_{k})=u(1,t_{k})=0, \ \ 1\le k\le N, \end{aligned}\right. \end{aligned}$$
(3.5)

on the basis of (3.2), where \(R^{k}(x)\) is the local truncation error.

Let \(u^{k}(x)\approx u(x,t_{k})\) with \(1\le k\le N\) be the numerical approximation. Omitting the truncation error \(R^{k}(x)\) in (3.5), we derive the following difference scheme:

$$\begin{aligned} \left\{ \begin{aligned}&\sum _{j=1}^{k}a_{j,k}\left( u^{j}(x)-u^{j-1}(x)\right) -\Delta u^{k}(x)=f^{k}(x), \\&u^{0}(x)=u_{a}(x), \ \ 0< x< 1, \\&u^{k}(0)=u^{k}(1)=0, \ \ 1\le k\le N, \end{aligned}\right. \end{aligned}$$
(3.6)

where \(f^{k}(x)=f(x,t_{k}), \ 1\le k\le N\).

3.2 Stability and Convergence of Semi-discrete Scheme

We introduce the \(L^{2}\) inner product, \(L^{2}\) norm, \(H^{1}\) semi-norm, and \(H^{1}\) norm as follows:

$$\begin{aligned} (v,w)= & {} \int _{0}^{1}vw\,\mathrm{d}x,\ \ ||v||=\sqrt{(v,v)}, \ \ ||\nabla v||=\sqrt{(\nabla v,\nabla v)},\\ ||v||_{1}= & {} \sqrt{||v||^{2}+||\nabla v||^{2}}. \end{aligned}$$

The following theorem gives stability of difference scheme (3.6) with the given initial value \(u_{a}\) and right-hand side term f.

Theorem 3.1

Let \(u^{k}(x)\) be the solution of (3.6). Then, difference scheme (3.6) is stable. That is:

$$\begin{aligned} ||\nabla u^{k}||^{2}\le ||\nabla u_{a}||^{2}+\frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha } \max _{1\le l\le N}||f^{l}||^{2}, \ \ 1\le k\le N. \end{aligned}$$

Proof

The first formula of (3.6) is rewritten as:

$$\begin{aligned} a_{k,k}u^{k}(x)-\Delta u^{k}(x)= & {} \sum _{j=1}^{k-1} \left( a_{j+1,k}-a_{j,k}\right) u^{j}(x)\\&+a_{1,k}u^{0}(x)+f^{k}(x),\ \ 1\le k\le N. \end{aligned}$$

Taking the inner product of both sides of the above equality with \(-2\Delta u^{k}(x)\), we obtain:

$$\begin{aligned}&a_{k,k}(u^{k}, -2\Delta u^{k})-(\Delta u^{k}, -2\Delta u^{k})\\&\quad =\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(u^{j}, -2\Delta u^{k})+a_{1,k}(u^{0}, -2\Delta u^{k}) +(f^{k}, -2\Delta u^{k}). \end{aligned}$$

It follows from Lemma 3.1 and Cauchy–Schwarz inequality that:

$$\begin{aligned}&2a_{k,k}||\nabla u^{k}||^{2}+2||\Delta u^{k}||^{2}\\&\quad = 2\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(\nabla u^{j}, \nabla u^{k}) +2a_{1,k}(\nabla u^{0}, \nabla u^{k})-2(f^{k}, \Delta u^{k})\\&\quad \le \sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(||\nabla u^{j}||^{2}+||\nabla u^{k}||^{2}) +a_{1,k}(||\nabla u^{0}||^{2}+||\nabla u^{k}||^{2}) \\&\qquad +\frac{1}{2}||f^{k}||^{2}+2||\Delta u^{k}||^{2}=\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})||\nabla u^{j}||^{2}+a_{k,k}||\nabla u^{k}||^{2} \\&\qquad +a_{1,k}||\nabla u^{0}||^{2}+\frac{1}{2}||f^{k}||^{2}+2||\Delta u^{k}||^{2}. \end{aligned}$$

Therefore:

$$\begin{aligned} a_{k,k}||\nabla u^{k}||^{2}\le \sum _{j=1}^{k-1}(a_{j+1,k} -a_{j,k})||\nabla u^{j}||^{2}+a_{1,k}||\nabla u^{0}||^{2} +\frac{1}{2}||f^{k}||^{2}, \ \ 1\le k\le N. \end{aligned}$$

Notice that:

$$\begin{aligned} a_{1,k}=\frac{1}{\Gamma (1-\alpha )}\xi _{1}^{-\alpha } \ge \frac{1}{\Gamma (1-\alpha )}\left( \log \frac{t_{k}}{a}\right) ^{-\alpha } \ge \frac{1}{\Gamma (1-\alpha )}\left( \log \frac{T}{a}\right) ^{-\alpha }, \end{aligned}$$

or

$$\begin{aligned} \frac{1}{a_{1,k}}\le \Gamma (1-\alpha )\left( \log \frac{T}{a}\right) ^{\alpha }, \end{aligned}$$

where \(\log \frac{t_{k}}{t_{1}}<\xi _{1}<\log \frac{t_{k}}{a}\). Thus, there holds:

$$\begin{aligned} a_{k,k}||\nabla u^{k}||^{2}\le & {} \sum _{j=1}^{k-1} (a_{j+1,k}-a_{j,k})||\nabla u^{j}||^{2}\nonumber \\&\quad +a_{1,k}\left( ||\nabla u^{0}||^{2}+\frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha }||f^{k}||^{2}\right) ,\ \ 1\le k\le N. \end{aligned}$$
(3.7)

Let

$$\begin{aligned} I=||\nabla u^{0}||^{2}+\frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha }\max _{1\le l\le N}||f^{l}||^{2}. \end{aligned}$$

Inequality (3.7) is written as:

$$\begin{aligned} a_{k,k}||\nabla u^{k}||^{2}\le \sum _{j=1}^{k-1} (a_{j+1,k}-a_{j,k})||\nabla u^{j}||^{2}+a_{1,k}I, \ \ 1\le k\le N. \end{aligned}$$

Using mathematical induction, we now prove:

$$\begin{aligned} ||\nabla u^{k}||^{2}\le I, \ \ 1\le k\le N. \end{aligned}$$

For \(k=1\), it is easy to see that \(||\nabla u^{1}||^{2}\le I\). Suppose that for \(k=1,2,\ldots ,m-1\), one has \(||\nabla u^{k}||^{2}\le I\). Then, for \(k=m\):

$$\begin{aligned} a_{m,m}||\nabla u^{m}||^{2}&\le \sum _{j=1}^{m-1} (a_{j+1,m}-a_{j,m})||\nabla u^{j}||^{2}+a_{1,m}I \\&\le \sum _{j=1}^{m-1}(a_{j+1,m}-a_{j,m})I+a_{1,m}I \\&=a_{m,m}I; \end{aligned}$$

that is:

$$\begin{aligned} ||\nabla u^{m}||^{2}\le I. \end{aligned}$$

The proof is thus finished. \(\square \)

Denote

$$\begin{aligned} e^{k}(x)= u(x,t_{k})-u^{k}(x), \ \ 1\le k\le N. \end{aligned}$$

In view of (3.5) and (3.6), we derive the following error equation:

$$\begin{aligned} \left\{ \begin{aligned}&\sum _{j=1}^{k}a_{j,k}\left( e^{j}(x)-e^{j-1}(x)\right) -\Delta e^{k}(x)=R^{k}(x), \\&e^{0}(x)=0, \ \ 0< x< 1, \\&e^{k}(0)=e^{k}(1)=0, \ \ 1\le k\le N. \end{aligned}\right. \end{aligned}$$

It follows from Theorem 3.1 that:

$$\begin{aligned} ||\nabla e^{k}||^{2}\le \frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha } \max _{1\le l\le N}||R^{l}||^{2}, \ \ 1\le k\le N. \end{aligned}$$
(3.8)

Using the Poincaré inequality, inequality (3.8), and Lemma 3.2, the following theorem holds for \(\frac{\partial ^2u}{\partial t^2}\in C[a, T]\).

Theorem 3.2

Suppose that \(u(x,t)\), which is twice continuously differentiable with respect to t on the interval [a, T], is the solution of the initial-boundary value problem (3.1), and that \(u^{k}(x)\) with \(1\le k\le N\) is the solution of the semi-discrete difference scheme (3.6). Then, we have:

$$\begin{aligned} ||u(\cdot ,t_{k})-u^{k}(\cdot )||_{1}\le C\sqrt{\frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha }}\tau ^{2-\alpha }. \end{aligned}$$

3.3 A Fully Discrete Difference Scheme

Let \(0=x_{0}<x_{1}<\cdots<x_{i-1}<x_{i}<\cdots <x_{M}=1\), \(x_{i}=ih\) with \(h=\frac{1}{M}\ (0\le i\le M)\). For any grid function \(V=\{v_{i}|0\le i\le M\}\), denote:

$$\begin{aligned} \delta _{x}v_{i-\frac{1}{2}}=\frac{v_{i}-v_{i-1}}{h},\ \ \ \delta _{x}^{2}v_{i}=\frac{v_{i+1}-2v_{i}+v_{i-1}}{h^{2}} =\frac{\delta _{x}v_{i+\frac{1}{2}}-\delta _{x}v_{i-\frac{1}{2}}}{h}. \end{aligned}$$

It is well known that, for a suitably smooth function g(x):

$$\begin{aligned} \frac{\partial ^{2}g(x)}{\partial x^{2}}\Big |_{x=x_{i}}=\delta _{x}^{2}g_{i}+R_{i}, \end{aligned}$$
(3.9)

where the truncation error \(|R_{i}|\le Ch^{2}\).

Define the grid functions:

$$\begin{aligned} U_{i}^{k}=u(x_{i},t_{k}), \ \ f_{i}^{k}=f(x_{i},t_{k}), \ \ 0\le i\le M, \ 0\le k\le N. \end{aligned}$$

Equations (3.5) and (3.9) imply:

$$\begin{aligned} \sum _{j=1}^{k}a_{j,k}\left( U_{i}^{j}-U_{i}^{j-1}\right) -\delta _{x}^{2}U_{i}^{k} =f_{i}^{k}+R_{i}^{k},\ \ 1\le i\le M-1,\ 1\le k\le N, \end{aligned}$$
(3.10)

where \(R_{i}^{k}=R^{k}+R_{i}\).

Let \(u_{i}^{k}\) with \(1\le i\le M-1\) and \(1\le k\le N\) denote the numerical approximation of \(U_{i}^{k}\). Omitting the small term \(R_{i}^{k}\) in (3.10) and using the initial and boundary conditions, we get the fully discrete difference scheme as follows:

$$\begin{aligned} \left\{ \begin{aligned}&\sum _{j=1}^{k}a_{j,k}\left( u_{i}^{j}-u_{i}^{j-1}\right) -\delta _{x}^{2}u_{i}^{k}=f_{i}^{k}, \\&u_{i}^{0}=u_{a}(x_{i}), \ \ 0\le i\le M, \\&u_{0}^{k}=u_{M}^{k}=0, \ \ 1\le k\le N. \end{aligned}\right. \end{aligned}$$
(3.11)
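A minimal Python sketch of scheme (3.11) follows, assuming vectorized callables f(x, t) and u_a(x); for clarity it uses a dense linear solve of the system \(\left( a_{k,k}I-\delta _{x}^{2}\right) u^{k}=\cdots \) at each time level, although a tridiagonal (Thomas) solver would be preferable in practice. With the data of Example 4.2 below, this sketch can be used to reproduce the temporal orders reported in Table 4.

```python
import numpy as np
from scipy.special import gamma

def solve_fpde(f, u_a, a, T, alpha, M, N):
    """Fully discrete scheme (3.11): L1 in time, central differences in space.

    f(x, t) and u_a(x) are assumed vectorized in x; returns the grids and
    the array u[k, i] ~ u(x_i, t_k).
    """
    x = np.linspace(0.0, 1.0, M + 1)
    t = np.linspace(a, T, N + 1)
    h, log_t = 1.0 / M, np.log(t)
    # second-order central difference operator on the interior nodes
    D2 = (np.diag(-2.0 * np.ones(M - 1)) + np.diag(np.ones(M - 2), 1)
          + np.diag(np.ones(M - 2), -1)) / h**2
    u = np.zeros((N + 1, M + 1))       # homogeneous boundary values kept
    u[0] = u_a(x)
    for k in range(1, N + 1):
        jj = np.arange(1, k + 1)
        # a_{j,k} from (3.3)
        a_jk = ((log_t[k] - log_t[jj - 1])**(1 - alpha)
                - (log_t[k] - log_t[jj])**(1 - alpha)) \
            / (gamma(2 - alpha) * (log_t[jj] - log_t[jj - 1]))
        # history term sum_{j=1}^{k-1} a_{j,k}(u^j - u^{j-1}) moved to the right
        hist = sum(a_jk[j - 1] * (u[j, 1:M] - u[j - 1, 1:M])
                   for j in range(1, k))
        rhs = f(x[1:M], t[k]) - hist + a_jk[k - 1] * u[k - 1, 1:M]
        u[k, 1:M] = np.linalg.solve(a_jk[k - 1] * np.eye(M - 1) - D2, rhs)
    return x, t, u
```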

We next introduce some notations about the discrete inner product and the corresponding norm. Let \({\mathcal {V}}_{h}=\{v|v=(v_{0}, v_{1}, \ldots , v_{M}), v_{0}=v_{M}=0 \}\). For any \(v\in {\mathcal {V}}_{h}\), define:

$$\begin{aligned} (v,w)_{h}=h\sum _{i=1}^{M-1}v_{i}w_{i},\ \ \ ||v||_{h}=\sqrt{(v,v)_{h}}, \ \ \ ||\delta _{x}v||_{h}=\sqrt{h\sum _{i=1}^{M}(\delta _{x}v_{i-\frac{1}{2}})^{2}}. \end{aligned}$$

Lemma 3.3

[13] For any \(v,w\in {\mathcal {V}}_{h}\), there holds:

$$\begin{aligned} (v,\delta _{x}^{2}w)_{h}=-(\delta _{x}v,\delta _{x}w)_{h}. \end{aligned}$$

Lemma 3.4

[20] For any \(v\in {\mathcal {V}}_{h}\), there holds:

$$\begin{aligned} ||v||_{h}\le \frac{1}{\sqrt{6}}||\delta _{x}v||_{h}. \end{aligned}$$

Theorem 3.3

Let \(u_{i}^{k}\) with \(1\le i\le M-1\) and \(1\le k\le N\) be the solution of the difference scheme (3.11). Then, there holds:

$$\begin{aligned} ||\delta _{x} u^{k}||_{h}^{2}\le ||\delta _{x} u_{a}||_{h}^{2} +\frac{\Gamma (1-\alpha )}{2}\left( \log \frac{T}{a}\right) ^{\alpha } \max _{1\le l\le N}||f^{l}||_{h}^{2}, \ \ 1\le k\le N. \end{aligned}$$

Proof

We first write (3.11) as:

$$\begin{aligned} a_{k,k}u_{i}^{k}-\delta _{x}^{2} u_{i}^{k}=\sum _{j=1}^{k-1} (a_{j+1,k}-a_{j,k})u_{i}^{j}+a_{1,k}u_{i}^{0}+f_{i}^{k}. \end{aligned}$$

Multiplying both sides of the above equality by \(-2h\delta _{x}^{2} u_{i}^{k}\) and summing over \(i=1,2,\ldots ,M-1\), there holds:

$$\begin{aligned}&-2a_{k,k}h\sum _{i=1}^{M-1}u_{i}^{k}\delta _{x}^{2} u_{i}^{k}+2h\sum _{i=1}^{M-1} \delta _{x}^{2} u_{i}^{k}\delta _{x}^{2} u_{i}^{k} \\&\quad =-2\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})h\sum _{i=1}^{M-1}u_{i}^{j}\delta _{x}^{2} u_{i}^{k} -2a_{1,k}h\sum _{i=1}^{M-1}u_{i}^{0}\delta _{x}^{2} u_{i}^{k} -2h\sum _{i=1}^{M-1}f_{i}^{k}\delta _{x}^{2} u_{i}^{k}; \end{aligned}$$

that is:

$$\begin{aligned}&-2a_{k,k}(u^{k}, \delta _{x}^{2} u^{k})_{h} +2(\delta _{x}^{2} u^{k}, \delta _{x}^{2} u^{k})_{h} \\&\quad =-2\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(u^{j}, \delta _{x}^{2} u^{k})_{h} -2a_{1,k}(u^{0}, \delta _{x}^{2} u^{k})_{h}-2(f^{k}, \delta _{x}^{2} u^{k})_{h}. \end{aligned}$$

Using Lemma 3.3 yields:

$$\begin{aligned}&2a_{k,k}(\delta _{x} u^{k}, \delta _{x} u^{k})_{h} +2(\delta _{x}^{2} u^{k}, \delta _{x}^{2} u^{k})_{h} \\&\quad =2\sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(\delta _{x} u^{j}, \delta _{x} u^{k})_{h} +2a_{1,k}(\delta _{x} u^{0}, \delta _{x} u^{k})_{h} -2(f^{k}, \delta _{x}^{2} u^{k})_{h}. \end{aligned}$$

Lemma 3.1 and Cauchy–Schwarz inequality give:

$$\begin{aligned}&2a_{k,k}||\delta _{x} u^{k}||_{h}^{2}+2||\delta _{x}^{2} u^{k}||_{h}^{2}\\&\quad \le \sum _{j=1}^{k-1}(a_{j+1,k}-a_{j,k})(||\delta _{x} u^{j}||_{h}^{2} +||\delta _{x} u^{k}||_{h}^{2}) +a_{1,k}(||\delta _{x} u^{0}||_{h}^{2}+||\delta _{x} u^{k}||_{h}^{2})\\&\qquad +\frac{1}{2}||f^{k}||_{h}^{2}+2||\delta _{x}^{2} u^{k}||_{h}^{2}. \end{aligned}$$

Consequently:

$$\begin{aligned} a_{k,k}||\delta _{x} u^{k}||_{h}^{2}\le \sum _{j=1}^{k-1} (a_{j+1,k}-a_{j,k})||\delta _{x} u^{j}||_{h}^{2} +a_{1,k}||\delta _{x} u^{0}||_{h}^{2}+\frac{1}{2}||f^{k}||_{h}^{2}, \ \ 1\le k\le N. \end{aligned}$$

By the induction principle, it is easy to derive:

$$\begin{aligned} ||\delta _{x} u^{k}||_{h}^{2}\le ||\delta _{x} u^{0}||_{h}^{2} +\frac{\Gamma (1-\alpha )}{2}\left( \log \frac{T}{a}\right) ^{\alpha } \max _{1\le l\le N}||f^{l}||_{h}^{2}, \ \ 1\le k\le N. \end{aligned}$$

This completes the proof. \(\square \)

Now, we study the error estimate of the fully discrete scheme (3.11). Let \(e_{i}^{k}=u(x_{i}, t_{k})-u_{i}^{k}, \ \ 0\le i\le M, \ 1\le k\le N\). Then, the error equation is written as:

$$\begin{aligned} \left\{ \begin{aligned}&\sum _{j=1}^{k}a_{j,k}\left( e_{i}^{j}-e_{i}^{j-1}\right) -\delta _{x}^{2}e_{i}^{k}=R_{i}^{k}, \\&e_{i}^{0}=0, \ \ 0\le i\le M, \\&e_{0}^{k}=e_{M}^{k}=0, \ \ 1\le k\le N. \end{aligned}\right. \end{aligned}$$

By virtue of Lemma 3.4 and Theorem 3.3, the following error estimate holds for \(u(x,t)\in C_{x,t}^{4,2}([0,1]\times [a, T])\).

Theorem 3.4

Suppose that \(u(x,t)\), which is twice continuously differentiable with respect to t on the interval [a, T], is the solution of the initial-boundary value problem (3.1), and that \(u^{k}_{i}\) with \(0\le i\le M\) and \(1\le k\le N\) is the solution of the fully discrete difference scheme (3.11). Then, we have:

$$\begin{aligned} ||e^{k}||_{h}+||\delta _{x} e^{k}||_{h}\le C\sqrt{\frac{\Gamma (1-\alpha )}{2} \left( \log \frac{T}{a}\right) ^{\alpha }}(\tau ^{2-\alpha }+h^{2}), \ \ 1\le k\le N. \end{aligned}$$

4 Numerical Examples

This section gives two numerical examples to verify the proposed schemes for nonlinear FODE (1.1) and FPDE (3.1).

Example 4.1

Consider the following nonlinear FODE with \(\alpha \in (0, 1)\):

$$\begin{aligned} \left\{ \begin{aligned}&_{CH}\mathrm{{D}}^{\alpha }_{{1, t}}{y(t)}=f(t,y),\ \ {1<t}, \\&y(1)=0, \end{aligned}\right. \end{aligned}$$
(4.1)

where

$$\begin{aligned} f(t,y)&=\frac{\Gamma (6)}{\Gamma (6-\alpha )}(\log t)^{5-\alpha } -\frac{\Gamma (5)}{\Gamma (5-\alpha )}(\log t)^{4-\alpha } +\frac{2\Gamma (4)}{\Gamma (4-\alpha )}(\log t)^{3-\alpha } \\&\quad -y^{2}+[(\log t)^{5}-(\log t)^{4}+2(\log t)^{3}]^{2}. \end{aligned}$$

The exact solution of this equation is \(y(t)=(\log t)^{5}-(\log t)^{4}+2(\log t)^{3}\), and:

$$\begin{aligned} _{CH}\mathrm{{D}}^{\alpha }_{{1, t}}{y(t)}=\frac{\Gamma (6)}{\Gamma (6-\alpha )}(\log t)^{5-\alpha } -\frac{\Gamma (5)}{\Gamma (5-\alpha )}(\log t)^{4-\alpha } +\frac{2\Gamma (4)}{\Gamma (4-\alpha )}(\log t)^{3-\alpha }. \end{aligned}$$

It is not difficult to verify that \(_{CH}\mathrm{{D}}^{\alpha }_{{1, t}}{y(t)}\in C^{2}[1, T]\) with \(T>1\). Thus, the conditions of Theorems 2.6–2.8 are satisfied. The numerical results for Example 4.1 are displayed in Tables 1, 2, and 3. From Table 1, we can see that the experimental order of convergence (EOC) for the rectangular scheme is 1, which is consistent with the theoretical convergence order in Theorem 2.6. Similar observations can be made from Tables 2 and 3.

Table 1 Absolute errors at \(t=2\) for IVP (4.1) using the left rectangular scheme (2.4)
Table 2 Absolute errors at \(t=2\) for IVP (4.1) using the \({{L}}_{\mathrm{log},1}\) interpolation scheme (2.5)
Table 3 Absolute errors at \(t=2\) for IVP (4.1) using the modified predictor–corrector scheme (2.6)
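For instance, the EOC in Table 1 can be reproduced along the following lines (a sketch reusing frac_rectangular from Sect. 2.2; the value \(\alpha =0.5\) is an assumed sample and need not coincide with the one used in the tables):

```python
import numpy as np
from scipy.special import gamma as G

alpha = 0.5   # assumed sample value

def f(t, y):  # right-hand side of Example 4.1
    lt = np.log(t)
    src = (G(6) / G(6 - alpha) * lt**(5 - alpha)
           - G(5) / G(5 - alpha) * lt**(4 - alpha)
           + 2 * G(4) / G(4 - alpha) * lt**(3 - alpha))
    return src - y**2 + (lt**5 - lt**4 + 2 * lt**3)**2

y_exact = lambda t: np.log(t)**5 - np.log(t)**4 + 2 * np.log(t)**3

errs = []
for N in (40, 80, 160, 320):
    t, y = frac_rectangular(f, 1.0, 2.0, 0.0, alpha, N)   # from Sect. 2.2
    errs.append(abs(y[-1] - y_exact(2.0)))
eoc = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(errs, eoc)   # the EOC should approach 1, as in Theorem 2.6
```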

Example 4.2

Consider the following fractional partial differential equation:

$$\begin{aligned} \left\{ \begin{aligned}&_{CH}\mathrm{{D}}^{\alpha }_{{1, t}}u(x,t)-\Delta u(x,t)=f(x,t), \ \ 1<t\le T, \ 0<x<1, \\&u(x,1)=0,\ \ 0< x< 1, \\&u(0,t)=u(1,t)=0,\ \ 1<t\le T, \end{aligned}\right. \end{aligned}$$
(4.2)

where \(\alpha \in (0, 1)\) and the source term:

$$\begin{aligned} f(x,t)=\frac{2}{\Gamma (3-\alpha )}(\log t)^{2-\alpha }x(1-x)+2(\log t)^{2}. \end{aligned}$$

The exact solution of this equation is \(u(x,t)=x(1-x)(\log t)^{2}\). We mainly present the temporal errors in the maximum norm and the corresponding convergence orders, where the spatial stepsize is taken as \(h=\frac{1}{1024}\). Table 4 shows the experimental results for Example 4.2. It is easy to observe that the experimental convergence order is exactly \(2-\alpha \), which supports the theoretical analysis derived in Theorems 3.2 and 3.4.

Table 4 Maximum norm error at \(t=2\) for (4.2) using scheme (3.11)

5 Conclusion

This paper focuses on numerical methods for fractional differential equations with the Caputo–Hadamard derivative. We discuss the smoothness properties of the solution to equation (1.1). Stability, convergence, and error estimates of the left fractional rectangular, \({{L}}_{\mathrm{log},1}\) interpolation, and modified predictor–corrector methods are studied in the setting of uniform meshes. Then, we construct numerical schemes to approximate the Caputo–Hadamard fractional partial differential equation with initial and boundary conditions, using linear interpolation in the temporal direction and the standard second-order central difference in the spatial direction. We prove stability and error estimates, which turn out to be \(O(\tau ^{2-\alpha })\) for the semi-discrete scheme and \(O(\tau ^{2-\alpha }+h^{2})\) for the fully discrete scheme, in the \(H^{1}\) norm.