1 Introduction

In this paper we consider the numerical solution of a class of boundary value problems for linear fractional integro-differential equations of the form

$$\begin{aligned} (D_*^{\alpha }y)(t)+ h(t)y(t) + \int _0^t K(t,s)y(s) ds=f(t), \; 0\le t\le b, \; 0 < \alpha < 1, \end{aligned}$$
(1.1)
$$\begin{aligned} \gamma _0y (0) + \gamma _1 y(b_1)= \gamma ,\; 0<b_1\le b,\; \gamma _0, \gamma _1, \gamma \in {\mathbb R}:=(-\infty ,\infty ), \end{aligned}$$
(1.2)

where \(D_*^{\alpha }y\) is the Caputo fractional derivative of y, and h, f and K are given continuous functions: \(h,f \in C[0,b], \, K \in C(\varDelta ), \varDelta =\{(t,s) : 0 \le s \le t \le b\}\). The Caputo differential operator \(D_*^{\alpha }\) of order \(\alpha \in (0,1) \) is defined by (see, e.g., [1])

$$ (D_*^{\alpha }y)(t):=(D^{\alpha }[y-y(0)])(t),\; t>0. $$

Here \(D^{\alpha }y\) is the Riemann–Liouville fractional derivative of y:

$$ (D^{\alpha }y)(t):=\frac{d}{dt}(J^{1-\alpha }y)(t), \quad 0<\alpha < 1,\quad t>0, $$

with \(J^{\beta }\), the Riemann–Liouville integral operator, defined by the formula

$$\begin{aligned} (J^{\beta }y)(t):=\frac{1}{\varGamma (\beta )}\int _0^t (t-s)^{\beta - 1}\, y(s)\,ds, \quad t>0, \quad \beta >0. \end{aligned}$$
(1.3)

where \(\varGamma \) is the Euler gamma function.
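For readers who wish to experiment numerically, the weakly singular integral (1.3) can be approximated by a standard quadrature rule once the singularity is removed by a change of variables. The following Python sketch (our own illustration, not part of the paper; the function name and the use of the midpoint rule are arbitrary choices) applies the substitution \(u = ((t-s)/t)^{\beta }\), which turns (1.3) into \((J^{\beta }y)(t)=\frac{t^{\beta }}{\varGamma (\beta +1)}\int _0^1 y\big (t(1-u^{1/\beta })\big )\,du\), an integral with a bounded integrand whenever y is continuous:

```python
import math

def riemann_liouville_J(beta, y, t, n=20000):
    """Approximate (J^beta y)(t) = 1/Gamma(beta) * int_0^t (t-s)**(beta-1) * y(s) ds.

    The substitution u = ((t-s)/t)**beta removes the weak singularity at s = t:
    (J^beta y)(t) = t**beta / Gamma(beta+1) * int_0^1 y(t*(1 - u**(1/beta))) du,
    and the transformed integral over [0, 1] is handled by the composite midpoint rule.
    """
    if t == 0.0:
        return 0.0  # (J^beta y)(0) = 0, cf. (1.4)
    h = 1.0 / n
    s = sum(y(t * (1.0 - ((k + 0.5) * h) ** (1.0 / beta))) for k in range(n))
    return t ** beta / math.gamma(beta + 1.0) * s * h

# Sanity check against the closed form J^beta(s**p) = Gamma(p+1)/Gamma(p+beta+1) * t**(p+beta):
print(riemann_liouville_J(0.5, lambda s: 1.0, 1.0))  # ~ 1.12838 (= 2/sqrt(pi))
```

For constant y the transformed integrand is constant, so the rule is exact up to rounding; for smooth y the midpoint rule converges at rate \(O(n^{-2})\).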

It is well known (see, e.g., [2]) that \(J^{\beta },\ \beta >0,\) is linear, bounded and compact as an operator from \(L^{\infty }(0,b)\) into C[0, b], and we have for any \(y\in L^{\infty }(0,b)\) that (see, e.g. [3])

$$\begin{aligned} J^{\beta }y\in C[0,b],\quad (J^{\beta }y)(0)=0,\quad \beta >0, \end{aligned}$$
(1.4)
$$\begin{aligned} D^{\delta }J^{\beta }y=D_*^{\delta }J^{\beta }y= J^{\beta -\delta }y,\quad 0<\delta \le \beta . \end{aligned}$$
(1.5)

Fractional differential equations arise in various areas of science and engineering. In the last few decades the theory and numerical analysis of fractional differential equations have received increasing attention (see, e.g., [1, 3–5]). Some recent results on the numerical solution of fractional differential equations can be found in [1, 6–12].

In the present paper we consider the numerical solution of (1.1)-(1.2) by piecewise polynomial collocation techniques. We use an integral equation reformulation of the problem and special non-uniform grids reflecting the possible singular behavior of the exact solution. Our aim is to study the attainable order of the proposed algorithms in a situation where the higher order (usual) derivatives of h(t) and f(t) may be unbounded at \(t=0\). Our approach is based on some ideas and results of [10]. In particular, the cases where (1.1)-(1.2) is an initial value problem (\(\gamma _1 = 0\)) or a terminal (boundary) value problem (\(\gamma _0 = 0\); see [7, 8]) are included.

2 Smoothness of the Solution

In order to characterize the behavior of higher order derivatives of a solution of equation (1.1), we introduce a weighted space of smooth functions \(C^{q,\nu }(0,b]\) (cf., e.g., [2, 13]). For given \(q\in {\mathbb N}\) and \(-\infty <\nu <1\), by \(C^{q,\nu }(0,b]\) we denote the set of continuous functions \(y:[0,b]\rightarrow {\mathbb R}\) which are q times continuously differentiable in (0, b] and such that for all \(t\in (0,b]\) and \(i=1,\ldots ,q\) the following estimates hold:

$$ \big |y^{(i)}(t)\big |\le c\left\{ \begin{array}{lll} 1\, &{}\mathrm{if} &{}i<1-\nu \,,\\ 1+|\log t|\, &{}\mathrm{if} &{}i=1-\nu \,,\\ t^{1-\nu -i} &{}\mathrm{if} &{}i>1-\nu \,. \end{array}\right. $$

Here \(c=c(y)\) is a positive constant. Clearly,

$$\begin{aligned} C^{q}[0,b]\subset C^{q,\nu }(0,b]\subset C^{m,\mu }(0,b]\subset C[0,b], \quad q\ge m\ge 1,\quad \nu \le \mu <1. \end{aligned}$$
(2.1)

Note that a function of the form \(y(t)=g_1(t)\,t^{\mu }+g_2(t)\) is included in \(C^{q,\nu }(0,b]\) if \(\mu \ge 1-\nu >0\) and \(g_j\in C^{q}[0,b]\,,\ j=1,2.\)

In what follows we use an integral equation reformulation of (1.1)-(1.2). Let \(y\in C[0,b]\) be such that \(D_*^{\alpha }y \in C[0,b]\). Introduce a new unknown function \(z:=D_*^{\alpha }y\). Then (see [1, 3])

$$\begin{aligned} y(t)=(J^{\alpha }z)(t)+ c, \end{aligned}$$
(2.2)

where c is an arbitrary constant. The function (2.2) satisfies the boundary conditions (1.2) if and only if (see (1.4))

$$ c(\gamma _0 + \gamma _1) = \gamma - \gamma _1 (J^{\alpha }z)(b_1). $$

In the sequel we assume that \(\gamma _0 + \gamma _1 \ne 0\). Therefore

$$\begin{aligned} c = \frac{\gamma }{\gamma _0 + \gamma _1} - \frac{\gamma _1}{\gamma _0 + \gamma _1}(J^{\alpha }z)(b_1). \end{aligned}$$
(2.3)

Thus, the function (2.2) satisfies the conditions (1.2) if and only if

$$\begin{aligned} y(t)=(J^{\alpha }z)(t) + \frac{\gamma }{\gamma _0 + \gamma _1} - \frac{\gamma _1}{\gamma _0 + \gamma _1}(J^{\alpha }z)(b_1),\quad 0\le t\le b. \end{aligned}$$
(2.4)
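The equivalence of (2.4) with the boundary conditions (1.2) can be checked directly: evaluating \(\gamma _0 y(0)+\gamma _1 y(b_1)\) for y given by (2.4) and using \((J^{\alpha }z)(0)=0\) (see (1.4)), all terms involving \((J^{\alpha }z)(b_1)\) cancel and \(\gamma \) remains. A small Python sketch (our own check; the numerical values of the parameters below are arbitrary test data, not from the paper) verifies this for a constant z, for which \((J^{\alpha }z)(t)=z_0\,t^{\alpha }/\varGamma (\alpha +1)\) in closed form:

```python
import math

# Arbitrary test data (hypothetical values, chosen only so that gamma0 + gamma1 != 0).
alpha, b1, z0 = 0.5, 1.3, 2.7
gamma0, gamma1, gamma = 1.0, 3.0, 5.0

def J_alpha_z(t):
    # Closed form of (J^alpha z)(t) for the constant function z(s) = z0.
    return z0 * t ** alpha / math.gamma(alpha + 1.0)

def y(t):
    # Formula (2.4) for y in terms of z.
    return (J_alpha_z(t) + gamma / (gamma0 + gamma1)
            - gamma1 / (gamma0 + gamma1) * J_alpha_z(b1))

# The boundary condition (1.2) should hold exactly (up to rounding).
residual = abs(gamma0 * y(0.0) + gamma1 * y(b1) - gamma)
print(residual)  # ~ 0
```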

Substituting (2.4) into (1.1) and using (1.5), we obtain for z an operator equation of the form

$$\begin{aligned} z=Tz+g, \end{aligned}$$
(2.5)

with the operator T defined by the formula

$$\begin{aligned} (Tz)(t)&= -\frac{h(t)}{\varGamma (\alpha )}\int _0^t (t - s) ^{\alpha - 1} z(s)ds -\frac{1}{\varGamma (\alpha )}\int _0^t K(t,s) \int _0^s(s-\tau )^{\alpha - 1} z(\tau )d\tau ds \nonumber \\&\quad +\frac{\gamma _1}{\varGamma (\alpha )(\gamma _0 + \gamma _1)}\left[ h(t) + \int _0^t K(t,s)ds\right] \int _0^{b_1}(b_1 - s)^{\alpha - 1}z(s)ds \end{aligned}$$
(2.6)

and

$$\begin{aligned} g(t) = f(t) -\frac{\gamma }{\gamma _0 + \gamma _1}\left( h(t) + \int _0^tK(t,s)ds\right) , \; 0 \le t \le b. \end{aligned}$$
(2.7)

We observe that equation (2.5) is a linear weakly singular Fredholm integral equation of the second kind with respect to z.

The existence and regularity of a solution to (1.1)-(1.2) are described by the following lemma, which can be proved similarly to Theorem 2.1 in [10].

Lemma 1

Assume that \(K \in C^q(\varDelta )\) and \(h,f\in C^{q,\mu }(0,b]\), where \(q\in {\mathbb N}\) and \(-\infty <\mu <1\). Moreover, assume that \(\gamma _0 + \gamma _1 \ne 0\) and the boundary value problem (1.1)-(1.2) with \(f = 0\) and \(\gamma = 0\) has in C[0, b] only the trivial solution \(y = 0\).

Then problem (1.1)-(1.2) possesses a unique solution \(y\in C[0,b] \) such that \(D_*^{\alpha }y\in C^{q,\nu }(0,b]\), where \(0 < \alpha < 1\) and

$$\begin{aligned} \nu :=\max \{ \mu ,1 - \alpha \}. \end{aligned}$$

3 Numerical Method

Let \(N\in {\mathbb N}\) and let \(\Pi _N:=\{t_0,\ldots ,t_{N}\}\) be a partition (a graded grid) of the interval [0, b] with the grid points

$$\begin{aligned} t_j:=\displaystyle b\left( \frac{j}{N}\right) ^r,\quad j=0,1,\ldots ,N\,, \end{aligned}$$
(3.1)

where the grading exponent \(r\in {\mathbb R},\ r\ge 1\). If \(r=1\), then the grid points (3.1) are distributed uniformly; for \(r>1\) the points (3.1) are more densely clustered near the left endpoint of the interval [0, b].
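The graded grid (3.1) is straightforward to generate. The following Python sketch (our own illustration; the function name is an arbitrary choice) shows the clustering effect of the grading exponent r:

```python
def graded_grid(b, N, r):
    # Grid points t_j = b*(j/N)**r, j = 0, ..., N (formula (3.1)).
    # r = 1 gives a uniform grid; r > 1 clusters the points near t = 0.
    return [b * (j / N) ** r for j in range(N + 1)]

uniform = graded_grid(2.0, 8, 1.0)   # subintervals all of length 2/8
graded = graded_grid(2.0, 8, 3.0)    # first subinterval has length 2*(1/8)**3
print(graded[:3])
```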

For a given integer \(k\ge 0\), let \(S_k^{(-1)}(\Pi _N)\) denote the standard space of piecewise polynomial functions:

$$ S_k^{(-1)}(\Pi _ N):=\big \{v: v\big |_{(t_{j-1},t_j)}\in \pi _{k},\,j=1,\ldots ,N\big \}. $$

Here \(v\big |_{(t_{j-1},t_j)}\) is the restriction of \(v:[0,b]\rightarrow {\mathbb R}\) onto the subinterval \((t_{j-1},t_j)\) \(\subset [0,b]\) and \(\pi _{k}\) denotes the set of polynomials of degree not exceeding k. Note that the elements of \(S_k^{(-1)}(\Pi _N)\) may have jump discontinuities at the interior points \(t_1,\ldots ,t_{N-1}\) of the grid \(\Pi _N\).

In every interval \([t_{j-1},t_j]\), \(j=1,\ldots ,N\), we define \(m\in {\mathbb N}\) collocation points \(t_{j1},\dots ,t_{jm}\) by the formula

$$\begin{aligned} t_{jk}:=t_{j-1}+\eta _k(t_j-t_{j-1})\,,\quad k=1,\ldots ,m,\ j=1,\dots ,N, \end{aligned}$$
(3.2)

where \(\eta _1,\ldots ,\eta _m\) are some fixed (collocation) parameters which do not depend on j and N and satisfy

$$\begin{aligned} 0\le \eta _1< \eta _2<\ldots <\eta _m\le 1\,. \end{aligned}$$
(3.3)
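Formulas (3.2)-(3.3) can be sketched as follows (our own illustration, self-contained; the function name and the particular grid below are arbitrary, the two Gauss parameters are the ones used later in Sect. 5):

```python
import math

def collocation_points(grid, etas):
    # t_{jk} = t_{j-1} + eta_k*(t_j - t_{j-1})  (formula (3.2)),
    # with fixed parameters 0 <= eta_1 < ... < eta_m <= 1 satisfying (3.3).
    return [[grid[j - 1] + eta * (grid[j] - grid[j - 1]) for eta in etas]
            for j in range(1, len(grid))]

# Example: the two Gauss points eta_1 = (3 - sqrt(3))/6, eta_2 = 1 - eta_1 on a
# graded grid (3.1) with b = 2, N = 4, r = 2.
eta1 = (3.0 - math.sqrt(3.0)) / 6.0
grid = [2.0 * (j / 4) ** 2 for j in range(5)]
pts = collocation_points(grid, [eta1, 1.0 - eta1])
```

Since \(0<\eta _1<\eta _2<1\) here, each pair of collocation points lies strictly inside its subinterval.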

We look for an approximate solution \(y_N\) to (1.1)-(1.2) in the form (cf. (2.4))

$$\begin{aligned} y_N(t) = (J^{\alpha }z_N)(t) + \frac{\gamma }{\gamma _0 + \gamma _1} - \frac{\gamma _1}{\gamma _0 + \gamma _1}(J^{\alpha }z_N)(b_1),\quad 0\le t\le b, \end{aligned}$$
(3.4)

where \(z_N\in S_{m-1}^{(-1)}(\Pi _N)\) \((m,N\in {\mathbb N})\) is determined by the following collocation conditions:

$$\begin{aligned} z_N(t_{jk})=(Tz_N)(t_{jk})+g(t_{jk}),\quad k=1,\ldots ,m,\ j=1,\ldots ,N. \end{aligned}$$
(3.5)

Here T, g and \(t_{jk}\) are defined by (2.6), (2.7) and (3.2), respectively. If \(\eta _1=0\), then by \(z_N(t_{j1})\) we denote the right limit \(\lim _{t\rightarrow t_{j-1},\,t>t_{j-1}}z_N(t)\). If \(\eta _m=1\), then \(z_N(t_{jm})\) denotes the left limit \(\lim _{t\rightarrow t_{j},\,t<t_{j}}z_N(t)\). Conditions (3.5) have an operator equation representation

$$\begin{aligned} z_N={\mathcal P}_NTz_N+{\mathcal P}_Ng \end{aligned}$$
(3.6)

with an interpolation operator \({\mathcal P}_N={\mathcal P}_{N,m}:C[0,b]\rightarrow S_{m-1}^{(-1)}(\Pi _ N)\) defined for any \(v\in C[0,b]\) by the following conditions:

$$\begin{aligned} {\mathcal P}_Nv\in S_{m-1}^{(-1)}(\Pi _ N),\ ({\mathcal P}_Nv)(t_{jk})=v(t_{jk}),\ k=1,\dots ,m,\ j=1,\dots ,N. \end{aligned}$$
(3.7)

The collocation conditions (3.5) form a system of equations whose exact form is determined by the choice of a basis in \(S_{m-1}^{(-1)}(\Pi _N)\). If \(\eta _1>0\) or \(\eta _m<1\) then we can use the Lagrange fundamental polynomial representation:

$$\begin{aligned} z_N(t)=\sum _{\lambda =1}^{N}\sum _{\mu =1}^{m}c_{\lambda \mu } \varphi _{\lambda \mu }(t)\,,\quad t\in [0,b]\,, \end{aligned}$$
(3.8)

where \(\varphi _{\lambda \mu }(t):=0\) for \(t\not \in [t_{\lambda -1},t_{\lambda }]\) and

$$ \varphi _{\lambda \mu }(t):=\prod \limits _{i=1,i\not =\mu }^{m} \frac{t-t_{\lambda i}}{t_{\lambda \mu }-t_{\lambda i}}\quad \mathrm{for} \quad t\in [t_{\lambda -1},t_\lambda ],\ \mu =1,\ldots ,m, \,\lambda =1,\ldots ,N. $$

Then \(z_N\in S_{m-1}^{(-1)}(\Pi _N)\) and \(z_N(t_{jk})=c_{jk},\ k=1,\dots ,m,\ j=1,\dots ,N\). Seeking the solution of (3.5) in the form (3.8), we obtain a system of linear algebraic equations for the coefficients \(c_{jk} =z_N(t_{jk})\):

$$\begin{aligned} c_{jk}=\sum _{\lambda =1}^{N} \sum _{\mu =1}^m (T\varphi _{\lambda \mu })(t_{jk})c_{\lambda \mu }+ g(t_{jk}),\quad k=1,\ldots ,m,\ j=1,\ldots ,N. \end{aligned}$$
(3.9)

Note that this algorithm can also be used when \(\eta _1=0\) and \(\eta _m=1\) in (3.3). In this case we have \(t_{jm}=t_{j+1,1}=t_j,\ c_{jm}=c_{j+1,1}=z_N(t_j)\ (j=1,\dots ,N-1)\), and hence the system (3.9) contains \((m-1)N+1\) equations and unknowns.
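The structure of the collocation system (3.9) can be illustrated on a simplified model. The sketch below (our own schematic, not the method of the paper: it uses a smooth model kernel \(K(t,s)=ts/2\) on [0, 1] in place of the weakly singular operator (2.6), piecewise constants \(m=1\) with midpoint collocation, and a uniform grid) assembles the matrix of values \((T\varphi _{\lambda })(t_{j})\) and solves \((I-A)c=g\) for the nodal values \(c_{j}=z_N(t_{j})\):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Model Fredholm equation z = Tz + g on [0, 1], (Tz)(t) = int_0^1 (t*s/2) z(s) ds,
# with exact solution z(t) = t, so that g(t) = t - t/6 = 5t/6.
N = 200
grid = [j / N for j in range(N + 1)]
nodes = [(grid[j] + grid[j + 1]) / 2 for j in range(N)]   # m = 1, eta_1 = 1/2
g = [5 * t / 6 for t in nodes]
# For the indicator basis phi_lam, (T*phi_lam)(t_j) = (t_j/4)*(s_{lam+1}^2 - s_lam^2).
A = [[nodes[j] / 4 * (grid[lam + 1] ** 2 - grid[lam] ** 2) for lam in range(N)]
     for j in range(N)]
# Collocation system (cf. (3.9)): c_j = sum_lam A[j][lam]*c_lam + g_j, i.e. (I - A)c = g.
I_minus_A = [[(1.0 if j == lam else 0.0) - A[j][lam] for lam in range(N)] for j in range(N)]
c = solve_linear(I_minus_A, g)
err = max(abs(c[j] - nodes[j]) for j in range(N))
```

For this smooth model the computed nodal values agree with the exact solution \(z(t)=t\) up to a small quadrature-induced perturbation; the point of the sketch is only the assembly pattern of (3.9), not the error analysis of Sect. 4.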

4 Convergence Estimates

In this section we formulate two theorems about the convergence and convergence order of the proposed algorithms.

Theorem 1

Let \(m\in {\mathbb N}\) and assume that the collocation points (3.2) with grid points (3.1) and arbitrary parameters \(\eta _1,\dots ,\eta _m\) satisfying (3.3) are used. Assume that \(h, f\in C[0,b]\) and \(K \in C(\varDelta )\). Moreover, assume that \(\gamma _0 + \gamma _1 \ne 0\) and the problem (1.1)-(1.2) with \(f=0\) and \(\gamma =0\) has in C[0, b] only the trivial solution \(y=0\).

Then (1.1)-(1.2) has a unique solution \(y\in C[0,b]\) such that \(D_*^{\alpha }y\in C[0,b]\). Moreover, there exists an integer \(N_0\) such that for all \(N\ge N_0\) equation (3.6) possesses a unique solution \(z_N\in S_{m-1}^{(-1)}(\Pi _N)\) and

$$\begin{aligned} \Vert y-y_N\Vert _\infty \rightarrow 0\quad \mathrm{as} \quad N\rightarrow \infty \end{aligned}$$
(4.1)

where \(y_N\) is defined by (3.4).

If, in addition, \(K \in C^{m}(\varDelta )\) and \(h, f\in C^{m,\mu }(0,b]\) with \(-\infty < \mu <1\), then for all \(N\ge N_0\) and \(r\ge 1\) (given by (3.1)) the following error estimate holds:

$$\begin{aligned} \Vert y-y_N\Vert _\infty \le c\,\left\{ \begin{array}{ll} N^{-r(1-\nu )} &{}\mathrm{for}\quad 1\le r<\frac{m}{1-\nu }\,,\\ N^{-m} &{}\mathrm{for}\quad r\ge \frac{m}{1-\nu }\,. \end{array}\right. \end{aligned}$$
(4.2)

Here c is a constant which is independent of N, \(\nu = \max \{\mu , 1-\alpha \}\), \(0 < \alpha < 1\) and

$$ \Vert v\Vert _\infty :=\sup _{0<t<b} |v(t)|,\quad v\in L^{\infty }(0,b). $$

It follows from Theorem 1 that, for sufficiently smooth h, f and K and a sufficiently large value of the grading exponent r, the method (3.4), (3.6) can be expected to converge with order \(O(N^{-m})\) for every choice of collocation parameters \(0\le \eta _1<\dots <\eta _m\le 1\). The following result shows that, by a careful choice of the parameters \(\eta _1,\dots ,\eta _m\), a faster convergence of this method can be established.

Theorem 2

Let the following conditions be fulfilled:

(i) \({\mathcal P}_N={\mathcal P}_{N,m}\ (N,m\in {\mathbb N})\) is defined by (3.7) where the interpolation nodes (3.2) with grid points (3.1) and parameters (3.3) are used;

(ii) the assumptions of Lemma 1 hold with \(q:=m+1\);

(iii) the quadrature approximation

$$\begin{aligned} \int _0^1 F(x)\,dx\approx \sum _{k=1}^{m}w_k\,F(\eta _k), \end{aligned}$$
(4.3)

with the knots \(\{\eta _k\}\) satisfying (3.3) and appropriate weights \(\{w_k\}\) is exact for all polynomials of degree m.

Then (1.1)-(1.2) has a unique solution \(y\in C[0,b]\) such that \(D_*^{\alpha }y\in C^{q,\nu }(0,b]\). There exists an integer \(N_0\) such that, for \(N\ge N_0\), equation (3.6) possesses a unique solution \(z_N\in S_{m-1}^{(-1)}(\Pi _ N)\), which by (3.4) determines a unique approximation \(y_N\) to the solution y of (1.1)-(1.2), and the following error estimate holds:

$$\begin{aligned} \Vert y-y_N\Vert _\infty \, \le c \, \left\{ \begin{array}{ll} N^{-r(1+\alpha -\nu )} &{}\mathrm{for}\quad 1\le r<\frac{m+\alpha }{1+\alpha -\nu }\,,\\ N^{-m-\alpha } &{}\mathrm{for}\quad r\ge \frac{m+\alpha }{1+\alpha -\nu }\,. \end{array}\right. \end{aligned}$$
(4.4)

Here \(0 <\alpha < 1\), \(\,\nu = \max \{\mu , 1-\alpha \}\), \(r\in [1,\infty )\) is the grading exponent of the grid (see (3.1)) and c is a positive constant not depending on N.

The proofs of Theorems 1 and 2 are based on Lemma 1 and are similar to the corresponding proofs of Theorems 4.1 and 4.2 in [10].

5 Numerical Illustration

We consider the following boundary value problem:

$$\begin{aligned} (D_*^{\frac{1}{2}}y)(t)+h(t)y(t) + \int _0^tK(t,s)y(s)ds=f(t),\quad y(0)+y(1)=2,\quad 0\le t\le 2, \end{aligned}$$
(5.1)

where \(K(t,s) := 1\) for \(0 \le s \le t \le 2\) and

$$ h(t):=t^{\frac{1}{2}}, \quad f(t):=\frac{5\varGamma (\frac{7}{4})}{2\varGamma (\frac{9}{4})}\,t^{\frac{1}{4}}+2\,t^{\frac{5}{4}}+\frac{8}{7} t^{\frac{7}{4}},\quad 0 \le t \le 2. $$

This is a special case of (1.1)-(1.2) with \(\alpha =\frac{1}{2}, b=2, b_1=1, \gamma _0=\gamma _1 = 1\) and \(\gamma = 2\). Clearly, \(h,f\in C^{q,\mu }(0,2]\) with \(\mu =\frac{3}{4}\) and arbitrary \(q\in {\mathbb N}\). To solve (5.1) by (3.4)-(3.6) we set \(z:=D_*^{\frac{1}{2}}y\). For z we obtain equation (2.5) with T and g given by (2.6) and (2.7), respectively. Approximations \(z_N\in S_{m-1}^{(-1)}(\Pi _N)\) with \(m=2\) and \(N\in {\mathbb N}\) to the solution z of equation (2.5) on the interval [0, 2] are found by (3.5), using (3.2) with \(\eta _1=(3-\sqrt{3})/6,\ \eta _2=1-\eta _1\), the knots of the Gaussian quadrature formula (4.3). More precisely, \(z_N(t_{jk})=c_{jk} \ (k=1,2,\ j=1,\dots ,N)\) and \(z_N(t)\) for \(t\in [0,\,2]\) are determined by (3.9) and (3.8), respectively. After that the approximate solution \(y_N\) of the boundary value problem (5.1) is found by formula (3.4).

Table 1. Numerical results for the problem (5.1).

In Table 1 some results of numerical experiments for different values of the parameters N and r are presented. The errors \(\varepsilon _N\) in Table 1 are calculated as follows:

$$\begin{aligned} \begin{array}{lll} \varepsilon _N:=\max \limits _{j=1,\ldots ,N}\,\max \limits _{k=0,\ldots ,10} |y(\tau _{jk})-y_N(\tau _{jk})|\,, \\ \end{array} \end{aligned}$$
(5.2)

where \(\tau _{jk}:=t_{j-1}+k(t_j-t_{j-1})/10,\ k=0,\ldots ,10,\ j=1,\ldots ,N\) (the grid points \(t_j\) and collocation points \(t_{jk}\) are determined by (3.1) and (3.2), respectively). In (5.2) we have taken into account that the exact solution of (5.1) is

$$ y(t)=2\,t^{\frac{3}{4}}, \quad t \in [0,2], $$

and thus

$$ z(t)=(D_*^{\frac{1}{2}}y)(t)=\frac{5\varGamma (\frac{7}{4})}{2\varGamma (\frac{9}{4})}\,t^{\frac{1}{4}}, \quad t \in [0,2]. $$
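The stated exact solution is easy to verify numerically with the Caputo derivative formula for power functions, \(D_*^{\alpha }t^{\mu }=\frac{\varGamma (\mu +1)}{\varGamma (\mu +1-\alpha )}t^{\mu -\alpha }\) (and note \(\varGamma (\frac{9}{4})=\frac{5}{4}\varGamma (\frac{5}{4})\), so the two expressions for the leading coefficient agree). A short Python check (our own sketch, not part of the paper):

```python
import math

# Verify that y(t) = 2*t**(3/4) solves (5.1):
#   D_*^{1/2} y + h*y + int_0^t K(t,s)*y(s) ds = f(t),  y(0) + y(1) = 2,
# where D_*^{1/2}(2 t**(3/4)) = 2*Gamma(7/4)/Gamma(5/4) * t**(1/4).
def lhs(t):
    caputo = 2.0 * math.gamma(1.75) / math.gamma(1.25) * t ** 0.25
    return caputo + t ** 0.5 * 2.0 * t ** 0.75 + 2.0 * t ** 1.75 / 1.75

def f(t):
    return (5.0 * math.gamma(1.75) / (2.0 * math.gamma(2.25)) * t ** 0.25
            + 2.0 * t ** 1.25 + 8.0 / 7.0 * t ** 1.75)

residual = max(abs(lhs(t) - f(t)) for t in [0.5, 1.0, 1.5, 2.0])
print(residual)  # ~ 0 (rounding level)
```

Here \(h(t)y(t)=t^{1/2}\cdot 2t^{3/4}=2t^{5/4}\) and \(\int _0^t 2s^{3/4}ds=\frac{8}{7}t^{7/4}\), matching the last two terms of f; the boundary condition gives \(y(0)+y(1)=0+2=2\).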

The ratios

$$\begin{aligned} \varrho _N:=\frac{\varepsilon _{N/2}}{\varepsilon _N }\,, \end{aligned}$$

characterizing the observed convergence rate, are also presented.

Since \(\alpha =\frac{1}{2}\), \(\mu = \frac{3}{4}\) and \(\nu = \max \{ \mu , 1 - \alpha \} = \frac{3}{4}\), we obtain from Theorem 2 (see (4.4)) that, for sufficiently large N,

$$\begin{aligned} \varepsilon _N \le c\,\left\{ \begin{array}{lll} N^{-0.75r}&{}\mathrm{if} &{}1\le r< \frac{10}{3},\\ N^{-2.5} &{}\mathrm{if} &{}r\ge \frac{10}{3}. \end{array}\right. \end{aligned}$$
(5.3)

Due to (5.3), the ratios \(\varrho _N\) for \(r=1\), \(r=2\) and \(r \ge \frac{10}{3}\) ought to be approximately \(2^{0.75} \approx 1.68\), \(2^{1.5} \approx 2.83\) and \(2^{2.5}\approx 5.66\), respectively. These values are given in the last row of Table 1. As Table 1 shows, the estimate (4.4) reflects the actual rate of convergence of \(y_N\) to y quite well.
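Since each halving of the grid multiplies the bound (5.3) by \(2^{\min \{0.75r,\,2.5\}}\), the expected ratios follow from one line of arithmetic (our own sketch):

```python
# Expected ratio rho_N = eps_{N/2} / eps_N implied by the bound (5.3):
# eps_N ~ N**(-min(0.75*r, 2.5)), so rho ~ 2**min(0.75*r, 2.5).
def expected_ratio(r):
    return 2.0 ** min(0.75 * r, 2.5)

for r in (1.0, 2.0, 10.0 / 3.0):
    print(r, round(expected_ratio(r), 2))  # 1.68, 2.83, 5.66
```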