1 Introduction

Over the last two decades, the theory of fractional differential equations (FDEs) has been studied extensively [15, 17, 18, 28]. In particular, the Riesz fractional derivative, which appears frequently in spatial fractional models (such as various diffusion models), has received wide attention. Finite-difference schemes for approximating fractional derivatives and for solving linear and nonlinear FDEs are presented in [7, 25,26,27, 30, 34, 40]. Other works apply the finite element method and the discontinuous Galerkin (DG) method to Riesz FDEs; some concentrate on theoretical analysis [3, 19, 22], while others focus mainly on improving algorithms, such as fast algorithms [11, 20].

On the other hand, spectral methods are promising candidates for solving FDEs, since their global nature fits well with the nonlocal definition of fractional operators. Using integer-order orthogonal polynomials as basis functions, spectral methods [4, 8, 10, 12, 13, 33, 37, 42] greatly alleviate the memory cost of discretizing fractional derivatives. The authors of [5, 9, 35, 36] designed suitable bases to handle the singularities that typically appear in fractional problems. In particular, Mao et al. [16] proposed a spectral Petrov–Galerkin method based on generalized Jacobi functions for solving Riesz FDEs, and provided a rigorous error analysis.

In this work, we study the superconvergence phenomenon for spectral interpolation of the Riesz fractional derivative. In the literature, superconvergence of the h-version finite element method has been well studied and understood, see, e.g., [14, 29], while superconvergence of polynomial spectral interpolation and spectral methods for integer-order derivatives has been examined more recently, see, e.g., [31, 32, 38, 39]. As for fractional-order derivatives, Zhao and Zhang recently studied the Riemann–Liouville case [41] and systematically identified superconvergence points in the spirit of an earlier work on integer-order spectral methods [39].

A major difficulty in investigating superconvergence of spectral methods for fractional problems, compared with the integer-order case, is the nonlocality of fractional operators and the complicated forms of fractional derivatives. A second challenge is the construction of a good basis for the spectral scheme. Given a suitable basis, one can then analyze the approximation error in order to locate the superconvergence points.

One objective of this work is to consider Lobatto-type polynomial interpolants of a sufficiently smooth function and to identify the points where the values of the Riesz fractional derivative of order \(\alpha \) are superconvergent. Note that \(x=\pm 1\) are interpolation points for the Legendre–Lobatto interpolation; this fact guarantees that, after taking the derivative of order \(\alpha \in (0,1)\), the global error does not blow up. In this case, the superconvergence points are the zeros of the Riesz fractional derivative of the corresponding Legendre–Lobatto polynomial. Furthermore, a series of numerical experiments indicates that the convergence rate at these points is at least \(O(N^{-2})\) better than the global convergence rate.

Compared with the Riemann–Liouville fractional derivative, the main difficulty in the Riesz case is that the left and right fractional derivatives must be handled simultaneously; see the definitions (2.2)–(2.3). Fractional derivatives of order \(\alpha >1\) create stronger singularities at \(x=\pm 1\), which rules out Lobatto-type polynomials. To handle the stronger singularity, a fractional interpolation using generalized Jacobi functions (GJF) is introduced in Sect. 4, and its well-posedness, convergence, and superconvergence properties are analyzed theoretically. When \(0<\alpha <2\), if the given function is GJF-interpolated at the zeros of the Jacobi polynomial \(P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x)\), then the superconvergence points for the fractional derivative of order \(\alpha \) are exactly the interpolation points. Moreover, the convergence rate at the superconvergence points is \(O(N^{-\frac{\alpha +3}{2}})\) and \(O(N^{-2})\) higher than the global convergence rate for \(0<\alpha <1\) and \(1<\alpha <2\), respectively.

To demonstrate the usefulness of our discovery of these superconvergence points, we use the GJF as a basis to solve a model fractional differential equation by both Petrov–Galerkin and spectral collocation methods. We observe that the convergence rates at the predicted superconvergence points are indeed much better than the best possible global rates.

The organization of this paper is as follows. In Sect. 2, definitions and properties of fractional derivatives, Jacobi polynomials, generalized Jacobi functions, and Gegenbauer polynomials are introduced. Section 3 is about the Legendre–Lobatto polynomial interpolation and Sect. 4 deals with the GJF fractional interpolation, along with numerical examples. Section 5 considers some applications of superconvergence theory. Finally, we draw some conclusions in Sect. 6.

2 Preliminaries

We begin with some basic definitions and properties. Throughout the paper, \({\mathbb {Z}}^+\) denotes the set of all positive integers and \({\mathbb {N}}\) the set of all nonnegative integers. \({\mathcal {M}}_{n}({\mathbb {R}})\) denotes the space of all \(n \times n\) matrices with real entries.

2.1 Definitions and Properties of Fractional Derivatives

First, we recall the definitions and properties of Riesz fractional derivatives.

Definition 2.1

Let \(\gamma \in (0,1)\). The left and right fractional integrals are defined, respectively, as follows:

$$\begin{aligned} {_{-1}I_x^{\gamma }}u(x)&:= \frac{1}{\Gamma (\gamma )} \int _{-1}^x \frac{u(\tau )}{(x-\tau )^{1-\gamma }}\, d\tau , \quad x \in (-1,1], \\ {_{x}I_1^{\gamma }}u(x)&:= \frac{1}{\Gamma (\gamma )} \int _{x}^1 \frac{u(\tau )}{(\tau -x)^{1-\gamma }}\, d\tau , \quad x \in [-1,1). \end{aligned}$$

Then for \(\alpha \in (k-1,k)\), where \(k \in {\mathbb {Z}}^{+}\), the left and right Riemann–Liouville derivatives are defined respectively by:

$$\begin{aligned} {}_{-1}D_x^{\alpha } u(x)&= D^k (_{-1}I_x^{k-\alpha }u(x)), \\ {}_{x}D_1^{\alpha } u(x)&= (-1)^k D^k (_xI_1^{k-\alpha }u(x)), \end{aligned}$$

where \(D^k:=\frac{d^k}{dx^k}\) is the k-th (weak) derivative.
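These definitions can be sanity-checked numerically: for the power function \((1+x)^{\beta }\), one has the classical identity \({_{-1}I_x^{\gamma }}(1+x)^{\beta }=\frac{\Gamma (\beta +1)}{\Gamma (\beta +\gamma +1)}(1+x)^{\beta +\gamma }\). A minimal sketch in Python (assuming scipy; the values of \(\gamma \), \(\beta \), and x are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

g, b, x = 0.3, 2.0, 0.5   # gamma, beta, evaluation point (illustrative)
# _{-1}I_x^g (1+t)^b: quad's 'alg' weight handles the endpoint singularity (x - t)^{g-1}
val, _ = quad(lambda t: (1 + t)**b, -1, x, weight='alg', wvar=(0, g - 1))
approx = val / G(g)
exact = G(b + 1) / G(b + g + 1) * (1 + x)**(b + g)
print(approx, exact)      # the two values agree to quadrature accuracy
```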

Definition 2.2

Let \(\gamma \in (0,1)\). The one-dimensional Riesz potentials are defined as follows:

$$\begin{aligned} I^{\gamma }_o u(x)&:= \frac{c_1}{\Gamma (\gamma )}\int _{-1}^1 \frac{\mathrm{sign}(x-\tau )\, u(\tau )}{|x-\tau |^{1-\gamma }}\, d\tau = c_1 (_{-1}I_x^{\gamma }-{_x I_1^{\gamma }})u(x), \\ I^{\gamma }_e u(x)&:= \frac{c_2}{\Gamma (\gamma )}\int _{-1}^1 \frac{u(\tau )}{|x-\tau |^{1-\gamma }}\, d\tau = c_2 (_{-1}I_x^{\gamma }+{_x I_1^{\gamma }})u(x), \end{aligned}$$

where \(\mathrm{sign}(x)\) is the sign function, \(c_1=\frac{1}{2 \sin (\pi \gamma /2)}\), and \(c_2=\frac{1}{2 \cos (\pi \gamma /2)}\). For \(\alpha \in (k-1,k)\), we can then define the Riesz fractional derivative:

$$\begin{aligned} ^R D^{\alpha }u(x):= \left\{ \begin{array}{ll} D^k I_o^{k-\alpha }u(x)=c_1 (_{-1}D_x^{\alpha }+ {_{x}D_1^{\alpha }} )u(x), &{} k \ \hbox {odd},\\ D^k I_e^{k-\alpha }u(x)=c_2 (_{-1}D_x^{\alpha }+ {_{x}D_1^{\alpha }} )u(x), &{} k \ \hbox {even}. \end{array}\right. \end{aligned}$$
(2.1)

Definition 2.3

For any positive real number \(\alpha \in (k-1,k)\), \(k \in {\mathbb {Z}}^{+}\), we define:

$$\begin{aligned} ^R D^{\alpha }_o u(x):=D^k I_o^{k-\alpha }u(x)=c_1 (_{-1}D_x^{\alpha }+ (-1)^k {_{x}D_1^{\alpha }} )u(x), \end{aligned}$$
(2.2)

and

$$\begin{aligned} ^R D^{\alpha }_e u(x):=D^k I_e^{k-\alpha }u(x)=c_2 (_{-1}D_x^{\alpha }+ (-1)^k {_{x}D_1^{\alpha }} )u(x). \end{aligned}$$
(2.3)

Let us recall the Leibniz rule for fractional derivatives.

Lemma 2.4

(see [21], Chap. 2) Let \(\alpha \in {\mathbb {R}}^+\) with \(\alpha \in (n-1,n)\), \(n \in {\mathbb {Z}}^+\). If f(x) and g(x), along with all their derivatives, are continuous on \([-1,1]\), then the Leibniz rule for the left Riemann–Liouville fractional derivative takes the following form

$$\begin{aligned} _{-1}D_x^{\alpha }[f(x)g(x)]=\sum _{k=0}^{\infty } \frac{\Gamma (\alpha +1)}{\Gamma (k+1)\Gamma (\alpha -k+1)} (_{-1}D_{x}^{\alpha -k} f(x)) g^{(k)}(x). \end{aligned}$$
(2.4)

By changing variables, we can derive the Leibniz rule for the right Riemann–Liouville derivative.

Lemma 2.5

(see [23], Chap. 15) Under the same conditions as in Lemma 2.4, the Leibniz rule for the right Riemann–Liouville derivative takes the following form

$$\begin{aligned} _xD_1^{\alpha }[f(x)g(x)]=\sum _{k=0}^{\infty } (-1)^k \frac{\Gamma (\alpha +1)}{\Gamma (k+1)\Gamma (\alpha -k+1)} (_xD_1^{\alpha -k} f(x)) g^{(k)}(x). \end{aligned}$$
(2.5)

Lemma 2.6

(see [21], Chap. 2) Let \(\gamma , \eta >0\), and let u(x) be a continuous function. Then for any \(x \in [-1,1]\), we have:

$$\begin{aligned} {_{-1}I_x^{\gamma }}\Big ({_{-1}I_x^{\eta }}u(x) \Big ) = {_{-1}I_x^{\gamma +\eta }} u(x), \end{aligned}$$
(2.6)

and

$$\begin{aligned} {_{x}I_1^{\gamma }}\Big ({_{x}I_1^{\eta }}u(x) \Big ) = {_{x}I_1^{\gamma +\eta }} u(x). \end{aligned}$$
(2.7)

2.2 Jacobi Polynomials and Generalized Jacobi Functions

We start with the definitions of generalized Jacobi functions and Gegenbauer polynomials.

Definition 2.7

Let \(\alpha >-1\). The generalized Jacobi functions (GJF) are defined as follows:

$$\begin{aligned} {\mathcal {J}}_n^{-\alpha ,-\alpha }(x):=(1-x^2)^{\alpha }P_n^{\alpha ,\alpha }(x) \end{aligned}$$

where \(P_n^{\alpha ,\alpha }(x)\) is the n-th Jacobi polynomial with respect to the weight function \(\omega (x)=(1-x)^{\alpha }(1+x)^{\alpha }\).

Definition 2.8

We define the Gegenbauer polynomials in terms of Jacobi polynomials:

$$\begin{aligned} C_n^{(\lambda )}(x):=c_{(\lambda ,n)}P_n^{\left( \lambda -\frac{1}{2},\lambda -\frac{1}{2}\right) }(x)=\frac{(2 \lambda )_n}{(\lambda +\frac{1}{2})_n}P_n^{\left( \lambda -\frac{1}{2},\lambda -\frac{1}{2}\right) }(x). \end{aligned}$$

The Gegenbauer polynomials have the following properties (see [32]):

$$\begin{aligned} \frac{d}{dx}C_n^{(\lambda )}(x)=2 \lambda C_{n-1}^{(\lambda +1)}(x), \end{aligned}$$
(2.8)

and

$$\begin{aligned} \max _{-1\leqslant x \leqslant 1} \vert C_n^{(\lambda )}(x) \vert \left\{ \begin{array}{ll} =C_n^{(\lambda )}(1)=\frac{\Gamma (n+2\lambda )}{\Gamma (n+1) \Gamma (2 \lambda )} \sim \frac{n^{2\lambda -1}}{\Gamma (2\lambda )}, &{} \hbox {when} \ \lambda >0, \\ \leqslant D_{\lambda }\, n^{\lambda -1}, &{} \hbox {when} \ -\frac{1}{2}<\lambda <0, \end{array}\right. \end{aligned}$$
(2.9)

where \(D_{\lambda }\) is a positive constant independent of n; and

$$\begin{aligned} \vert C_{n+1}^{(\lambda )}(z) \vert \geqslant \frac{n^{\lambda -1} \rho ^{n+1}}{2 \Gamma (\lambda )} (1+\rho ^{-2})^{-\lambda }, \ \forall z \in {\mathcal {E}}_{\rho }, \end{aligned}$$
(2.10)

where \({\mathcal {E}}_{\rho }\) is the Bernstein ellipse defined in (3.4). When we set \(\lambda = \frac{\alpha +1}{2}\), we have:

$$\begin{aligned} P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(1) =\frac{\Gamma (N+2+\frac{\alpha }{2})}{\Gamma (N+2)\Gamma (\frac{\alpha }{2}+1)} \sim N^{\frac{\alpha }{2}}, \end{aligned}$$

and

$$\begin{aligned} C_{N+1}^{(\frac{\alpha +1}{2})}(1) = \frac{\Gamma (N+2+\alpha )}{\Gamma (N+2) \Gamma (\alpha +1)}\sim N^{\alpha }, \end{aligned}$$

consequently

$$\begin{aligned} c_{\left( \frac{\alpha +1}{2},N+1\right) } \sim N^{\frac{\alpha }{2}}, \end{aligned}$$
(2.11)

where \(C_{N+1}^{(\frac{\alpha +1}{2})}(x)=c_{(\frac{\alpha +1}{2},N+1)} P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x)\).
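Definition 2.8 and the normalization constant \(c_{(\lambda ,n)}\) can be checked directly against library implementations. A small sketch (assuming scipy; \(\lambda \) and n are illustrative, e.g. \(\lambda =\frac{\alpha +1}{2}\) with \(\alpha =0.5\)):

```python
import numpy as np
from scipy.special import eval_gegenbauer, eval_jacobi, poch

lam, n = 0.75, 8                               # lambda = (alpha+1)/2 for alpha = 0.5
x = np.linspace(-1, 1, 7)
c = poch(2 * lam, n) / poch(lam + 0.5, n)      # c_{(lambda,n)} = (2 lambda)_n / (lambda+1/2)_n
print(np.allclose(eval_gegenbauer(n, lam, x),
                  c * eval_jacobi(n, lam - 0.5, lam - 0.5, x)))   # True
```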

Then, the following lemma shows the connection between the GJF and the Riesz fractional derivative.

Lemma 2.9

(see [16], Theorem 2) Let \(\alpha \in (k-1,k)\), \(k \in {\mathbb {Z}}^+\), then we have

$$\begin{aligned} I_{\nu }^{k-\alpha } {\mathcal {J}}_n^{-\frac{\alpha }{2},-\frac{\alpha }{2}}(x)=C(k)\frac{\Gamma (n+\alpha +1-k)}{2^{-k}n!}P_{n+k}^{\frac{\alpha }{2}-k,\frac{\alpha }{2}-k}(x); \end{aligned}$$
(2.12)

moreover, for \(m=0,1,\ldots ,k-1\),

$$\begin{aligned} ^R D_{\nu }^{\alpha -m} {\mathcal {J}}_n^{-\frac{\alpha }{2},-\frac{\alpha }{2}}(x)=C(k)\frac{\Gamma (n+\alpha +1-m)}{2^{-m}n!}P_{n+m}^{\frac{\alpha }{2}-m,\frac{\alpha }{2}-m}(x); \end{aligned}$$
(2.13)

where \(\nu =o\) and \(C(k)=(-1)^{\frac{k-1}{2}}\) if k is odd, and \(\nu =e\) and \(C(k)=(-1)^{\frac{k}{2}}\) if k is even. In particular:

$$\begin{aligned} ^R D^{\alpha } {\mathcal {J}}_n^{-\frac{\alpha }{2},-\frac{\alpha }{2}}(x)=C(k)\frac{\Gamma (n+\alpha +1)}{n!}P_{n}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x). \end{aligned}$$
(2.14)

3 Legendre–Lobatto Interpolation for \(0<\alpha <1\)

According to Definition 2.2, the Riesz fractional derivative of order \(\alpha \) is equivalent to the two-sided Riemann–Liouville fractional derivative of the same order. This leads to singularities at \(x=\pm 1\). To avoid these singularities, we consider interpolating u(x) by Lobatto-type polynomials, in particular the Legendre–Lobatto polynomials. By doing so, \(x= \pm 1\) are simple zeros of the error \(u(x)-u_N(x)\), since \(x=\pm 1\) are two of the interpolation points. This guarantees that, after taking derivatives of order \(\alpha \in (0,1)\), the global error remains finite. In this section, we always assume that u(x) is analytic on \([-1,1]\) and can be analytically extended to a suitable Bernstein ellipse.

3.1 Interpolation of Analytic Functions

Let \(\{ x_i \}_{i=0}^N\) be the \((N+1)\) interpolation points, where \(-1 \leqslant x_0<x_1< \cdots <x_N \leqslant 1\), and let \(u_N(x) \in {\mathbb {P}}_{N}([-1,1])\) be the interpolation polynomial such that

$$\begin{aligned} u(x_i)=u_N(x_i), \ i=0,1,\cdots ,N. \end{aligned}$$
(3.1)

We define

$$\begin{aligned} \omega _{N+1}(x)=\prod _{i=0}^N (x-x_i). \end{aligned}$$
(3.2)

If \(\{ x_i \}_{i=0}^N\) is the set of zeros of the Legendre–Lobatto polynomial of degree \(N+1\), then

$$\begin{aligned} \omega _{N+1}(x) \eqsim L_{N-1}(x)-L_{N+1}(x), \end{aligned}$$
(3.3)

where \(L_{n}(x)\) represents the Legendre polynomial of degree n, and the right hand side is exactly the Legendre–Lobatto polynomial of degree \((N+1)\).
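The Legendre–Lobatto points are easily computed from (3.3); a minimal sketch (assuming numpy's Legendre-series utilities):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lobatto_points(N):
    """The N+1 zeros of L_{N-1} - L_{N+1} (Legendre-Lobatto points, incl. +-1)."""
    c = np.zeros(N + 2)
    c[N - 1], c[N + 1] = 1.0, -1.0      # Legendre coefficients of L_{N-1} - L_{N+1}
    return np.sort(leg.legroots(c).real)

print(lobatto_points(11))   # the 12 interpolation points used in Example 1 below
```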

Suppose that u(x) is analytic on \([-1,1]\). It is well known that u(x) can then be analytically extended to a domain enclosed by the so-called Bernstein ellipse with foci \(\pm 1\):

$$\begin{aligned} {\mathcal {E}}_{\rho }:=\left\{ z: z=\frac{1}{2}(\rho e^{i\theta }+\rho ^{-1} e^{-i \theta }), \ 0 \leqslant \theta \leqslant 2\pi \right\} , \ \rho > 1, \end{aligned}$$
(3.4)

where \(i=\sqrt{-1}\) is the imaginary unit and \(\rho \) is the sum of the semimajor and semiminor axes. Then we have the following bounds for \({\mathcal {L}}({\mathcal {E}}_{\rho })\), the perimeter of the ellipse, and \({\mathcal {D}}_{\rho }\), the shortest distance from \({\mathcal {E}}_{\rho }\) to \([-1,1]\), respectively:

$$\begin{aligned} {\mathcal {L}}({\mathcal {E}}_{\rho }) \leqslant \pi (\rho ^2 +\rho ^{-2})^{\frac{1}{2}}, \ \hbox {and} \ {\mathcal {D}}_{\rho }=\frac{1}{2}(\rho +\rho ^{-1})-1. \end{aligned}$$

For convenience, we define:

$$\begin{aligned} M_u=\sup _{z \in {\mathcal {E}}_{\rho }} |u(z)|. \end{aligned}$$

To study the superconvergence property, we introduce Hermite's contour integral, as presented in [1, 2, 6], which gives the following pointwise error expression:

$$\begin{aligned} u(x)-u_N(x)=\frac{1}{2\pi i} \oint _{{\mathcal {E}}_\rho } \frac{\omega _{N+1}(x)}{z-x} \frac{u(z)}{\omega _{N+1}(z)} dz, \ \forall x \in [-1,1]. \end{aligned}$$
(3.5)

The following analysis is based on this error expression.

3.2 Theoretical Statements

Unfortunately, no estimate of \(^R D^{\alpha -m} \omega _{N+1}(x)\), \(m=0,1,\cdots \), is available so far. However, under the following reasonable assumption:

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{\Vert ^R D^{\alpha -1} \omega _{N+1}(x) \Vert _{L^{\infty }[-1,1]}}{\Vert ^R D^{\alpha } \omega _{N+1}(x) \Vert _{L^{\infty }[-1,1]}} = 0, \end{aligned}$$
(3.6)

we can still establish the following theorem:

Theorem 3.1

Suppose \(0<\alpha <1\). Let u(x) be analytic on and within the complex ellipse \({\mathcal {E}}_{\rho }\), where \(\rho > 3+2\sqrt{2}\), and let \(u_N\) be the spectral interpolant at the Gauss–Lobatto points. Then, under assumption (3.6), the \(\alpha \)-th Riesz fractional derivative of the error, \(^R D^{\alpha } (u(x)-u_N(x))\), is convergent and is dominated by the leading term of the infinite series.

Proof

The proof starts from (3.5). By (2.1), (2.4), and (2.5), for all \(x \in [-1,1]\) we have:

$$\begin{aligned}&{^RD}^{\alpha } (u(x)-u_N(x)) \nonumber \\&\quad =\frac{1}{2\pi i} \oint _{{\mathcal {E}}_\rho } {^RD}^{\alpha }\Big ( \frac{\omega _{N+1}(x)}{z-x}\Big )\frac{u(z)}{\omega _{N+1}(z)} dz \nonumber \\&\quad =\frac{c_1}{2 \pi i} \oint _{{\mathcal {E}}_\rho } (_{-1}D_x^{\alpha }+{_x D_1^{\alpha }})\Big ( \frac{\omega _{N+1}(x)}{z-x} \Big )\frac{u(z)}{\omega _{N+1}(z)} dz \nonumber \\&\quad =\frac{c_1}{2 \pi i} \oint _{{\mathcal {E}}_\rho } \sum _{m=0}^{\infty }{\alpha \atopwithdelims ()m} \Gamma (m+1) \frac{(_{-1}D_x^{\alpha -m}+(-1)^m{_x D_1^{\alpha -m}})\omega _{N+1}(x)}{(z-x) ^{m+1}} \frac{u(z)}{\omega _{N+1}(z)} dz \nonumber \\&\quad = \frac{1}{2 \pi i} \Big ( \oint _{{\mathcal {E}}_\rho } \frac{^R D^{\alpha } \omega _{N+1}(x)}{(z-x)}\frac{u(z)}{\omega _{N+1}(z)} dz + \alpha \oint _{{\mathcal {E}}_\rho } \frac{I_o^{1-\alpha } \omega _{N+1}(x)}{(z-x) ^{2}}\frac{u(z)}{\omega _{N+1}(z)} dz \nonumber \\&\qquad + \oint _{{\mathcal {E}}_\rho } \sum _{m=2}^{\infty } {\alpha \atopwithdelims ()m} \Gamma (m+1) \frac{I_o^{m-\alpha } \omega _{N+1}(x)}{(z-x) ^{m+1}}\frac{u(z)}{\omega _{N+1}(z)} dz \Big ). \end{aligned}$$
(3.7)

If we denote:

$$\begin{aligned} \phi _0:=\Vert ^R D^{\alpha } \omega _{N+1}(x) \Vert _{L^{\infty }[-1,1]}, \ \phi _m := \Vert I_o^{m-\alpha } \omega _{N+1}(x) \Vert _{L^{\infty }[-1,1]}, \ m=1,2,\cdots , \end{aligned}$$

and, by [31], there exists \(C_{\rho }>0\) such that

$$\begin{aligned} |\omega _{N+1}(z)| \geqslant C_{\rho } N^{-\frac{1}{2}}\rho ^{N+1}, \ \forall z \in {\mathcal {E}}_{\rho }. \end{aligned}$$

Then

$$\begin{aligned}&\vert ^R D^{\alpha }(u(x)-u_N(x)) \vert \nonumber \\&\quad \leqslant \frac{1}{2 \pi } \Big ( \oint _{{\mathcal {E}}_\rho } \frac{ \phi _0}{\vert z-x \vert } \frac{\vert u(z) \vert }{\vert \omega _{N+1}(z) \vert } d \vert z \vert + \alpha \oint _{{\mathcal {E}}_\rho } \frac{ \phi _1}{\vert z-x \vert ^{2}} \frac{\vert u(z) \vert }{\vert \omega _{N+1}(z) \vert } d \vert z \vert \end{aligned}$$
(3.8)
$$\begin{aligned}&\qquad + \oint _{{\mathcal {E}}_\rho } \sum _{m=2}^{\infty } {\alpha \atopwithdelims ()m}\Gamma (m+1) \frac{ \phi _m}{\vert z-x \vert ^{m+1}} \frac{\vert u(z) \vert }{\vert \omega _{N+1}(z) \vert } d \vert z \vert \Big ) \nonumber \\&\quad \leqslant \frac{C_{\rho }M_u \cdot {\mathcal {L}}({\mathcal {E}}_{\rho })}{2 \pi } \frac{N^{\frac{1}{2}}}{\rho ^{N+1}} \Big ( \frac{\phi _0}{{\mathcal {D}}_{\rho }} + \sum _{m=1}^{\infty } {\alpha \atopwithdelims ()m} \Gamma (m+1) \frac{ \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \Big ). \end{aligned}$$
(3.9)

We estimate the infinite sum next. When \(m\geqslant 2\), by Lemma 2.6 and Hölder's inequality,

$$\begin{aligned}&I_o^{m-\alpha }\omega _{N+1}(x) \leqslant \Big ({_{-1}I_x^{m-1}}+{_xI_1^{m-1}}\Big )\vert I_o^{1-\alpha }\omega _{N+1}(x) \vert \nonumber \\&\qquad \leqslant \frac{1}{\Gamma (m-1)} \Big (\int _{-1}^1 [(x-t)^{m-2}+(t-x)^{m-2}] \vert I_o^{1-\alpha }\omega _{N+1}(t) \vert \, dt \Big ) \leqslant \frac{2^{m}}{\Gamma (m)}\phi _1 \nonumber \\ \end{aligned}$$
(3.10)

for all \(x \in [-1,1]\), so

$$\begin{aligned} \phi _m \leqslant \frac{2^{m}}{\Gamma (m)} \phi _1, \end{aligned}$$

which leads to

$$\begin{aligned} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \leqslant \frac{\Gamma (m+1) 2^{m}}{\Gamma (m) ({\mathcal {D}}_{\rho })^{m+1}} \phi _1=\frac{1}{ {\mathcal {D}}_{\rho }} m \Big (\frac{2}{{\mathcal {D}}_{\rho }}\Big )^m \phi _1 \end{aligned}$$

if we require \({\mathcal {D}}_{\rho }>2\), i.e., \(\rho >3+2\sqrt{2}\), then there exist \(c_{\rho },d_{\rho }>0\) such that

$$\begin{aligned} \frac{2d_{\rho }}{{\mathcal {D}}_{\rho }}<1, \ \hbox {and} \ m \leqslant c_{\rho }(d_{\rho })^m, \end{aligned}$$

so we have:

$$\begin{aligned} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \leqslant \frac{c_{\rho }}{ {\mathcal {D}}_{\rho }} \left( \frac{2d_{\rho }}{{\mathcal {D}}_{\rho }}\right) ^m \phi _1. \end{aligned}$$
(3.11)

Therefore, by (3.11), the infinite sum in (3.9) can be estimated as follows:

$$\begin{aligned}&\sum _{m=0}^{\infty } {\alpha \atopwithdelims ()m} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} = \frac{\phi _0}{{\mathcal {D}}_{\rho }}+\alpha \frac{\phi _1}{({\mathcal {D}}_{\rho })^2}+\sum _{m=2}^{\infty } {\alpha \atopwithdelims ()m} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \nonumber \\&\quad \leqslant \frac{\phi _0}{{\mathcal {D}}_{\rho }}+ \frac{c_{\rho } \phi _1}{ {\mathcal {D}}_{\rho }}\sum _{m=1}^{\infty } {\alpha \atopwithdelims ()m} \left( \frac{2d_{\rho }}{{\mathcal {D}}_{\rho }}\right) ^m \leqslant \frac{1}{{\mathcal {D}}_{\rho }}\phi _0+ \frac{c_{\rho }}{ {\mathcal {D}}_{\rho }} \left( 1+\frac{2d_{\rho }}{{\mathcal {D}}_{\rho }}\right) ^{\alpha } \phi _1, \end{aligned}$$
(3.12)

by the binomial formula \((1+x)^{\alpha } = \sum _{m=0}^{\infty } {\alpha \atopwithdelims ()m} x^m\). Under assumption (3.6),

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{\phi _1}{\phi _0} = 0. \end{aligned}$$

Hence \(^R D^{\alpha }(u(x)-u_N(x))\) is convergent, and the leading term dominates the error. \(\square \)

Parallel to the conclusion in [41], we have the following theorem.

Theorem 3.2

Under the same assumptions as in Theorem 3.1, the \(\alpha \)-th Riesz fractional derivative superconverges at the points \(\{ \xi _i^{\alpha } \}\) satisfying

$$\begin{aligned} ^R D^{\alpha } \omega _{N+1}(\xi _i^{\alpha })=0,\ i=0,1,\ldots ,N \end{aligned}$$

where \(\omega _{N+1}(x)\) is defined by (3.3).

Proof

According to the analysis in the previous theorem, the decay of the error is dominated by the leading term:

$$\begin{aligned} \oint _{{\mathcal {E}}_\rho } \frac{^R D^{\alpha } \omega _{N+1}(x)}{(z-x) }\frac{u(z)}{\omega _{N+1}(z)} dz. \end{aligned}$$

When \(x=\xi _i^{\alpha }\), \(i=0,1,\ldots ,N\), the leading term vanishes, and the remaining terms converge at higher rates. \(\square \)

Next, we describe a method to compute \(\{ \xi _i^{\alpha } \}_{i=0}^N\). For \(0<\mu <1\), we start from

$$\begin{aligned} _{-1}D_x^\mu L_n(x)&= \frac{\Gamma (n+1)}{\Gamma (n-\mu +1)}(1+x)^{-\mu } P_n^{\mu ,-\mu }(x), \quad x \in [-1,1], \\ _{x}D_1^\mu L_n(x)&= \frac{\Gamma (n+1)}{\Gamma (n-\mu +1)}(1-x)^{-\mu } P_n^{-\mu ,\mu }(x), \quad x \in [-1,1], \end{aligned}$$

to obtain, taking \(\mu =\alpha \):

$$\begin{aligned}&^R D^{\alpha } (L_{N-1}-L_{N+1})(x) \\&\quad = c_1(_{-1}D_x^\alpha +_{x}D_1^\alpha )(L_{N-1}(x)-L_{N+1}(x)) \\&\quad = c_1(_{-1}D_x^\alpha L_{N-1}(x)+{_{x}D_1^\alpha } L_{N-1}(x) -{_{-1}D_x^\alpha } L_{N+1}(x)-{_{x}D_1^\alpha }L_{N+1}(x)) \\&\quad = \frac{\Gamma (N)}{2 \sin (\pi \alpha /2) \Gamma (N-\alpha )}[(1+x)^{-\alpha } P_{N-1}^{\alpha ,-\alpha }(x)+(1-x)^{-\alpha } P_{N-1}^{-\alpha ,\alpha }(x)] \\&\qquad - \frac{\Gamma (N+2)}{2 \sin (\pi \alpha /2) \Gamma (N-\alpha +2)}[(1+x)^{-\alpha } P_{N+1}^{\alpha ,-\alpha }(x)+(1-x)^{-\alpha } P_{N+1}^{-\alpha ,\alpha }(x)]. \end{aligned}$$

In numerical experiments, we observe that all zeros of \(^R D^{\alpha } (L_{N-1}-L_{N+1})(x)\) are simple and lie in \((-1,1)\), so they can be computed by Newton iteration (see the sketch after the list below):

1. The initial value may be produced by observing the plot.

2. The function \(^R D^{\alpha } (L_{N-1}-L_{N+1})(x)\) is differentiable for \(x \ne \pm 1\), so Newton iteration may be applied, and it converges fast.
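A concrete alternative to plot-based initial guesses is to bracket the sign changes of \(^R D^{\alpha } (L_{N-1}-L_{N+1})\) on a fine grid and refine each root with a standard scalar solver. A minimal sketch (assuming scipy; Newton iteration started from the same brackets behaves similarly):

```python
import numpy as np
from scipy.special import eval_jacobi, gamma
from scipy.optimize import brentq

def riesz_dalpha_lobatto(x, N, alpha):
    """^R D^alpha (L_{N-1} - L_{N+1})(x) for 0 < alpha < 1, via the closed form above."""
    c1 = 1.0 / (2.0 * np.sin(np.pi * alpha / 2.0))
    def two_sided(n):   # (_{-1}D_x^alpha + _x D_1^alpha) L_n(x)
        coef = gamma(n + 1) / gamma(n - alpha + 1)
        return coef * ((1 + x)**(-alpha) * eval_jacobi(n, alpha, -alpha, x)
                       + (1 - x)**(-alpha) * eval_jacobi(n, -alpha, alpha, x))
    return c1 * (two_sided(N - 1) - two_sided(N + 1))

def superconvergence_points(N, alpha, grid=4000):
    """Bracket the sign changes of ^R D^alpha omega_{N+1}, then refine with brentq."""
    xs = np.linspace(-1 + 1e-8, 1 - 1e-8, grid)
    fs = riesz_dalpha_lobatto(xs, N, alpha)
    return np.array([brentq(lambda t: riesz_dalpha_lobatto(t, N, alpha), a, b)
                     for a, b, fa, fb in zip(xs[:-1], xs[1:], fs[:-1], fs[1:])
                     if fa * fb < 0])

print(superconvergence_points(11, 0.5))   # expected: the points marked in Fig. 1 for alpha = 0.5
```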

3.3 Numerical Statements and Validations

3.3.1 Numerical Validations for Superconvergence Points

In this subsection, some numerical examples are presented to show the superconvergence.

Example 1

We consider the function \(u(x)=(1+x)^9(1-x)^9.\)

Fig. 1 (Example 1) Curves \(^RD^\alpha (u-u_{11})(x)\) for \(\alpha =0.1,0.3,0.5,0.7,0.9\). For each \(\alpha \), the roots of \(^RD^\alpha \omega _{12}(x)\) are highlighted by \(*\)

Here u(x) is interpolated at the \(N+1=12\) zeros of \(\omega _{12}(x)\), the Legendre–Lobatto polynomial of degree 12. We set \(\alpha =0.1\), 0.3, 0.5, 0.7, and 0.9. Figure 1 plots \(^RD^{\alpha }{(u-u_{11})(x)}\) on \([-1,1]\) for each \(\alpha \), and the asterisks indicate the superconvergence points, i.e., the zeros of \(^RD^{\alpha } \omega _{12}(x)\) predicted by Theorem 3.2.

Example 2

We consider the function \(u(x)=\sin (4\pi x)+\cos (4.5 \pi x)\). In the left panel of Fig. 2, we set \(N=32\) and \(\alpha =0.79\); it shows the error \(^RD^{\alpha }{(u-u_{N})(x)}\), with the superconvergence points predicted by Theorem 3.2 highlighted. To further compare the superconvergence phenomenon across \(\alpha \) values, we set a smaller \(N=14\) and \(\alpha =0.11\), 0.46, 0.79 in the right panel.

In all of the figures, we see that the error at those points is much smaller than the global maximal error. In addition, we observe that both the global maximal error and the errors at the superconvergence points increase as \(\alpha \) increases.

3.3.2 Numerical Observation of Superconvergent Rates

In order to quantify the superconvergence rate, we define the following ratio:

$$\begin{aligned} \hbox {ratio}=\frac{\max _{-1\leqslant x \leqslant 1}{\vert ^R D^{\alpha }(u-u_N)(x) \vert }}{\max _{0\leqslant i \leqslant N}{\vert ^R D^{\alpha }(u-u_N)(\xi _i^{\alpha }) \vert }}. \end{aligned}$$
(3.13)
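For Example 1 below, the ratio (3.13) can be reproduced with a short script, since \(u-u_N\) is then a polynomial and \(^R D^{\alpha }\) of each Legendre mode has the closed form of Sect. 3.2. A self-contained sketch (assuming numpy/scipy; \(\alpha =0.5\) and \(N=11\) are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre as leg, polynomial as poly
from scipy.special import eval_jacobi, gamma as G
from scipy.optimize import brentq

alpha, N = 0.5, 11                      # Example 1 with N+1 = 12 points

def two_sided(n, x):                    # (_{-1}D_x^alpha + _x D_1^alpha) L_n(x)
    c = G(n + 1) / G(n - alpha + 1)
    return c * ((1 + x)**(-alpha) * eval_jacobi(n, alpha, -alpha, x)
                + (1 - x)**(-alpha) * eval_jacobi(n, -alpha, alpha, x))

def riesz(cleg, x):                     # ^R D^alpha of a Legendre series cleg
    c1 = 1.0 / (2.0 * np.sin(np.pi * alpha / 2.0))
    return c1 * sum(cn * two_sided(n, x) for n, cn in enumerate(cleg))

# Lobatto points: zeros of L_{N-1} - L_{N+1}
c = np.zeros(N + 2); c[N - 1], c[N + 1] = 1.0, -1.0
xs = np.sort(leg.legroots(c).real)

# u = (1 - x^2)^9 and its degree-N interpolant, both as Legendre coefficients
u_leg = leg.poly2leg(poly.polypow([1.0, 0.0, -1.0], 9))
uN_leg = leg.legfit(xs, leg.legval(xs, u_leg), N)
err = u_leg.copy(); err[:N + 1] -= uN_leg

# superconvergence points: zeros of ^R D^alpha omega_{N+1}
grid = np.linspace(-1 + 1e-8, 1 - 1e-8, 4000)
fw = riesz(c, grid)
xi = np.array([brentq(lambda t: riesz(c, t), a, b)
               for a, b, fa, fb in zip(grid[:-1], grid[1:], fw[:-1], fw[1:]) if fa * fb < 0])

ratio = np.abs(riesz(err, grid)).max() / np.abs(riesz(err, xi)).max()
print(len(xi), ratio)   # the ratio is expected to grow like N^2 to N^3 (cf. Fig. 3)
```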

Example 1

Consider \(u(x)=(1+x)^9(1-x)^9\). We plot the ratio on a log-log scale for \(N=8,10,12,14,16\): Fig. 3 (left) for \(\alpha =0.01,0.1,0.2,0.3,0.4,0.5\), and Fig. 3 (right) for \(\alpha =0.5,0.6,0.7,0.8,0.9,0.99\).

Example 2

Consider \(u(x)=\sin (4\pi x)+\cos (4.5 \pi x)\). We set \(\alpha =0.1, 0.3, 0.5, 0.7, 0.9\) and plot the ratio on a log-log scale for N from 12 to 30 (Fig. 4).

In both of the examples, two lines \(N^2\) and \(N^3\) are also plotted as reference slopes.

Fig. 2 (Example 2) Left: curves \(^RD^\alpha (u-u_{32})(x)\) for \(\alpha =0.79\); the roots of \(^RD^\alpha \omega _{33}(x)\) are highlighted by \(*\). Right: curves \(^RD^\alpha (u-u_{14})(x)\) for \(\alpha =0.11,0.46,0.79\); for each \(\alpha \), the roots of \(^RD^\alpha \omega _{15}(x)\) are highlighted by \(*\)

Fig. 3 (Example 1) Superconvergence ratio of (3.13) for different \(\alpha \)-derivatives

Fig. 4 (Example 2) Superconvergence ratio of (3.13) for different \(\alpha \)-derivatives

We see that, at the superconvergence points, the convergence rate is at least \(O(N^{-2})\) faster than the global rate.

4 GJF Fractional Interpolation for Arbitrary Positive \(\alpha \)

4.1 Theoretical Statements

When \(\alpha >1\), interpolation by Lobatto-type polynomials, which provides simple zeros at \(x=\pm 1\), no longer works, since it cannot control the two-sided Riemann–Liouville fractional derivatives of order \(\alpha > 1\). Inspired by Lemma 2.9, the GJF fractional interpolation is introduced here. On the other hand, due to the singularities at \(x=\pm 1\), stricter conditions are required on u(x). Let \(\alpha >1\) be the order of the Riesz fractional derivative; we always assume that \((1-x^2)^{-\frac{\alpha }{2}}u(x)\) is analytic on \([-1,1]\) and can be analytically extended to a Bernstein ellipse with an appropriate \(\rho \). In this section, we mainly concentrate on the case \(0<\alpha <2\); the conclusions can be generalized to \(\alpha >2\). Let us start with the definition of the GJF fractional interpolation:

Definition 4.1

Let \(\alpha \in (k-1,k)\) be a given positive real number, where \(k \in {\mathbb {Z}}^+\). Define

$$\begin{aligned} \alpha ^*=\left\{ \begin{array}{ll} \alpha , &{} 0<\alpha<2, \\ \alpha -k+1, &{} 2<\alpha \ \hbox {and} \ k \ \hbox {odd}, \\ \alpha -k+2, &{} 2<\alpha \ \hbox {and} \ k \ \hbox {even}. \end{array}\right. \end{aligned}$$
(4.1)

Suppose \(v(x):=(1-x^2)^{-\frac{\alpha ^*}{2}}u(x) \in C[-1,1]\). The goal is to find

$$\begin{aligned} u_N(x)=(1-x^2)^{\frac{\alpha ^*}{2}}v_N(x), \end{aligned}$$
(4.2)

where \(v_N(x) \in {\mathbb {P}}_N[-1,1]\), such that

$$\begin{aligned} u_N(x_k)=u(x_k), \ \ -1\leqslant x_0<x_1<\cdots <x_N \leqslant 1, \end{aligned}$$
(4.3)

then \(u_N(x)\) is called the GJF fractional interpolant of order \(\frac{\alpha ^*}{2}\) of u(x).

In fact, (4.3) is equivalent to finding \(v_N(x) \in {\mathbb {P}}_N[-1,1]\) such that

$$\begin{aligned} v_N(x_k)=v(x_k)=(1-x_k^2)^{-\frac{\alpha ^*}{2}}u(x_k), \ \ -1\leqslant x_0<x_1<\cdots <x_N \leqslant 1. \end{aligned}$$
(4.4)
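The construction (4.2)–(4.4) is direct in code; a minimal sketch (assuming scipy's roots_jacobi, which returns the zeros of \(P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}\) used in Theorem 4.2 below; \(\alpha \) and N are illustrative):

```python
import numpy as np
from scipy.special import roots_jacobi
from numpy.polynomial import polynomial as poly

alpha, N = 1.7, 10
v = lambda x: 1.0 / (1 + (x + 3)**2)                 # the smooth part used in Sect. 4.2
xk, _ = roots_jacobi(N + 1, alpha / 2, alpha / 2)    # zeros of P_{N+1}^{alpha/2,alpha/2}
cN = poly.polyfit(xk, v(xk), N)                      # v_N: degree-N interpolant of v, cf. (4.4)
uN = lambda x: (1 - x**2)**(alpha / 2) * poly.polyval(x, cN)   # GJF interpolant, cf. (4.2)
u = lambda x: (1 - x**2)**(alpha / 2) * v(x)
print(np.max(np.abs(u(xk) - uN(xk))))                # ~ machine precision: (4.3) holds
```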

Since \(v_N(x)\) is a polynomial interpolant, by (3.5), (4.2), and (4.3), for all \(x \in [-1,1]\) the error can be expressed as:

$$\begin{aligned} u(x)-u_N(x)= & {} (1-x^2)^{\frac{\alpha ^*}{2}}(v(x)-v_N(x)) \nonumber \\= & {} \frac{1}{2\pi i} \oint _{{\mathcal {E}}_\rho } \frac{(1-x^2)^{\frac{\alpha ^*}{2}}\omega _{N+1}(x)}{z-x} \frac{v(z)}{\omega _{N+1}(z)} dz. \end{aligned}$$
(4.5)

Therefore, we may analyze the superconvergence points by using the same framework as the previous section.

Theorem 4.2

Suppose \(0<\alpha <2\). Let u(x) be a function such that \((1-x^2)^{-\frac{\alpha }{2}}u(x)\) is analytic on and within the complex ellipse \({\mathcal {E}}_{\rho }\), where \(\rho > 3+2\sqrt{2}\), and let \(u_N(x)\) be the GJF fractional interpolant of u(x) at \(\{ \xi _j^{\alpha } \}_{j=0}^N\), the zeros of \(P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x)\). Then the \(\alpha \)-th Riesz fractional derivative of the error, \(^R D^{\alpha } (u(x)-u_N(x))\), is convergent and is dominated by the leading term of the infinite series.

Proof

By taking the fractional derivative of (4.5), we have:

$$\begin{aligned}&^R D^{\alpha }(u(x)-u_N(x)) \nonumber \\&\quad =\frac{1}{2\pi i} \oint _{{\mathcal {E}}_\rho } {^R D^{\alpha }}\left[ \frac{(1-x^2)^{\frac{\alpha ^*}{2}}\omega _{N+1}(x)}{z-x}\right] \frac{v(z)}{\omega _{N+1}(z)} dz \nonumber \\&\quad = \frac{1}{2 \pi i} \oint _{{\mathcal {E}}_\rho } \sum _{m=0}^{\infty } \frac{\Gamma (\alpha +1)}{\Gamma (\alpha -m+1)} \frac{{^R D_{\nu }^{\alpha -m}} \left[ c_{\left( \frac{\alpha ^*+1}{2},N+1\right) }{\mathcal {J}}_{N+1}^{-\frac{\alpha ^*}{2},-\frac{\alpha ^*}{2}}(x)\right] }{ (z-x) ^{m+1}} \frac{v(z)}{\omega _{N+1}(z)} dz \nonumber \\ \end{aligned}$$
(4.6)

where \(\nu =o\) when k is odd and \(\nu =e\) when k is even. If we set \(\{x_i \}_{i=0}^N\) to be the zeros of \(P_{N+1}^{\frac{\alpha ^*}{2},\frac{\alpha ^*}{2}}(x)\), then we may take

$$\begin{aligned} \omega _{N+1}(x)=c_{\left( \frac{\alpha ^*+1}{2},N+1\right) }P_{N+1}^{\frac{\alpha ^*}{2},\frac{\alpha ^*}{2}}(x)=C_{N+1}^{\frac{\alpha ^*+1}{2}}(x). \end{aligned}$$

Due to the form in (2.13), for \(m \in {\mathbb {N}}\), define:

$$\begin{aligned} \phi _m(x)= \frac{\Gamma (N+2)}{\Gamma (N+2+\alpha ^*)} {^R D_{\nu }^{\alpha -m}} \left[ c_{\left( \frac{\alpha ^*+1}{2},N+1\right) }{\mathcal {J}}_{N+1}^{-\frac{\alpha ^*}{2},-\frac{\alpha ^*}{2}}(x)\right] , \ \phi _m=\Vert \phi _m \Vert _{L^{\infty }}. \nonumber \\ \end{aligned}$$
(4.7)

Since \(0<\alpha <2\), we have \(\alpha =\alpha ^*\).

I. When \(0<\alpha <1\), by the same analysis as in Theorem 3.1, we have:

$$\begin{aligned} \vert ^R D^{\alpha }(u(x)-u_N(x)) \vert \leqslant \frac{c(\rho ,v) \Gamma (N+2+\alpha )\Gamma (\frac{\alpha +1}{2}) }{\Gamma (N+2) (N+1)^{\frac{\alpha -1}{2}} \rho ^{N+1} } \Big ( \frac{\phi _0}{{\mathcal {D}}_{\rho }}+ \frac{c_{\rho }}{ {\mathcal {D}}_{\rho }} \Big (1+\frac{2d_{\rho }}{{\mathcal {D}}_{\rho }}\Big )^{\alpha } \phi _1 \Big ). \nonumber \\ \end{aligned}$$
(4.8)

where \(c(\rho ,v) = \frac{C_{\rho }M_v \cdot {\mathcal {L}}({\mathcal {E}}_{\rho })}{2 \pi } (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}\).

II. When \(1<\alpha <2\), by Lemma 2.9, we have:

$$\begin{aligned}&\phi _0(x) = C_{N+1}^{\frac{\alpha +1}{2}}(x),&\phi _0=\frac{\Gamma (N+\alpha +2)}{\Gamma (N+2)\Gamma (\alpha +1)}, \end{aligned}$$
(4.9)
$$\begin{aligned}&\phi _1(x) = \frac{1}{\alpha -1} C_{N+2}^{\frac{\alpha -1}{2}}(x),&\phi _1=\frac{\Gamma (N+\alpha +1)}{\Gamma (N+3)\Gamma (\alpha +1)}. \end{aligned}$$
(4.10)

Even though \(\phi _2(x)\) is not a classical Gegenbauer polynomial, we may calculate it by:

$$\begin{aligned} \phi _2(x) = \int _{-1}^x \phi _1(t)\, dt +C_0, \end{aligned}$$
(4.11)

where

$$\begin{aligned} C_0&= \Big ( I^{2-\alpha }_e \Big [{\frac{\Gamma (N+2)}{\Gamma (N+2+\alpha )}} c_{\big (\frac{\alpha +1}{2},N+1\big )} {\mathcal {J}}_{N+1}^{-\frac{\alpha }{2},-\frac{\alpha }{2}}(x)\Big ] \Big ) \Big |_{x=-1} \\&= c_2\frac{\Gamma (N+2)}{\Gamma (N+2+\alpha )} \int _{-1}^1 \frac{(1+t)^{\frac{\alpha }{2}}(1-t)^{\frac{\alpha }{2}}C_{N+1}^{\frac{\alpha +1}{2}}(t)}{(1+t)^{\alpha -1}}\,dt \\&\leqslant { \frac{c_2 c_{\big (\frac{\alpha +1}{2},N+1\big )}}{N^{\alpha }}} \int _{-1}^1 (1+t)^{1-\frac{\alpha }{2}}(1-t)^{\frac{\alpha }{2}}P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(t)\,dt = {\frac{c_2 c_{\big (\frac{\alpha +1}{2},N+1\big )}}{N^{\alpha }}} \cdot {_{N+1}^{\big (\frac{\alpha }{2},\frac{\alpha }{2}\big )}c_0^{\big (\frac{\alpha }{2},1-\frac{\alpha }{2}\big )}}, \end{aligned}$$

with \(c_2=\frac{1}{2 \cos (\pi \gamma /2)\Gamma (\gamma )}\), \(\gamma =2-\alpha \), where \({_{N+1}^{(\frac{\alpha }{2},\frac{\alpha }{2})}c_0^{(\frac{\alpha }{2},1-\frac{\alpha }{2})}}\) represents the weighted inner product (see [24], p. 76):

$$\begin{aligned} {_{N+1}^{\left( \frac{\alpha }{2},\frac{\alpha }{2}\right) }c_0^{\left( \frac{\alpha }{2},1-\frac{\alpha }{2}\right) }}&:= \left( 1,P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x)\right) _{\omega ^{\frac{\alpha }{2},1-\frac{\alpha }{2}}} \\&= \frac{2\Gamma (N+\frac{\alpha }{2}+2)}{\Gamma (N+\alpha +2)\Gamma (\frac{\alpha }{2}+1)} \sum _{m=0}^{N+1} \frac{(-1)^m \Gamma (N+\alpha +m+2)}{m!(N-m+1)!\Gamma (m+3)}. \end{aligned}$$
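This coefficient can be evaluated directly from the closed-form sum. A sketch of the numerical estimate behind Figs. 5, 6 and 7 (assuming mpmath, since the alternating sum cancels heavily in double precision; if the observed decay rate \(N^{\alpha -3}\) holds, the last printed column stays roughly bounded):

```python
import mpmath as mp
mp.mp.dps = 60          # high precision: the alternating sum cancels heavily

def c0(N, a):
    s = mp.mpf(0)
    for m in range(N + 2):
        s += (-1)**m * mp.gamma(N + a + m + 2) / (
             mp.factorial(m) * mp.factorial(N + 1 - m) * mp.gamma(m + 3))
    return 2 * mp.gamma(N + a / 2 + 2) / (mp.gamma(N + a + 2) * mp.gamma(a / 2 + 1)) * s

a = 1.45                # cf. Fig. 6
for N in (10, 20, 40, 80):
    val = c0(N, a)
    print(N, mp.nstr(val), mp.nstr(val * N**(3 - a)))
```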
Fig. 5 \(\alpha =1.01\): \({_{N+1}^{(\frac{\alpha }{2},\frac{\alpha }{2})}c_0^{(\frac{\alpha }{2},1-\frac{\alpha }{2})}}\) (blue) decays faster than \(N^{-1.99}\) (red)

Fig. 6 \(\alpha =1.45\): \({_{N+1}^{(\frac{\alpha }{2},\frac{\alpha }{2})}c_0^{(\frac{\alpha }{2},1-\frac{\alpha }{2})}}\) (blue) decays faster than \(N^{-1.55}\) (red)

Fig. 7 \(\alpha =1.99\): \({_{N+1}^{(\frac{\alpha }{2},\frac{\alpha }{2})}c_0^{(\frac{\alpha }{2},1-\frac{\alpha }{2})}}\) (blue) decays as fast as \(N^{-1.01}\) (red)

We estimate it numerically (see Figs. 5, 6 and 7) and observe that:

$$\begin{aligned} {_{N+1}^{\left( \frac{\alpha }{2},\frac{\alpha }{2}\right) }c_0^{\left( \frac{\alpha }{2},1-\frac{\alpha }{2}\right) }} \lesssim N^{\alpha -3}, \ \hbox {for} \ 1<\alpha <2, \end{aligned}$$

and by (2.11), we have:

$$\begin{aligned} C_0 \lesssim N^{{\frac{1}{2}}\alpha -3}=o(\phi _1). \end{aligned}$$

By Hölder's inequality, we have:

$$\begin{aligned} \phi _2 \leqslant \int _{-1}^x |\phi _1(t)|\, dt + |C_0| \leqslant 2\phi _1+o(\phi _1). \end{aligned}$$
(4.12)

If \(m \geqslant 3\), the \(\phi _m(x)\) are fractional integrals, so, similarly to (3.10), we have:

$$\begin{aligned} |\phi _m(x)| \leqslant \Big ( {_{-1}I_x^{m-2} +_xI_1^{m-2}} \Big ) |\phi _2(x)| \leqslant \frac{2^{m-1}}{\Gamma (m-1)}\phi _2. \end{aligned}$$
(4.13)

So, for the \((m+1)\)-th term in the infinite series of \(^R D^{\alpha }(u(x)-u_N(x))\), we have:

$$\begin{aligned} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \leqslant \frac{\Gamma (m+1) 2^{m-1}}{\Gamma (m-1) ({\mathcal {D}}_{\rho })^{m+1}} \phi _2=\frac{1}{2 {\mathcal {D}}_{\rho }} \frac{(m-1)m \cdot 2^{m}}{({\mathcal {D}}_{\rho })^{m}} \phi _2, \end{aligned}$$

if we require \({\mathcal {D}}_{\rho }>2\), i.e., \(\rho >3+2\sqrt{2}\), then there exist \(c'_{\rho },d'_{\rho }>0\) such that

$$\begin{aligned} \frac{2d'_{\rho }}{{\mathcal {D}}_{\rho }}<1, \ \hbox {and} \ (m-1)m \leqslant c'_{\rho }(d'_{\rho })^m, \end{aligned}$$

so we have:

$$\begin{aligned} \frac{\Gamma (m+1) \phi _m }{({\mathcal {D}}_{\rho }) ^{m+1}} \leqslant \frac{c'_{\rho }}{2 {\mathcal {D}}_{\rho }} \left( \frac{2d'_{\rho }}{{\mathcal {D}}_{\rho }}\right) ^m \phi _2. \end{aligned}$$
(4.14)

Therefore,

$$\begin{aligned}&\vert ^R D^{\alpha }(u(x)-u_N(x)) \vert \leqslant \nonumber \\&\qquad c(\rho ,v) \frac{ \Gamma (N+2+\alpha )\Gamma (\frac{\alpha +1}{2}) }{\Gamma (N+2) (N+1)^{\frac{\alpha -1}{2}} \rho ^{N+1} } \Big ( \frac{1}{{\mathcal {D}}_{\rho }}\phi _0 + \frac{\alpha }{{\mathcal {D}}_{\rho }^2} \phi _1+ \frac{c'_{\rho }}{2 {\mathcal {D}}_{\rho }} \left( 1+\frac{2d'_{\rho }}{{\mathcal {D}}_{\rho }}\right) ^{\alpha } \phi _2 \Big ). \nonumber \\ \end{aligned}$$
(4.15)

where \(c(\rho ,v) = \frac{C_{\rho }M_v \cdot {\mathcal {L}}({\mathcal {E}}_{\rho })}{2 \pi } (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}\). \(\square \)

Thanks to the analysis in [32], we may quantitatively analyze the gain in convergence rate at the superconvergence points. The result is given in the following theorem.

Theorem 4.3

Under the same assumptions as in Theorem 4.2, when \(0<\alpha <1\), we obtain the following global error estimate:

$$\begin{aligned}&\max _{-1\leqslant x \leqslant 1}{\vert ^R D^{\alpha }(u-u_N)(x) \vert } \nonumber \\&\qquad \lesssim M_v \frac{2\Gamma (\frac{\alpha +1}{2}) (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}}{\Gamma (\alpha +1)(\rho +\rho ^{-1}-2)} (N+1)^{\frac{3\alpha +1}{2}} {\rho }^{-(N+1)} \end{aligned}$$
(4.16)

and the error estimate at the superconvergence points \(\{ \xi _j^{\alpha } \}_{j=0}^N\):

$$\begin{aligned}&\max _{0\leqslant j \leqslant N}{\vert ^R D^{\alpha }(u-u_N)(\xi _j^{\alpha }) \vert } \nonumber \\&\qquad \lesssim M_v \frac{2 \Gamma \left( \frac{\alpha +1}{2}\right) (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}}{\Gamma (\alpha +1)(\rho +\rho ^{-1}-2)} (N+1)^{\alpha -1} {\rho }^{-(N+1)}; \end{aligned}$$
(4.17)

when \(1<\alpha <2\), the global error estimate and the estimate at the superconvergence points \(\{ \xi _j^{\alpha } \}_{j=0}^N\) are, respectively:

$$\begin{aligned}&\max _{-1\leqslant x \leqslant 1}{\vert ^R D^{\alpha }(u-u_N)(x) \vert } \nonumber \\&\qquad \lesssim M_v \frac{2\Gamma \left( \frac{\alpha +1}{2}\right) (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}}{\Gamma (\alpha +1)(\rho +\rho ^{-1}-2)} (N+1)^{\frac{3\alpha +1}{2}} {\rho }^{-(N+1)}; \end{aligned}$$
(4.18)
$$\begin{aligned}&\max _{0\leqslant j \leqslant N}{\vert ^R D^{\alpha }(u-u_N)(\xi _j^{\alpha }) \vert } \nonumber \\&\qquad \lesssim M_v \frac{2 \Gamma \left( \frac{\alpha +1}{2}\right) (1+\rho ^{-2})^{\frac{\alpha +1}{2}} (\rho ^2+\rho ^{-2})^{\frac{1}{2}}}{\Gamma (\alpha +1)(\rho +\rho ^{-1}-2)} (N+1)^{\frac{3\alpha -3}{2}} {\rho }^{-(N+1)}, \end{aligned}$$
(4.19)

where \(M_v=\sup _{z \in {\mathcal {E}}_{\rho }} \vert (1-z^2)^{-\frac{\alpha }{2}}u(z) \vert \).

Proof

When \(0<\alpha <1\), by Lemma 2.9, we have:

$$\begin{aligned}&\phi _0(x) = C_{N+1}^{\frac{\alpha +1}{2}}(x), \qquad \phi _0=\frac{\Gamma (N+\alpha +2)}{\Gamma (N+2)\Gamma (\alpha +1)} \sim \frac{(N+1)^{\alpha }}{\Gamma (\alpha +1)},\\&\phi _1(x) = \frac{1}{\alpha -1} C_{N+2}^{\frac{\alpha -1}{2}}(x), \quad \phi _1 \leqslant \frac{D_{(\alpha -1)/2}}{\alpha -1} (N+1)^{\frac{\alpha -3}{2}}. \end{aligned}$$

So (4.16) is obtained by combining the estimate of \(\phi _0\) with (4.8). If \(x=\xi _j^{\alpha }\), then the leading term vanishes, i.e.

$$\begin{aligned} \phi _0(\xi _j^{\alpha })=P_{N+1}^{\frac{\alpha }{2},\frac{\alpha }{2}}(\xi _j^{\alpha })=0, \ j=0,\ldots ,N, \end{aligned}$$

so that (4.17) is obtained.

When \(1<\alpha <2\), we may use (4.9), (4.10), (4.12), (4.15) and the same argument to get the result. \(\square \)

When \(\alpha >2\), since

$$\begin{aligned} ^R D^{\alpha }=D^{\alpha -\alpha ^*} (^RD^{\alpha ^*}) \end{aligned}$$

where \(\alpha -\alpha ^*\) is an even integer, we can generalize the results.

Corollary 4.4

Let \(\alpha \in (k-1,k)\), \(k \in {\mathbb {Z}}^+\), \(k \ll N\), and let \(\alpha ^*\) be defined as in (4.1). Under the same assumptions as in Theorem 4.2, we have the following:

when k is odd,

$$\begin{aligned} \max _{-1\leqslant x \leqslant 1}{\vert ^R D^{\alpha }(u-u_N)(x) \vert } \lesssim (N-k+2)^{\frac{3\alpha ^*+1}{2}+2(k-1)} \rho ^{-(N-k+2)}, \end{aligned}$$
(4.20)

the superconvergence points \(\{\xi _{i}^{\alpha } \}_{i=0}^{N-k+1}\) are the zeros of \(P_{N-k+2}^{\frac{\alpha ^*}{2}+k-1,\frac{\alpha ^*}{2}+k-1}(x)\), and at those points we have:

$$\begin{aligned} \max _{0\leqslant i \leqslant N}{\vert ^R D^{\alpha }(u-u_N)(\xi _i^{\alpha }) \vert } \lesssim (N-k+2)^{\frac{3\alpha ^*+1}{2}+2(k-2)} \rho ^{-(N-k+2)}; \end{aligned}$$
(4.21)

when k is even,

$$\begin{aligned} \max _{-1\leqslant x \leqslant 1}{\vert ^R D^{\alpha }(u-u_N)(x) \vert } \lesssim (N-k+3)^{\frac{3\alpha ^*+1}{2}+2(k-2)} \rho ^{-(N-k+3)}, \end{aligned}$$
(4.22)

the superconvergence points \(\{\xi _{i}^{\alpha } \}_{i=0}^{N-k+2}\) are the zeros of \(P_{N-k+3}^{\frac{\alpha ^*}{2}+k-2,\frac{\alpha ^*}{2}+k-2}(x)\), and at those points we have:

$$\begin{aligned} \max _{0\leqslant i \leqslant N}{\vert ^R D^{\alpha }(u-u_N)(\xi _i^{\alpha }) \vert } \lesssim (N-k+2)^{\frac{3\alpha ^*+1}{2}+2(k-3)} \rho ^{-(N-k+3)}, \end{aligned}$$
(4.23)

where the constants depend only on \(\alpha \) and \(\rho \).

Fig. 8 Curves \(^RD^\alpha (u-u_{10})(x)\) for the GJF interpolation at 11 points. Left: \(\alpha =0.4\). Right: \(\alpha =1.7\)

Fig. 9 Curves \(^RD^\alpha (u-u_{13})(x)\) for the Petrov–Galerkin method: \(\alpha =1.27\) and 1.84

Fig. 10 Curves \(^RD^\alpha (u-u_{17})(x)\) for the Petrov–Galerkin method: \(\alpha =1.27\) and 1.84

Fig. 11 Superconvergent ratios of the Petrov–Galerkin method for different \(\alpha \)-derivatives

Fig. 12 Curves \(^RD^\alpha (u-u_{13})(x)\) for the collocation method: \(\alpha =1.27\) and 1.84

Fig. 13 Curves \(^RD^\alpha (u-u_{17})(x)\) for the collocation method: \(\alpha =1.27\) and 1.84

Fig. 14 Superconvergent ratios of the collocation method for different \(\alpha \)-derivatives

Proof

When k is odd, by (2.1), (4.7), and Lemma 2.9,

$$\begin{aligned} \phi _m(x)=D^{k-1} \frac{\Gamma (N+2)}{\Gamma (N+2+\alpha ^*)} {^R D_{o}^{\alpha ^*-m}} \left[ c_{\left( \frac{\alpha ^*+1}{2},N+1\right) }{\mathcal {J}}_{N+1}^{-\frac{\alpha ^*}{2},-\frac{\alpha ^*}{2}}(x)\right] . \end{aligned}$$

From (2.8), the leading term is

$$\begin{aligned} \phi _0(x)=(\alpha +1)\cdots (\alpha -1+2k)C_{N-k+2}^{\left( \frac{\alpha ^*+1}{2}+k-1\right) }(x), \ \phi _0 \sim N^{\alpha +1+2(k-1)} \end{aligned}$$

and the second term is

$$\begin{aligned} \phi _1(x)=(\alpha +1)\cdots (\alpha -3+2k)C_{N-k+1}^{\left( \frac{\alpha ^*+1}{2}+k-2\right) }(x), \ \phi _1 \sim N^{\alpha +1+2(k-2)} \end{aligned}$$

and \(\phi _m=o(\phi _1)\) for \(m \geqslant 2\). Therefore, the leading term vanishes at \(\{\xi _{i}^{\alpha } \}_{i=0}^{N-k+1}\), the zeros of \(P_{N-k+2}^{\frac{\alpha ^*}{2}+k-1,\frac{\alpha ^*}{2}+k-1}(x)\). Similarly to the proof of Theorem 4.2, the estimates (4.20) and (4.21) are derived from (4.8).

When k is even,

$$\begin{aligned} \phi _m(x)=D^{k-2} \frac{\Gamma (N+2)}{\Gamma (N+2+\alpha ^*)} {^R D_{e}^{\alpha ^*-m}} \left[ c_{\left( \frac{\alpha ^*+1}{2},N+1\right) }{\mathcal {J}}_{N+1}^{-\frac{\alpha ^*}{2},-\frac{\alpha ^*}{2}}(x)\right] , \end{aligned}$$

and the rest of the proof is similar to the case when k is odd. \(\square \)

4.2 Numerical Validations

To ensure that v(x) is smooth enough, in this subsection we consider the function:

$$\begin{aligned} u(x)=\frac{(1-x^2)^{\frac{\alpha }{2}}}{1+(x+3)^2} \end{aligned}$$

where \(v(x)=\frac{1}{1+(x+3)^2}\), and we set \(\alpha =0.4\) and 1.7, respectively. It is easy to see that v(x) has two simple poles at \(z=-3 \pm i\) in the complex plane; hence it is analytic within the Bernstein ellipse with \(\rho > 3+2\sqrt{2}\). In the numerical example, we set \(N=10\), so \(v_N(x)\) is interpolated at the 11 zeros \(\{ \xi _j^{\alpha } \}_{j=0}^{10}\) of \(P_{11}^{\frac{\alpha }{2},\frac{\alpha }{2}}(x)\). The true \(^RD^{\alpha }u(x)\) is approximated by the sum of 40 terms. Figure 8 (left and right) depicts \(^RD^{\alpha }(u-u_{10})(x)\), where \(u_N(x)\) is the GJF fractional interpolant, for \(\alpha =0.4\) and 1.7, respectively. According to Theorem 4.2, the 11 interpolation points are predicted to be superconvergence points. As in Fig. 1, the errors at those superconvergence points are significantly smaller than the global maximal error. A sketch reproducing this experiment is given below.
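The experiment can be reproduced by expanding v and \(v_N\) in the basis \(P_n^{\frac{\alpha }{2},\frac{\alpha }{2}}\) and applying (2.14) termwise, the 40-term truncation playing the role of the exact \(^RD^{\alpha }u\), as in the text. A sketch (assuming scipy; grid and truncation values are illustrative):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma as G

alpha, N, M = 1.7, 10, 40       # M terms approximate the "true" ^R D^alpha u
a = alpha / 2
v = lambda x: 1.0 / (1 + (x + 3)**2)

def jacobi_norms(deg):          # ||P_n^{a,a}||^2 under the weight (1-x)^a (1+x)^a
    n = np.arange(deg + 1)
    return 2**(2*a + 1) * G(n + a + 1)**2 / ((2*n + 2*a + 1) * G(n + 1) * G(n + 2*a + 1))

def jacobi_coeffs(fvals, xq, wq, deg):   # expansion coefficients via Gauss-Jacobi
    P = np.array([eval_jacobi(k, a, a, xq) for k in range(deg + 1)])
    return (P @ (wq * fvals)) / jacobi_norms(deg)

def riesz_series(c, x):         # (2.14) termwise; C(2) = -1 since 1 < alpha < 2
    return sum(-cn * G(k + alpha + 1) / G(k + 1) * eval_jacobi(k, a, a, x)
               for k, cn in enumerate(c))

xq, wq = roots_jacobi(200, a, a)
cv = jacobi_coeffs(v(xq), xq, wq, M - 1)       # "true" expansion of v (40 terms)
xk, wk = roots_jacobi(N + 1, a, a)             # interpolation nodes
cvN = jacobi_coeffs(v(xk), xk, wk, N)          # v_N via exact discrete orthogonality

xs = np.linspace(-0.999, 0.999, 2001)
err = riesz_series(cv, xs) - riesz_series(cvN, xs)
err_at_nodes = riesz_series(cv, xk) - riesz_series(cvN, xk)
print(np.abs(err).max(), np.abs(err_at_nodes).max())   # node errors much smaller (cf. Fig. 8)
```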

5 Applications

In this section, we focus on applications of superconvergence. Let \(1<\alpha <2\), and we consider the following FDE:

$$\begin{aligned} \left\{ \begin{array}{ll} ^RD^{\alpha }u(x)+u(x)=f(x), \ x \in (-1,1)\\ u(-1)=u(1)=0 \end{array}\right. \end{aligned}$$
(5.1)

We provide two methods to solve the equation: the Petrov–Galerkin method and the spectral collocation method. Our goal is to observe the superconvergence phenomenon in the numerical solutions. In the following numerical examples, we choose f(x) such that \(u(x)=\frac{(1-x^2)^{\frac{\alpha }{2}}}{1+0.5x^2}\) is the true solution. We then plot the error curve \(^RD^{\alpha }(u-u_N)\) and highlight, by '\(*\)', its values at the superconvergence points predicted by Theorem 4.2.

5.1 Petrov–Galerkin Method

For any given \(1<\alpha <2\), we are looking for

$$\begin{aligned} u_N \in S_{\alpha }= span \left\{ {\mathcal {J}}^{-\frac{\alpha }{2},-\frac{\alpha }{2}}_0, \cdots , {\mathcal {J}}^{-\frac{\alpha }{2},-\frac{\alpha }{2}}_N \right\} , \end{aligned}$$

such that \(\forall v \in {\mathbb {P}}_N[-1,1]\), we have:

$$\begin{aligned} (^RD^{\alpha }u_N, v)_{\omega ^{\frac{\alpha }{2},\frac{\alpha }{2}}}+(u_N, v)_{\omega ^{\frac{\alpha }{2},\frac{\alpha }{2}}}=(f, v)_{M,\omega ^{\frac{\alpha }{2},\frac{\alpha }{2}}}, \end{aligned}$$
(5.2)

where the right-hand side is computed by numerical quadrature with sufficiently large M. According to (2.14), by taking \(v=P_i^{\frac{\alpha }{2},\frac{\alpha }{2}}\), \(i=0,1,\ldots ,N\), (5.2) is equivalent to finding \((c_0,c_1,\ldots , c_N)^T \in {\mathbb {R}}^{N+1}\) such that, for \(i=0,1,\ldots ,N\),

$$\begin{aligned} \sum _{j=0}^N c_j \left[ d_j \left( P_j^{\frac{\alpha }{2},\frac{\alpha }{2}}, P_i^{\frac{\alpha }{2},\frac{\alpha }{2}}\right) _{\omega ^{\frac{\alpha }{2},\frac{\alpha }{2}}} + \left( P_j^{\frac{\alpha }{2},\frac{\alpha }{2}}, P_i^{\frac{\alpha }{2},\frac{\alpha }{2}}\right) _{\omega ^{\alpha ,\alpha }}\right] = \left( f, P_i^{\frac{\alpha }{2},\frac{\alpha }{2}}\right) _{\omega ^{\frac{\alpha }{2},\frac{\alpha }{2}}}, \end{aligned}$$
(5.3)

where \(d_j=-\frac{\Gamma (j+1+\alpha )}{\Gamma (j+1)}\). We observe that the stiffness matrix is diagonal and dominates the system.
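The system (5.3) is straightforward to assemble with Gauss–Jacobi quadrature, the stiffness part being diagonal. A minimal sketch (assuming scipy; the load f must be supplied as a callable, e.g. manufactured from the true solution as in the text):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma as G

def petrov_galerkin(alpha, N, f, M=200):
    """Solve (5.3) for the GJF coefficients c_j of u_N = sum_j c_j J_j^{-a,-a}."""
    a = alpha / 2
    xg, wg = roots_jacobi(M, a, a)              # quadrature for the weight w^{a,a}
    xh, wh = roots_jacobi(M, alpha, alpha)      # quadrature for the weight w^{alpha,alpha}
    n = np.arange(N + 1)
    Pg = np.array([eval_jacobi(k, a, a, xg) for k in n])
    Ph = np.array([eval_jacobi(k, a, a, xh) for k in n])
    d = -G(n + 1 + alpha) / G(n + 1)            # d_j from (5.3)
    K = ((wg * Pg) @ Pg.T) * d[None, :]         # K[i,j] = d_j (P_j, P_i)_{w^{a,a}} (diagonal)
    Mm = (wh * Ph) @ Ph.T                       # Mm[i,j] = (P_j, P_i)_{w^{alpha,alpha}}
    b = Pg @ (wg * f(xg))                       # b[i] = (f, P_i)_{w^{a,a}}
    return np.linalg.solve(K + Mm, b)
# u_N(x) = (1 - x^2)^{alpha/2} * sum_j c[j] * P_j^{alpha/2,alpha/2}(x)
```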

We plot the error curves \(^RD^\alpha (u-u_N)\) in Figs. 9 and 10 for \(\alpha =1.27, 1.84\) and \(N=13, 17\), respectively. According to Theorem 4.2, the superconvergence points are predicted to be the zeros of \(P^{\frac{\alpha }{2},\frac{\alpha }{2}}_{N+1}(x)\). We observe that the errors at those points are much smaller than the global maximal error and, moreover, that both the global maximal error and the errors at the superconvergence points increase as \(\alpha \) increases. Figure 11 depicts the reciprocal of (3.13) for \(\alpha =1.1, 1.52, 1.9\), with \(O(N^{-1})\) plotted as a reference slope (since the curves are very close to each other, only three values of \(\alpha \) are shown in Fig. 11). We see that, at the superconvergence points predicted by Theorem 4.2, the convergence rate is \(O(N^{-1})\) faster than the optimal global rate.

5.2 Spectral Collocation Method

For any given \(1<\alpha <2\), according to the definition of the GJF fractional interpolation of order \(\frac{\alpha }{2}\), we have:

$$\begin{aligned} u_N(x)=\sum _{j=0}^N \hat{{\ell }}_j(x) {v_j}:=\sum _{j=0}^N (1-x^2)^{\frac{\alpha }{2}} {\ell }_j(x) {v_j}, \end{aligned}$$
(5.4)

where \(\ell _j \in {\mathbb {P}}_N[-1,1]\) is the Lagrange basis function satisfying

$$\begin{aligned} {\ell }_j(x_i)=\delta _{ij}, \ i,j=0,1,\ldots ,N. \end{aligned}$$

Therefore, we are looking for \(V=(v_0,v_1, \cdots , v_N)^T \in {\mathbb {R}}^{N+1}\), such that

$$\begin{aligned} (^RD^{\alpha }u_N)(x_i)+u_N(x_i)=f(x_i), \quad { i=0,1,\ldots ,N,} \end{aligned}$$
(5.5)

where \(u_N(x_i)=(1-x_i^2)^{\frac{\alpha }{2}} v_i\). Then, (5.5) is equivalent to solving the linear system:

$$\begin{aligned} (D+\Lambda )V=F \end{aligned}$$

where \(D,\Lambda \in {\mathcal {M}}_{N+1}({\mathbb {R}})\), \(F \in {\mathbb {R}}^{N+1}\), and for \(i,j=0,1,\ldots ,N\),

$$\begin{aligned} D(i,j)=(^RD^{\alpha }\hat{{\ell }}_i)(x_j), \ \ \Lambda (i,i)=(1-x_i^2)^{\frac{\alpha }{2}}, \ \ F(i)=f(x_i), \end{aligned}$$
(5.6)

and the differentiation matrix D can be computed analytically by (2.14). Here, we set \(\{ x_i \}_{i=0}^N\) to be the zeros of \(P^{\frac{\alpha }{2},\frac{\alpha }{2}}_{N+1}(x)\). A sketch of the assembly follows.
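The matrices in (5.6) can be formed by expanding each Lagrange function \(\ell _j\) in the basis \(P_n^{\frac{\alpha }{2},\frac{\alpha }{2}}\) through the exact discrete orthogonality of the Gauss–Jacobi nodes, and then applying (2.14). A sketch (assuming scipy, with f supplied as a callable):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma as G

def collocation_solve(alpha, N, f):
    """Solve (5.5)-(5.6) at the zeros of P_{N+1}^{a,a}, a = alpha/2; returns nodes and V."""
    a = alpha / 2
    x, w = roots_jacobi(N + 1, a, a)            # collocation nodes
    n = np.arange(N + 1)
    gam = 2**(2*a + 1) * G(n + a + 1)**2 / ((2*n + 2*a + 1) * G(n + 1) * G(n + 2*a + 1))
    P = np.array([eval_jacobi(k, a, a, x) for k in n])   # P[k, i] = P_k(x_i)
    A = (w * P) / gam[:, None]                  # A[k, j]: Jacobi coefficients of ell_j
    d = -G(n + 1 + alpha) / G(n + 1)            # (2.14) with C(2) = -1 for 1 < alpha < 2
    D = (d[:, None] * P).T @ A                  # D[i, j] = ^R D^alpha lhat_j (x_i)
    Lam = np.diag((1 - x**2)**a)
    V = np.linalg.solve(D + Lam, f(x))
    return x, V                                 # u_N(x_i) = (1 - x_i^2)^a * V[i]
```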

Error curves \(^RD^\alpha (u-u_N)\) are plotted in Figs. 12 and 13 for \(\alpha =1.27, 1.84\) and \(N=13, 17\), respectively. As predicted by Theorem 4.2, the superconvergence points \(\{ x_i \}_{i=0}^N\) are the zeros of \(P^{\frac{\alpha }{2},\frac{\alpha }{2}}_{N+1}(x)\). We can see that the errors at those points are significantly smaller than the global maximal error. Furthermore, the superconvergence points perform much better for the collocation method than for the Petrov–Galerkin method: errors at the superconvergence points are much closer to zero in the collocation case, as demonstrated in Fig. 14, which shows the reciprocal of the ratio (3.13) with \(O(N^{-3})\) and \(O(N^{-4})\) as reference slopes. We observe that the convergence rate at the superconvergence points for the collocation method is about \(O(N^{-3})\) better than the optimal global rate. One possible reason is that the interpolation points and the superconvergence points of the \(\alpha \)-th Riesz derivative are identical.

6 Concluding Remarks

In this work, we investigated superconvergence of \(u-u_N\) under Riesz fractional derivatives. We identified superconvergence points and quantified the improved convergence rate at those points. When \(0<\alpha <1\), taking \(u_N(x)\) to be either the polynomial interpolant or the GJF fractional interpolant, the improvements in convergence rate are \(O(N^{-2})\) and \(O(N^{-\frac{\alpha +3}{2}})\), respectively. When \(\alpha >1\), only the GJF fractional interpolant is discussed, due to the singularity, and the improvement in the convergence rate is \(O(N^{-2})\). In particular, when \(0<\alpha <2\), for GJF fractional interpolation the superconvergence points coincide with the interpolation points. In addition, when we apply this superconvergence knowledge to the numerical solution of model FDEs, our theory accurately predicts the locations of the superconvergence points. Moreover, we notice that for the Petrov–Galerkin method the convergence improvement at the superconvergence points is only \(O(N^{-1})\), inferior to the \(O(N^{-2})\) improvement for the interpolation, while for the spectral collocation method it is \(O(N^{-3})\) to \(O(N^{-4})\), superior to the \(O(N^{-2})\) improvement for the interpolation.

It seems that polynomial-based interpolation plays only a limited role in solving FDEs. We believe that GJF-type fractional interpolation is preferable in fractional calculus. We hope that our findings will be useful in the numerical solution of FDEs, especially when using data at the predicted superconvergence points.