1 Introduction

We consider in this paper the fractional Laplacian equation

$$\begin{aligned} {\left\{ \begin{array}{ll} (-\Delta )^s u(x)=f(x), \quad x=(x_1,x_2,\ldots ,x_d)\in \Omega ,\\ u|_{\partial \Omega }=0, \end{array}\right. } \end{aligned}$$
(1.1)

where \(0<s<1\), \(\Omega \) is an open subset of \({\mathbb {R}}^d,~d\ge 1\), and \((-\Delta )^s\) is defined through the spectral decomposition of \(-\Delta \) with homogeneous Dirichlet boundary conditions. A main difficulty in solving the above equation is that \((-\Delta )^s\) is a non-local operator (see [5, 6]), so analytical and numerical techniques developed for regular PDEs with locally defined derivatives cannot be directly applied. To circumvent this non-local difficulty, Caffarelli and Silvestre [6] (cf. also [4, 7, 25]) showed that the solution u(x) of the fractional Laplacian equation can be obtained through an extension problem on \({\mathbb {R}}^d\times [0,\infty )\), i.e. \(u(x)=U(x,0)\), where U(x, y) solves:

$$\begin{aligned} {\left\{ \begin{array}{ll} \nabla \cdot \big (y^\alpha \nabla U(x,y)\big )=0, &{}\quad \text {in} ~{\mathcal {D}}=\Omega \times (0,\infty ),\\ {\mathcal {N}} U:=-\lim \limits _{y\rightarrow 0}y^\alpha U_y(x,y)= d_s f(x), &{}\quad \text {on} \,\Omega \times \{0\},\\ U=0,&{} \quad \text {on}\,\partial _L{\mathcal {D}}=\partial \Omega \times [0,\infty ), \end{array}\right. } \end{aligned}$$
(1.2)

where \(\alpha =1-2s\) and \(d_s=2^{1-2s}{\Gamma (1-s)}/{\Gamma (s)}\). Hence, we only have to solve (1.2), which involves only regular derivatives. However, (1.2) presents two main difficulties: (i) it involves a degenerate/singular weight \(y^\alpha \), and the solution is weakly singular at \(y=0\); and (ii) it is \((d+1)\)-dimensional while the original problem is d-dimensional.

In a series of papers, Nochetto et al. [21, 22] made a systematic study of the finite-element approximation of the Caffarelli–Silvestre extension (1.2), proposed an adaptive finite-element method for solving the \((d+1)\)-dimensional extended problems, and derived rigorous error estimates. Chen et al. [8] developed an efficient solver via multilevel techniques to deal with the extension problem; see also a related work in [3, Sect. 2.7] and the references therein. However, the achievable accuracy is limited by the low-order finite-element approximations used in these works. Note also that an interesting hybrid FEM-spectral method, based on a clever approximation of the Laplace eigenvalues for the extension problem, is developed in [2]. Some alternative approaches for the spectral fractional Laplacian problem (1.1) include the Dunford–Taylor integral [3, 17, 27] and model reduction using Kato’s formula [11].

The aim of this paper is to develop an efficient and accurate numerical method for the Caffarelli–Silvestre extension (1.2). In particular, we will develop suitable strategies to overcome the two main difficulties outlined above:

  • Since \(y^\alpha \) is exactly the weight associated with the generalized Laguerre functions, it is natural to use generalized Laguerre functions as basis functions in the y-direction. This leads to sparse stiffness and mass matrices in the y-direction. However, an expansion in the generalized Laguerre functions cannot accurately approximate the weakly singular solution at \(y=0\). Therefore, we first determine the singular behavior at \(y=0\) of the solution of (1.2), and then enrich the approximation space by adding a few leading singular terms of the solution. We will show that the convergence rate of the enriched spectral method increases by one for each additional singular term in the approximation space.

  • Since the extended domain is of tensor-product type, we shall use the matrix diagonalization method [16, 18, 23] to reduce the \((d+1)\)-dimensional problem to a sequence of d-dimensional Poisson-type equations that can be solved by one's favorite method in the original domain. Due to the high accuracy of the discretization in the y-direction, the number of d-dimensional Poisson-type equations that need to be solved is relatively small, making the total algorithm accurate as well as efficient.

More precisely, our enriched spectral method in the extended direction, coupled with one's favorite discretization in the original domain for (1.2), enjoys the following advantages: (i) accuracy: it can achieve a high-order convergence rate in the extended y-direction despite the weak singularity at \(y=0\); (ii) efficiency: it only requires solving a small number of Poisson-type equations in the original domain; (iii) flexibility: it can be used with any approximation method in the original domain. We will present ample numerical results to show that this method is very effective in dealing with the fractional Laplacian problem. Note that the numerical techniques proposed in this work can also be applied to more general fractional elliptic problems through the Caffarelli–Silvestre extension (see [3, 4, 21]).

The paper is organized as follows. In the next section, we introduce some notations, describe the Caffarelli–Silvestre extension, and recall basic properties and approximation results of the generalized Laguerre functions. In Sect. 3, we develop a fast spectral Galerkin method using the generalized Laguerre functions for the extension problem, conduct an error analysis, and present some supporting numerical results. In Sect. 4, we construct an efficient enriched spectral method to deal with the weak singularity at \(y=0\) for (1.2), carry out a detailed analysis, and present numerical results to validate our analysis. We conclude with a few remarks in Sect. 5.

2 Preliminaries

2.1 Some Functional Spaces and Weak Formulation of the Caffarelli–Silvestre Extension

Let \(\Omega \) be an open, bounded and connected domain in \({\mathbb {R}}^d~(d\ge 1)\) with Lipschitz boundary \(\partial \Omega \), and \( \Lambda :=(0,\infty )\). We denote the semi-infinite cylinder in \({\mathbb {R}}^{d+1}\) and its lateral boundary by

$$\begin{aligned} {\mathcal {D}}:=\Omega \times \Lambda ,\quad \partial _L {\mathcal {D}}:=\partial \Omega \times {\bar{\Lambda }}. \end{aligned}$$
(2.1)

We denote a vector in \({\mathbb {R}}^{d+1}\) by

$$\begin{aligned} (x, y)=(x_1,x_2,\ldots ,x_d,y). \end{aligned}$$

Let \({\mathcal {Z}}\) be either \(\Lambda \) or \(\Omega \) or \({\mathcal {D}}\), and \(\omega \) be a positive weight function. We denote

$$\begin{aligned} \begin{aligned} \big (p, q\big )_{\omega ,{\mathcal {Z}}}:=\int _{{\mathcal {Z}}} ~p(z) q(z)\omega (z) \mathrm{d}z,\quad \Vert p\Vert _{\omega ,{\mathcal {Z}}}=\big (p, p\big )^{1/2}_{\omega ,{\mathcal {Z}}}, \end{aligned} \end{aligned}$$
(2.2)

and

$$\begin{aligned} \begin{aligned} {H}^1_{\omega }({\mathcal {Z}}):=\{v\in L^2_{\omega }({\mathcal {Z}}):~\nabla v\in L^2_{\omega }({\mathcal {Z}})\}, \end{aligned} \end{aligned}$$
(2.3)

equipped with the norms

$$\begin{aligned} \begin{aligned} \Vert v\Vert _{\omega ,{\mathcal {Z}}}:=\big ( v, v\big )^{1/2}_{\omega ,{\mathcal {Z}}},\quad \Vert v\Vert _{{1,\omega },{\mathcal {Z}}}:=(\Vert v\Vert ^2_{\omega ,{\mathcal {Z}}}+\Vert \nabla v\Vert ^2_{\omega ,{\mathcal {Z}}})^{1/2}. \end{aligned} \end{aligned}$$
(2.4)

We will omit the weight \(\omega \) from the notations when \(\omega \equiv 1\).

In order to study the extension problem (1.2), we define

$$\begin{aligned} \begin{aligned} {{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}}):=\big \{v:~\nabla v\in L^2_{y^\alpha }({\mathcal {D}}),\ \lim \limits _{y\rightarrow \infty }v(x,y)=0,\ v(x,y)|_{\partial _L{\mathcal {D}}}=0\big \} \end{aligned} \end{aligned}$$
(2.5)

with the norm

$$\begin{aligned} \Vert v\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}=\Vert \nabla v\Vert _{y^{\alpha },{\mathcal {D}}}. \end{aligned}$$
(2.6)

Denote the trace of any function \(v\in {{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})\) by

$$\begin{aligned} {\mathbf {tr}}\{ v\}(x):=v(x,0). \end{aligned}$$

We recall the following result [22, Proposition 2.5]:

Lemma 2.1

Let \(\Omega \subset {\mathbb {R}}^d\) be a bounded Lipschitz domain and \(\alpha =1-2s\). The trace operator \(\mathbf {tr}\) satisfies \(\mathbf {tr}{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})={\mathbb {H}}^s(\Omega )\) and

$$\begin{aligned} \Vert \mathbf {tr} v\Vert _{{\mathbb {H}}^s(\Omega )}\le c\Vert v\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})},\quad \forall v\in {{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}}). \end{aligned}$$

Then, the weak formulation of (1.2) is: Given \(f\in {\mathbb {H}}^{-s}(\Omega )\), find \(U\in {{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})\) such that

$$\begin{aligned} \big (y^\alpha \nabla U,\nabla V\big )_{\mathcal {D}}=d_s\big (f,\mathbf {tr}\{V\}\big )_\Omega ,\quad \forall V\in {{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}}). \end{aligned}$$
(2.7)

The well-posedness of the above weak formulation is a direct consequence of the Lax–Milgram lemma and Lemma 2.1.

2.2 Fractional Laplace Operator (Fractional Laplacian)

There are essentially two ways to define the fractional Laplacian operator \((-\Delta )^s\) in a bounded domain: one in integral form and the other in spectral form [4, 20]. In this paper, we will consider the latter. More precisely, let \(\{\lambda _n,\varphi _n\}\) be the eigenvalues and orthonormal eigenfunctions of the Laplacian with homogeneous Dirichlet boundary conditions, i.e.,

$$\begin{aligned} -\Delta \varphi _n=\lambda _n \varphi _n ,\quad \varphi _n|_{\partial \Omega }=0; \quad ( \varphi _n, \varphi _n)=1. \end{aligned}$$
(2.8)

It is well-known that \(0<\lambda _1\le \lambda _2\le \cdots \le \lambda _n\rightarrow +\infty \), and \(\{\varphi _n\}\) forms an orthonormal basis in \(L^2(\Omega )\) space (see [12]). Then, the fractional Laplacian is defined by:

$$\begin{aligned} (-\Delta )^s v=\sum _{n=1}^\infty {\hat{v}}_n~\lambda ^s_n~ \varphi _n,\quad {\hat{v}}_n=\int _\Omega v\varphi _n. \end{aligned}$$
(2.9)
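For illustration, the following minimal Python sketch applies definition (2.9) in one dimension, assuming \(\Omega =(-1,1)\) so that the Dirichlet eigenpairs are known in closed form (\(\lambda _n=(n\pi /2)^2\), \(\varphi _n(x)=\sin (n\pi (x+1)/2)\)); the function name and the trapezoidal quadrature are our own choices, not part of the method.

```python
import numpy as np

def frac_laplacian_1d(u, x, s, n_modes=200):
    """Apply the spectral fractional Laplacian (2.9) on Omega = (-1, 1).

    Dirichlet eigenpairs on (-1, 1): lambda_n = (n*pi/2)**2 and
    phi_n(x) = sin(n*pi*(x+1)/2), which are orthonormal in L^2(-1, 1).
    The coefficients hat{v}_n are computed by the composite trapezoidal rule,
    so x should be a fine uniform grid containing both endpoints."""
    dx = x[1] - x[0]
    result = np.zeros_like(u)
    for n in range(1, n_modes + 1):
        lam = (n * np.pi / 2) ** 2
        phi = np.sin(n * np.pi * (x + 1) / 2)
        w = u * phi
        u_hat = dx * (np.sum(w) - 0.5 * (w[0] + w[-1]))   # hat{v}_n = int u*phi_n dx
        result += u_hat * lam ** s * phi
    return result

# check: sin(pi*x) is a Dirichlet eigenfunction with eigenvalue pi^2, so
# (-Delta)^s sin(pi*x) = pi^(2s) sin(pi*x)
x = np.linspace(-1.0, 1.0, 4001)
s = 0.3
err = frac_laplacian_1d(np.sin(np.pi * x), x, s) - np.pi ** (2 * s) * np.sin(np.pi * x)
print(np.max(np.abs(err)))   # small (quadrature error only)
```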

We also define the Hilbert space associated with the spectrum of the Laplacian:

$$\begin{aligned} {\mathbb {H}}^r(\Omega )=\big \{v=\sum _{n=1}^\infty {\hat{v}}_n~ \varphi _n\in L^2(\Omega ): ~|v|^2_{{\mathbb {H}}^r(\Omega )}=\sum _{n=1}^\infty (\lambda _n)^r~|{\hat{v}}_n|^2<\infty \big \}. \end{aligned}$$

Obviously, for any \(s<r\), we have

$$\begin{aligned} |v|_{{\mathbb {H}}^s(\Omega )}\le c |v|_{{\mathbb {H}}^r(\Omega )}. \end{aligned}$$
(2.10)

2.3 Generalized Laguerre Functions

Since (2.7) involves a singular weight function \(y^\alpha \), it is natural to use the generalized Laguerre functions \(\{\widehat{{\mathscr {L}}}^{(\alpha )}_n(y)\}\), which are orthogonal with respect to the weight \(y^\alpha \).

We start by reviewing some basic properties of the generalized Laguerre functions \(\widehat{{\mathscr {L}}}^{(\alpha )}_n(y):=e^{-\frac{y}{2}}{\mathscr {L}}^{(\alpha )}_n(y)\), where \({\mathscr {L}}^{(\alpha )}_n(y)\) is the generalized Laguerre polynomial [1, 24, 26]. It is clear that \(\{\widehat{{\mathscr {L}}}^{(\alpha )}_n(y)\}\) forms a complete basis in \(L^2_{y^\alpha }(\Lambda )\), and they are mutually orthogonal with respect to the weight \(y^\alpha \):

$$\begin{aligned} \int _{0}^\infty \widehat{{\mathscr {L}}}_n^{(\alpha )}(y)\, \widehat{{\mathscr {L}}}_m^{(\alpha )}(y) \,y^\alpha \, \mathrm{d}y= \gamma _n^{(\alpha )} \delta _{mn},\quad \gamma _n^{(\alpha )}=\frac{\Gamma (n+\alpha +1)}{\Gamma (n+1)}. \end{aligned}$$
(2.11)

The generalized Laguerre functions can be efficiently and stably computed by the three-term recurrence formula

$$\begin{aligned} \begin{aligned}&\widehat{{\mathscr {L}}}_{-1}^{(\alpha )}(y)\equiv 0,\qquad \widehat{{\mathscr {L}}}_0^{(\alpha )}(y)=e^{-y/2},\\&\widehat{{\mathscr {L}}}_{n+1}^{(\alpha )}(y)=\frac{2n+\alpha +1-y}{n+1}\widehat{{\mathscr {L}}}_n^{(\alpha )}(y)-\frac{n+\alpha }{n+1}\widehat{{\mathscr {L}}}_{n-1}^{(\alpha )}(y). \end{aligned} \end{aligned}$$
(2.12)
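The recurrence (2.12) translates directly into code; the following is a minimal Python sketch (the helper name laguerre_funcs is ours).

```python
import numpy as np

def laguerre_funcs(alpha, N, y):
    """Evaluate the generalized Laguerre functions hatL_n^{(alpha)}(y), n = 0..N,
    by the three-term recurrence (2.12).  Returns an array of shape (N+1, len(y))."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    L = np.zeros((N + 1, y.size))
    L[0] = np.exp(-y / 2)                          # hatL_0^{(alpha)} = e^{-y/2}
    if N >= 1:
        L[1] = (alpha + 1 - y) * L[0]              # recurrence (2.12) with n = 0
    for n in range(1, N):
        L[n + 1] = ((2 * n + alpha + 1 - y) * L[n] - (n + alpha) * L[n - 1]) / (n + 1)
    return L
```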

Denote \(\widehat{{\mathcal {P}}}^y_N=\text {span}\{\widehat{{\mathscr {L}}}_n^{(\alpha )}(y), 0\le n\le N\}\). For any \(u\in L^2_{y^\alpha }(\Lambda )\), we define the \(L^2_{y^\alpha }\)-orthogonal projection \(\pi _N^y u\in \widehat{{\mathcal {P}}}^y_N\) by

$$\begin{aligned} \big (\pi _N^y u-u,v\big )_{y^\alpha ,\Lambda }=0\quad \forall v\in \widehat{{\mathcal {P}}}^y_N. \end{aligned}$$
(2.13)

Next, we define a generalized derivative by \({\widehat{\partial }}_y={\partial }_y+\frac{1}{2}\) and the corresponding non-uniformly weighted Sobolev space

$$\begin{aligned} {\widehat{B}}^m_\alpha (\Lambda ):=\big \{v:~{\widehat{\partial }}^{\,l}_y v\in L^2_{y^{\alpha +l}}(\Lambda ),~0\le l\le m\big \},\quad \alpha >-1,~m\in {\mathbb {N}}. \end{aligned}$$

Lemma 2.2

[24, Theorem 7.9] For any \(u\in {\widehat{B}}^m_\alpha (\Lambda )\) and \(0\le m\le N+1\),

$$\begin{aligned} \Vert {\widehat{\partial }}^{\,l}_y(u-\pi _N^y u)\Vert _{y^{\alpha +l},\Lambda }\le \sqrt{\frac{(N-m+1)!}{(N-l+1)!}} \Vert {\widehat{\partial }}^m_y u\Vert _{y^{\alpha +m},\Lambda },\quad 0\le l \le m. \end{aligned}$$
(2.14)

3 A First Galerkin Approximation

In this section, we investigate the Galerkin approximation to (2.7) with generalized Laguerre functions as basis functions in the extended dimension. We shall first derive a fast algorithm for solving the resultant linear system, and then derive an error analysis which reveals, in particular, how the convergence rate is affected by the singularity at \(y=0\).

3.1 Galerkin Approximation

Since the domain \({\mathcal {D}}\) is a tensor-product domain, it is natural to use a tensor-product approximation. Let \(X_h\) be a suitable approximation space in the x-direction,

$$\begin{aligned} X_h=\text {span}\{ \phi ^{x}_m(x): 1\le m\le M\}, \end{aligned}$$
(3.1)

and

$$\begin{aligned} Y_N=\text {span}\{ \phi ^{y}_n({y})=\widehat{{\mathscr {L}}}^{(\alpha )}_{n-2}(y)-\widehat{{\mathscr {L}}}^{(\alpha )}_{n-1}(y): 1\le n\le N\}. \end{aligned}$$
(3.2)

Then, the Galerkin method for (2.7) is to find \(U^h_{N}\in X_{h}\times Y_{N}\) such that

$$\begin{aligned} \big (y^\alpha \nabla U^h_N,\nabla V\big )_{\mathcal {D}}=d_s\big (f,\mathbf {tr}\{V\}\big )_\Omega \quad \forall ~ V\in X_{h}\times Y_{N}. \end{aligned}$$
(3.3)

We denote

$$\begin{aligned} \begin{array}{llll} \mathbf {S}^{x}=(s^{x}_{ml})_{M\times M},&{}\mathbf {M}^{x}=(m^x_{ml})_{M\times M},&{} s^{x}_{ml}=\big (\nabla _{x}\phi ^{x}_l,\nabla _{x}\phi ^{x}_m\big )_\Omega , &{} m^{x}_{ml}=\big (\phi ^{x}_l,\phi ^{x}_m\big )_\Omega , \\ \mathbf {S}^{y}=(s^{y}_{np})_{N\times N},&{}\mathbf {M}^{y}=(m^y_{np})_{N\times N}, &{} s^{y}_{np}=\big (y^{\alpha }\partial _y\phi ^y_p,\partial _y\phi ^y_n\big )_\Lambda , &{} m^{y}_{np}=\big (y^{\alpha }\phi ^y_p, \phi ^y_n\big )_\Lambda ,\\ \mathbf {F}=(f_{mn})_{M\times N},&{} &{}f_{mn}=d_s\phi ^y_n(0)\,\big ( f, \phi ^{x}_m\big )_\Omega ,\\ \end{array} \end{aligned}$$

where \(\mathbf {M}^{x}\) and \(\mathbf {S}^{x}\) are the mass and stiffness matrices in the x-direction(s), and \(\mathbf {M}^{y}\) and \(\mathbf {S}^{y}\) are the mass and stiffness matrices in the extended y-direction. We start by deriving explicit formulas for \(\mathbf {M}^y\) and \(\mathbf {S}^y\).

Using the relation [24, (7.26)], we find

$$\begin{aligned} {\widehat{\partial }}_y \widehat{{\mathscr {L}}}^{(\alpha )}_n(y)=({\partial }_y+\frac{1}{2}) \widehat{{\mathscr {L}}}^{(\alpha )}_n(y)=-\widehat{{\mathscr {L}}}^{(\alpha +1)}_{n-1}(y)=-\sum _{k=0}^{n-1}\widehat{{\mathscr {L}}}^{(\alpha )}_{k}(y), \end{aligned}$$
(3.4)

which implies

$$\begin{aligned} \partial _y\widehat{{\mathscr {L}}}^{(\alpha )}_{n}=-\sum _{j=0}^{n-1}\widehat{{\mathscr {L}}}^{(\alpha )}_{j}-\frac{1}{2}\widehat{{\mathscr {L}}}^{(\alpha )}_{n}. \end{aligned}$$

Hence,

$$\begin{aligned} \partial _y\phi ^y_{n}=\partial _y\widehat{{\mathscr {L}}}^{(\alpha )}_{n-2}-\partial _y\widehat{{\mathscr {L}}}^{(\alpha )}_{n-1} =\frac{1}{2}(\widehat{{\mathscr {L}}}^{(\alpha )}_{n-2}+\widehat{{\mathscr {L}}}^{(\alpha )}_{n-1}). \end{aligned}$$
(3.5)

Then, by the orthogonality (2.11), we find that both \(\mathbf {M}^y\) and \(\mathbf {S}^y\) are symmetric tridiagonal with non-zero entries given by

$$\begin{aligned} s^{y}_{np}= \frac{1}{4}{\left\{ \begin{array}{ll} \gamma ^{(\alpha )}_{n-2}+\gamma ^{(\alpha )}_{n-1},&{} p=n,\\ \gamma ^{(\alpha )}_{n-1},&{}p=n+1,\\ \gamma ^{(\alpha )}_{n-2},&{}p=n-1, \end{array}\right. }\quad m^{y}_{np}={\left\{ \begin{array}{ll} \gamma ^{(\alpha )}_{n-2}+\gamma ^{(\alpha )}_{n-1},&{} p=n,\\ -\gamma ^{(\alpha )}_{n-1},&{}p=n+1,\\ -\gamma ^{(\alpha )}_{n-2},&{}p=n-1,\end{array}\right. } \end{aligned}$$
(3.6)

where \(\gamma ^{(\alpha )}_{-1}=0\) and \(\gamma ^{(\alpha )}_{k}=\Gamma (k+\alpha +1)/\Gamma (k+1)\).
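For concreteness, a minimal Python sketch assembling the tridiagonal matrices in (3.6) is given below; the helper names are ours, and the ratio of Gamma functions is evaluated through gammaln to avoid overflow for large indices.

```python
import numpy as np
from scipy.sparse import diags
from scipy.special import gammaln

def gamma_coef(k, alpha):
    """gamma_k^{(alpha)} = Gamma(k+alpha+1)/Gamma(k+1), with gamma_{-1} = 0."""
    return 0.0 if k < 0 else np.exp(gammaln(k + alpha + 1) - gammaln(k + 1))

def laguerre_matrices(alpha, N):
    """Assemble the tridiagonal stiffness S^y and mass M^y of (3.6) for the basis
    phi_n^y = hatL_{n-2}^{(alpha)} - hatL_{n-1}^{(alpha)}, n = 1..N."""
    g = np.array([gamma_coef(k, alpha) for k in range(-1, N)])   # g[j] = gamma_{j-1}
    diag = g[:-1] + g[1:]            # gamma_{n-2} + gamma_{n-1}, n = 1..N
    off = g[1:-1]                    # gamma_{n-1},               n = 1..N-1
    Sy = diags([off / 4, diag / 4, off / 4], offsets=[-1, 0, 1]).toarray()
    My = diags([-off, diag, -off], offsets=[-1, 0, 1]).toarray()
    return Sy, My
```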

Setting in (3.3)

$$\begin{aligned} U^h_N(x,y)=\sum _{m=1}^M\sum _{n=1}^N {\tilde{u}}_{mn} \phi ^{x}_m(x) \phi ^y_n(y), \end{aligned}$$

and taking, successively, \(V(x,y)=\phi ^{x}_i(x) \phi ^y_j(y)\), we find that (3.3) reduces to the matrix system

$$\begin{aligned} \mathbf {S}^{x} \,\mathbf {U}\,{\mathbf {M}^y}+\mathbf {M}^{x}\,\mathbf {U}\,\mathbf {S}^y=\mathbf {F}. \end{aligned}$$
(3.7)

3.2 Fast Algorithm with Diagonalization in the Extended Direction

The linear system (3.7) can be efficiently solved by the matrix decomposition method [18], which is also known in the field of spectral methods as the matrix diagonalization method (cf. [16, 23]), as follows.

We first precompute the generalized eigenvalue problem

$$\begin{aligned} \mathbf {M}^y \,\mathbf {e}_i=\lambda _i \mathbf {S}^y \,\mathbf {e}_i \big |_{i=1,2,\ldots ,N}\quad \Longleftrightarrow \quad \mathbf {M}^y\, \mathbf {E}=\mathbf {S}^y\, \mathbf {E}\,{{\varvec{\Lambda }}}, \end{aligned}$$
(3.8)

where \({{\varvec{\Lambda }}}\) is a diagonal matrix whose diagonal entries are eigenvalues \(\{\lambda _i\}\), and the i-th column of \(\mathbf {E}\) is the eigenvector \(\mathbf {e}_i\). Then, setting \(\mathbf{U} =\mathbf{VE} ^{T}\) and using (3.8), we can rewrite (3.7) as

$$\begin{aligned} \mathbf {S}^{x} \,\mathbf {V}\,{{\varvec{\Lambda }}}\mathbf {E}^T\mathbf {S}^{y}+\mathbf {M}^{x}\,\mathbf {V} \mathbf {E}^T\mathbf {S}^{y}=\mathbf {F}, \end{aligned}$$
(3.9)

which, thanks to the fact \(\mathbf {E}^T\mathbf {E}=I\), is equivalent to

$$\begin{aligned} \mathbf {S}^{x} \,\mathbf {V}\,{{\varvec{\Lambda }}}+\mathbf {M}^{x}\,\mathbf {V}=\mathbf {G}:=\mathbf {F}(\mathbf {S}^{y})^{-1} \mathbf {E}. \end{aligned}$$
(3.10)

Letting \(\mathbf {v}_i\) and \(\mathbf {g}_i\) be the i-th columns of \(\mathbf {V}\) and \(\mathbf {G}\), respectively, we find

$$\begin{aligned} (\lambda _i\mathbf {S}^{x} +\mathbf {M}^{x})\mathbf {v}_i=\mathbf {g}_i,\quad {i=1,2,\ldots ,N}. \end{aligned}$$
(3.11)

We observe that for each i, (3.11) is simply the linear system resulting from the Galerkin approximation in \(X_h\) of the Poisson-type equation

$$\begin{aligned} -\lambda _i \Delta v_i+v_i=g_i,\quad v_i|_{\partial \Omega }=0, \end{aligned}$$

which can be efficiently solved by one's favorite method at a cost of C(M) flops, where, for instance, \(C(M)=O(M)\) with a multigrid finite-element method, or \(C(M)=O(M^{(d+1)/d})\) with a spectral method, d being the spatial dimension.

To summarize, we can solve (3.7) with the following procedure:

  1. Step 1

    Compute \(\mathbf {G}=\mathbf {F}(\mathbf {S}^{y})^{-1} \mathbf {E}\) (\(O(N^2M)\) flops since \(\mathbf {S}^{y}\) is tridiagonal );

  2. Step 2

    Solve \(\mathbf {v}_i\) from (3.11) for \(i=1,\ldots , N\) (NC(M) flops);

  3. Step 3

    Set \(\mathbf {U}=\mathbf {V}\mathbf {E}^T\), (\(O(N^2M)\) flops).

In practice we expect \(N\ll M\), so the cost is dominated by the second step, i.e., N (a relatively small number) times the cost of solving one regular Poisson-type equation in \(\Omega \).
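The following minimal Python sketch implements the three steps above, assuming dense matrices and scipy's generalized symmetric eigensolver; its normalization \(\mathbf {E}^T\mathbf {S}^y\mathbf {E}=\mathbf {I}\) means the transformed right-hand side is simply \(\mathbf {F}\mathbf {E}\), which plays the role of \(\mathbf {G}\) in (3.10). In Step 2, the dense solve would in practice be replaced by one's favorite fast Poisson-type solver.

```python
import numpy as np
from scipy.linalg import eigh

def solve_extension(Sx, Mx, Sy, My, F):
    """Solve  Sx U My + Mx U Sy = F  (cf. (3.7)) by diagonalization in y.

    scipy's generalized eigensolver returns eigenvectors normalized so that
    E^T Sy E = I; with this normalization the transformed right-hand side is
    simply F E, which plays the role of G in (3.10)."""
    lam, E = eigh(My, Sy)                        # My E = Sy E diag(lam)
    G = F @ E                                    # Step 1
    V = np.zeros_like(G)
    for i, li in enumerate(lam):                 # Step 2: N decoupled solves
        V[:, i] = np.linalg.solve(li * Sx + Mx, G[:, i])
    return V @ E.T                               # Step 3: U = V E^T
```

With the one-dimensional Jacobi basis of Sect. 3.4, \(\mathbf {S}^{x}\) is diagonal and \(\mathbf {M}^{x}\) is pentadiagonal, so each solve in Step 2 costs only O(M) operations.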

3.3 Error Estimate

To better describe the error, we introduce the weighted Hilbert space

$$\begin{aligned} H^1_{y^\alpha }(\Lambda )=\{v\in L^2_{y^\alpha }(\Lambda ):~\partial _yv\in L^2_{y^\alpha }(\Lambda )\},\quad \alpha >-1. \end{aligned}$$

The result in Lemma 2.2 does not provide a projection error estimate in the \(H^1_{y^\alpha }(\Lambda )\) norm, so it has to be derived separately as follows.

Lemma 3.1

For any \(u\in H^1_{y^\alpha }(\Lambda )\cap {\widehat{B}}^{m}_\alpha (\Lambda )\) and \(\partial _yu\in {\widehat{B}}^{m-1}_\alpha (\Lambda )\), \(m\ge 2\), we have

$$\begin{aligned} \Vert \partial _y(u-\pi _N^y u)\Vert _{y^{\alpha },\Lambda }\le cN^{\frac{2-m}{2}}\Vert {\widehat{\partial }}^m_y u\Vert _{y^{\alpha +m-1},\Lambda } \end{aligned}$$
(3.12)

where \({\widehat{\partial }}_y= \partial _y+1/2\) is the generalized derivative operator appearing in (3.4).

Proof

For any \(u\in L^2_{y^\alpha } (\Lambda )\), we can expand u and its projection as follows

$$\begin{aligned} u=\sum _{n=0}^{\infty } {\tilde{u}}_n \widehat{{\mathscr {L}}}^{(\alpha )}_n,\quad \pi _N^y u=\sum _{n=0}^{N} {\tilde{u}}_n \widehat{{\mathscr {L}}}^{(\alpha )}_n,\quad {\tilde{u}}_n=\big ({\gamma _n^{(\alpha )}}\big )^{-1}(u, \widehat{{\mathscr {L}}}^{(\alpha )}_n)_{y^\alpha ,\Lambda }. \end{aligned}$$
(3.13)

We denote \({\tilde{u}}_k^s=\sum \limits _{n=k+1}^{\infty }{\tilde{u}}_n\). Applying (3.4) and the definition \({\widehat{\partial }}_y=\partial _y+1/2\) to the above expansion leads to

$$\begin{aligned} \begin{aligned}&\partial _y u+\frac{1}{2}u={\widehat{\partial }}_y u=\sum _{n=0}^{\infty } {\tilde{u}}_n {\widehat{\partial }}_y \widehat{{\mathscr {L}}}^{(\alpha )}_n=-\sum _{n=1}^\infty {\tilde{u}}_n \sum _{k=0}^{n-1}\widehat{{\mathscr {L}}}^{(\alpha )}_k=-\sum _{k=0}^\infty {\tilde{u}}_k^s\widehat{{\mathscr {L}}}^{(\alpha )}_k,\\&\pi _N^y \partial _y u+\frac{1}{2}\pi _N^y u=-\sum _{k=0}^N {\tilde{u}}_k^s\widehat{{\mathscr {L}}}^{(\alpha )}_k,\qquad \pi ^{y}_{N-1} {\widehat{\partial }}_y u-{\widehat{\partial }}_yu=\sum _{k=N}^\infty {\tilde{u}}_k^s\widehat{{\mathscr {L}}}^{(\alpha )}_k. \end{aligned} \end{aligned}$$

Similarly, we have

$$\begin{aligned} \begin{aligned} \partial _y \pi _N^y u+\frac{1}{2} \pi _N^y u ={\widehat{\partial }}_y\pi _N^y u=-\sum _{k=0}^{N-1}({\tilde{u}}_k^s-{\tilde{u}}_N^s)\widehat{{\mathscr {L}}}^{(\alpha )}_k.\\ \end{aligned} \end{aligned}$$

Combining the above identities, we derive

$$\begin{aligned} \begin{aligned} \Vert \partial _y \pi _N^y u-\pi _N^y \partial _y u\Vert ^2_{y^\alpha ,\Lambda } =\Vert {\tilde{u}}_N^s\sum _{k=0}^{N}\widehat{{\mathscr {L}}}^{(\alpha )}_k\Vert ^2_{y^\alpha ,\Lambda } =|{\tilde{u}}_N^s|^2 \gamma ^{(\alpha )}_N\sum _{k=0}^{N}\gamma ^{(\alpha )}_k(\gamma ^{(\alpha )}_N)^{-1}. \end{aligned} \end{aligned}$$

For any \({\widehat{\partial }}_y u\in {\widehat{B}}^{m-1}_\alpha (\Lambda ),~\alpha >-1\), we have

$$\begin{aligned} |{\tilde{u}}_N^s|^2 \gamma ^{(\alpha )}_N\le \sum _{k=N}^\infty |{\tilde{u}}_k^s|^2\gamma ^{(\alpha )}_k=\Vert \pi ^y_{N-1} {\widehat{\partial }}_y u-{\widehat{\partial }}_yu\Vert ^2_{y^\alpha ,\Lambda }\le cN^{{1-m}}\Vert {\widehat{\partial }}^m_y u\Vert ^2_{y^{\alpha +m-1},\Lambda } \end{aligned}$$
(3.14)

Moreover, thanks to [15, (3.8)–(3.10)], there exists a positive constant c such that

$$\begin{aligned} \sum _{k=0}^{N}\gamma ^{(\alpha )}_k(\gamma ^{(\alpha )}_N)^{-1}\le c(N+1). \end{aligned}$$
(3.15)

Then, using the triangle inequality

$$\begin{aligned} \Vert \partial _y(\pi _N^y u-u)\Vert _{y^{\alpha },\Lambda }\le \Vert \partial _y\pi _N^y u-\pi _N^y\partial _y u\Vert _{y^{\alpha },\Lambda }+\Vert \pi _N^y\partial _y u-\partial _yu\Vert _{y^{\alpha },\Lambda }, \end{aligned}$$
(3.16)

and combining (3.14)–(3.16) and Lemma 2.2, we arrive at

$$\begin{aligned} \Vert \partial _y(u-\pi _N^y u)\Vert _{y^{\alpha },\Lambda }\le c(N^{\frac{2-m}{2}}\Vert {\widehat{\partial }}^m_y u\Vert _{y^{\alpha +m-1},\Lambda }+N^{\frac{1-m}{2}}\Vert {\widehat{\partial }}^{m-1}_y\partial _y u\Vert _{y^{\alpha +m-1},\Lambda }). \end{aligned}$$

\(\square \)

The following result is now a direct consequence of the above lemma and Lemma 2.2 with \(l=0\).

Theorem 3.1

For any \(u\in H^1_{y^\alpha }(\Lambda )\cap {\widehat{B}}^{m}_\alpha (\Lambda )\) and \(\partial _yu\in {\widehat{B}}^{m-1}_\alpha (\Lambda )\), \(2\le m\le N+1\),

$$\begin{aligned} \Vert \pi _N^y u-u\Vert _{1,y^{\alpha },\Lambda }\le c\,N^{-\frac{m}{2}}\big (\Vert {\widehat{\partial }}^m_y u\Vert _{y^{\alpha +m},\Lambda }+{N}\Vert {\widehat{\partial }}^m_y u\Vert _{y^{\alpha +m-1},\Lambda }\big ). \end{aligned}$$
(3.17)

We are now in a position to derive an error estimate for our Galerkin approximation (3.3).

Theorem 3.2

Let U and \(U^h_N\) be the solutions to the weak problem (2.7) and the numerical problem (3.3), respectively. If \(U(x,\cdot )\in H^1_{y^\alpha }(\Lambda )\cap {\widehat{B}}^{m}_\alpha (\Lambda )\) and \(\partial _yU(x,\cdot )\in {\widehat{B}}^{m-1}_\alpha (\Lambda )\), \(2\le m\le N+1\), then we have

$$\begin{aligned} \begin{aligned} \Vert U-U^h_{N}\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}&\le \, c \Vert \nabla _{x}(\pi ^{x}_h-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}+c\Vert (\pi ^{x}_h-{\mathcal {I}})\,{\widehat{\partial }}^2_yU\Vert _{y^{\alpha +1},{\mathcal {D}}}\\&\quad +\,c\,{N^{-\frac{m}{2}}}\big (\Vert \nabla _x({\widehat{\partial }}^m_y \, U)\Vert _{y^{\alpha +m},{\mathcal {D}}}+{N}\Vert {\widehat{\partial }}^m_y \,U\Vert _{y^{\alpha +m-1},{\mathcal {D}}}\big ). \end{aligned} \end{aligned}$$
(3.18)

Proof

From (2.7) and (3.3), we find

$$\begin{aligned} \big (y^\alpha \nabla (U-U^h_N),\nabla V\big )_{\mathcal {D}}=0\quad \forall ~ V\in X_{h}\times Y_{N}, \end{aligned}$$

which implies that

$$\begin{aligned} \begin{aligned} \Vert U-U^h_N\Vert ^2_{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}&=\big (y^\alpha \nabla (U-U^h_N),\nabla (U-V)\big )_{\mathcal {D}}\\&\le \Vert U-U^h_N\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}~\Vert U-V\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}, \end{aligned} \end{aligned}$$

namely,

$$\begin{aligned} \Vert U-U^h_N\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le \inf _{V\in X_{h}\times Y^{k}_{N}} \Vert U-V\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}. \end{aligned}$$
(3.19)

Next, let \(\pi _N^y\) be the Laguerre orthogonal projection defined in (2.13) and \(\pi ^{x}_h\) be a projection operator satisfying

$$\begin{aligned} \Vert u-\pi ^{x}_h u\Vert _{H^1(\Omega )}\lesssim \inf _{u_h\in X_h} \Vert u-u_h\Vert _{H^1(\Omega )}. \end{aligned}$$
(3.20)

Substituting \(V=\pi _N^y\pi ^{x}_h~U\) in (3.19) results in

$$\begin{aligned} \begin{aligned} \Vert U-V\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le \Vert \nabla (\pi _N^y\pi ^{x}_h \,U-U)\Vert _{y^\alpha ,{\mathcal {D}}}. \end{aligned} \end{aligned}$$

Let \({\mathcal {I}}\) be the identity operator, then

$$\begin{aligned} \begin{aligned} \Vert \nabla (\pi _N^y\circ \pi ^{x}_h \,U-U)\Vert _{y^\alpha ,{\mathcal {D}}}\le \Vert \nabla (\pi ^{x}_h-{\mathcal {I}})\circ \pi _N^y\,U\Vert _{y^\alpha ,{\mathcal {D}}}+ \Vert \nabla (\pi _N^y-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}. \end{aligned} \end{aligned}$$

Thanks to Lemmas 2.2 and 3.1, the first term on the right hand side satisfies

$$\begin{aligned} \begin{aligned} \Vert \nabla (\pi ^{x}_h-{\mathcal {I}})\circ \pi _N^y\,U\Vert _{y^\alpha ,{\mathcal {D}}}&\le \Vert \nabla _{x}(\pi ^{x}_h-{\mathcal {I}})\circ \pi _N^y\,U\Vert _{y^\alpha ,{\mathcal {D}}}+ \Vert \partial _y(\pi ^{x}_h-{\mathcal {I}})\circ \pi _N^y\,U\Vert _{y^\alpha ,{\mathcal {D}}}\\&\le c \Vert \nabla _{x}(\pi ^{x}_h-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}+c\Vert (\pi ^{x}_h-{\mathcal {I}})\,{\widehat{\partial }}^2_yU\Vert _{y^{\alpha +1},{\mathcal {D}}}. \end{aligned} \end{aligned}$$

Furthermore, via (2.14) and (3.12), the second term can be estimated as

$$\begin{aligned}\begin{aligned}&\Vert \nabla (\pi _N^y-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}\le \Vert \nabla _x(\pi _N^y-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}+ \Vert \partial _y(\pi _N^y-{\mathcal {I}})\,U\Vert _{y^\alpha ,{\mathcal {D}}}\\&\quad \le c\,{N^{-\frac{m}{2}}}\big (\Vert \nabla _x({\widehat{\partial }}^m_y \, U)\Vert _{y^{\alpha +m},{\mathcal {D}}}+{N}\Vert {\widehat{\partial }}^m_y \,U\Vert _{y^{\alpha +m-1},{\mathcal {D}}}\big ). \end{aligned} \end{aligned}$$

Finally, we complete the proof by combining the above estimates. \(\square \)

3.4 Numerical Examples

The result (3.18) indicates that the error estimate for the extension problem consists of two parts: the first two terms are the approximation errors of \(X_h\) in the x-direction; the last term is a typical Laguerre-spectral approximation result in the extended direction y, whose convergence rate depends only on the solution's regularity in the specified norms and can be faster than any algebraic rate if the solution is bounded in these norms for every m. Unfortunately, unless \(s=0.5\), solutions of the extension problem do not have enough regularity in the specified norms, as we show below.

Consider, for example, the following one-dimensional fractional Laplacian problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (-\Delta )^s u(x)=f(x), \quad x\in \Omega =(-1,1),\\ u(-1)=u(1)=0. \end{array}\right. } \end{aligned}$$
(3.21)

The associated Caffarelli–Silvestre extension problem reads

$$\begin{aligned} {\left\{ \begin{array}{ll} \nabla \cdot \big (y^{\alpha } \nabla U(x,y)\big )=0, \qquad (x,y)\in (-1,1)\times (0,\infty ),\\ -\lim \limits _{y\rightarrow 0}y^{\alpha } U_y({x},y)= d_s f({x}),\\ U(\pm 1,y)=0,\qquad \qquad \qquad y\in (0,\infty ).\\ \end{array}\right. } \end{aligned}$$
(3.22)

where \(d_s=2^{1-2s}\frac{\Gamma (1-s)}{\Gamma (s)}\) and \(\alpha =1-2s,~0<s<1,~s\ne \frac{1}{2}\). Then, we have the following numerical scheme: Find \(U^h_{N}\in X_h\times Y_{N}\) such that

$$\begin{aligned} \big (y^{1-2s}\nabla U^h_{N},\nabla V\big )_{\mathcal {D}}=\int _{-1}^1 f(x)\, V(x,0) \mathrm{d}x\quad \forall ~ V\in X_h\times Y_{N}. \end{aligned}$$
(3.23)

We select the generalized Jacobi polynomials [10, 13, 14, 26]

$$\begin{aligned} \phi ^{x}_m(x)=P^{(-1,-1)}_{m+1}(x):=-\frac{1}{4}(1-x^2)P^{(1,1)}_{m-1}(x),\quad m=1,2,\ldots \end{aligned}$$

as the basis in the x-direction to derive the corresponding stiffness matrix \(\mathbf {S}^{x}\) and mass matrix \(\mathbf {M}^{x}\). Indeed, via the relations

$$\begin{aligned} \begin{aligned} \big (\phi _m\big )^\prime&=\big (P^{(-1,-1)}_{m+1}\big )^\prime =\frac{m}{2} P^{(0,0)}_{m},\quad P^{(-1,-1)}_{m+1}=\frac{m}{2(2m+1)}\big (P^{(0,0)}_{m+1}-P^{(0,0)}_{m-1}\big ),\quad m\ge 1, \end{aligned} \end{aligned}$$

their entries can be computed exactly:

$$\begin{aligned} s^x_{ml}=\frac{m^2}{2(2m+1)}\delta _{ml} ,\quad m^x_{ml} = m^x_{lm}= {\left\{ \begin{array}{ll} \frac{m^2}{(2m-1)(2m+1)(2m+3)},&{}\quad l=m;\\ \frac{-m(m+2)}{2(2m+1)(2m+3)(2m+5)},&{}\quad l=m+ 2;\\ 0,&{}\quad otherwise. \end{array}\right. } \end{aligned}$$
(3.24)
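A minimal Python sketch assembling these two matrices from (3.24) reads as follows (the helper name jacobi_matrices_1d is ours).

```python
import numpy as np

def jacobi_matrices_1d(M):
    """Assemble the x-direction stiffness and mass matrices (3.24) for the basis
    phi_m(x) = P_{m+1}^{(-1,-1)}(x), m = 1..M, on (-1, 1)."""
    m = np.arange(1, M + 1, dtype=float)
    Sx = np.diag(m ** 2 / (2 * (2 * m + 1)))                         # diagonal
    Mx = np.diag(m ** 2 / ((2 * m - 1) * (2 * m + 1) * (2 * m + 3)))
    if M > 2:                                                        # couples m and m+2
        mm = m[:-2]
        off = -mm * (mm + 2) / (2 * (2 * mm + 1) * (2 * mm + 3) * (2 * mm + 5))
        Mx += np.diag(off, k=2) + np.diag(off, k=-2)
    return Sx, Mx
```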

For the source term \(f(x)=\pi ^{2s} \sin (\pi x)\), the exact solution is \(u(x)=\sin (\pi x)\): since \(\sin (\pi x)\) is a Dirichlet eigenfunction of \(-\Delta \) on \((-1,1)\) with eigenvalue \(\pi ^2\), the definition (2.9) gives \((-\Delta )^s\sin (\pi x)=\pi ^{2s}\sin (\pi x)\). We first test the efficiency of the numerical method for the extension problem (3.22) in the smooth case \(s=0.5\). Figure 1 exhibits the high accuracy and efficiency (exponential decay of the error) of the spectral scheme (3.23), in which M and N are the degrees of freedom in the x- and y-directions, respectively, and the 'error' is the difference between \(U_N^h(x,0)\) and u(x).

Fig. 1  Left: \(N=60\);  Right: \(M=30\)

The approximation result (3.18) shows that the convergence rate depends not only on the regularity in the x-direction; the regularity in the y-direction also determines the efficiency of the spectral method. In fact, we will show in detail in the next section that the regularity of the solution U(x, y) in the y-direction is quite low except in the special case \(s=0.5\). Hence, the convergence is slow even though we choose the smooth function \(u=\sin (\pi x)\) as the solution of the fractional Laplacian problem (3.21). Figure 2 verifies the theoretical result of Theorem 3.2; there we test the numerical method with different values of the parameter s. Except for the case \(s=0.5\), the approximations converge to the solution u inefficiently due to the low regularity of the solutions in the y-direction.

Fig. 2  Left: \(N=60\);  Right: \(M=30\)

4 Galerkin Approximation with Enriched Space in Extended Direction

The Caffarelli–Silvestre extension transforms the non-local d-dimensional fractional Laplacian problem into a local \((d+1)\)-dimensional second-order differential equation, which provides a new strategy for dealing with problems involving the fractional Laplacian. The Galerkin method proposed in the previous section admits a convenient fast algorithm. However, as shown in the numerical examples of the previous section, the low regularity in the y-direction seriously deteriorates its convergence rate. To overcome this, in this section we use the enriched spectral method (see [9]) to enhance the convergence rate.

4.1 Enriched Spectral Method for a Sturm–Liouville Problem

We are concerned with the implementation of the enriched spectral method for the following Sturm–Liouville problem

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}-\psi ^{\prime \prime }(y)-\dfrac{\alpha }{y}\psi ^{\prime }(y)+\lambda ~\psi (y)=0,\quad y\in \Lambda =(0,\infty ),\\ &{}\psi (0)=1,\quad \lim \limits _{y\rightarrow \infty }\psi (y)=0, \end{array}\right. } \end{aligned}$$
(4.1)

where \(\lambda >0\) and \(\alpha =1-2s,~s\in (0,1).\) The solutions \(\psi (y)\) for different parameters \(\lambda >0\) are exactly the y-direction factors arising in the solution of the Caffarelli–Silvestre extension (see (4.18) below). It is therefore essential to understand the above Sturm–Liouville problem in order to design an efficient and accurate numerical method for the \((d+1)\)-dimensional extension problem.

Thanks to [7, Proposition 2.1], the solution of the above problem can be written as

$$\begin{aligned} \psi (y)={\left\{ \begin{array}{ll} e^{-\sqrt{\lambda }y}, &{}\quad s=\dfrac{1}{2};\\ \frac{2^{1-s}}{\Gamma (s)}(\sqrt{\lambda }y)^s K_s(\sqrt{\lambda }y),&{}\quad s\in (0,1)/\{\frac{1}{2}\}, \end{array}\right. } \end{aligned}$$
(4.2)

where the modified Bessel functions are given by

$$\begin{aligned} K_s(z):=\frac{\pi }{2}\frac{I_{-s}(z)-I_{s}(z)}{\sin (s \pi )},\quad I_{s}(z):=\sum _{j=0}^\infty \frac{1}{j!\Gamma (j+1+s)}\big (\frac{z}{2}\big )^{2j+s}. \end{aligned}$$

In detail, for \(s\in (0,1)/\{\frac{1}{2}\},\)

$$\begin{aligned} \begin{aligned} \psi (y)&=\frac{2^{1-s}}{\Gamma (s)}(\sqrt{\lambda }y)^s K_s(\sqrt{\lambda }y)=\frac{\pi }{2^s{\sin (s \pi )\Gamma (s)}}\big \{(\sqrt{\lambda }y)^sI_{-s}(\sqrt{\lambda }y)-(\sqrt{\lambda }y)^sI_{s}(\sqrt{\lambda }y)\big \}\\&= \frac{\pi }{2^s{\sin (s \pi )\Gamma (s)}}\left( \sum _{j=0}^\infty \frac{(\sqrt{\lambda }y)^{2j}}{2^{2j-s}j!\Gamma (j+1-s)}-\sum _{j=0}^\infty \frac{(\sqrt{\lambda }y)^{2j+2s}}{2^{2j+s}j!\Gamma (j+1+s)}\right) . \end{aligned} \end{aligned}$$
(4.3)
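The following minimal Python sketch evaluates \(\psi \) directly from (4.2) via scipy's modified Bessel function \(K_s\), and numerically checks the leading singular coefficient \(s_1\) given in (4.8) of Remark 4.1 below; the function name and the sampling point are our own choices.

```python
import numpy as np
from scipy.special import kv, gamma

def psi(y, lam, s):
    """Solution (4.2) of the Sturm-Liouville problem (4.1):
    psi(y) = 2^{1-s}/Gamma(s) * (sqrt(lam)*y)^s K_s(sqrt(lam)*y), with psi(0) = 1.
    (For s = 1/2 this formula reduces to exp(-sqrt(lam)*y).)"""
    z = np.sqrt(lam) * np.atleast_1d(np.asarray(y, dtype=float))
    out = np.ones_like(z)                       # limit value psi(0) = 1
    pos = z > 0
    out[pos] = 2.0 ** (1 - s) / gamma(s) * z[pos] ** s * kv(s, z[pos])
    return out

# check the leading singular coefficient s_1 of (4.8):
# (psi(y) - 1) / y^{2s} -> s_1 as y -> 0
lam, s = 2.0, 0.31
y0 = 1e-6
s1_num = (psi(y0, lam, s)[0] - 1.0) / y0 ** (2 * s)
s1_exact = -gamma(1 - s) / (4 ** s * gamma(1 + s)) * lam ** s
print(s1_num, s1_exact)    # the two values should agree to several digits
```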

Furthermore, one can verify that

$$\begin{aligned} \lim \limits _{y\rightarrow 0} y^\alpha \psi ^\prime (y)=-d_s\lambda ^s,\quad d_s=2^{1-2s}\frac{\Gamma (1-s)}{\Gamma (s)}. \end{aligned}$$

This yields

$$\begin{aligned} \begin{aligned} \int _{\Lambda }y^\alpha [-\psi ^{\prime \prime }(y)-\dfrac{\alpha }{y}\psi ^{\prime }(y)] v(y)\mathrm{d} y=\int _{\Lambda }-\big (y^\alpha \psi ^{\prime }(y)\big )^\prime v(y)\mathrm{d} y=\big (y^\alpha \psi ^{\prime }, v^\prime \big )-d_s \lambda ^sv(0). \end{aligned} \end{aligned}$$

4.1.1 Laguerre Spectral Method

We are now ready to state the variational formulation of the Sturm–Liouville problem (4.1): find \(\psi \in { H}^1_{y^\alpha }(\Lambda )\) such that

$$\begin{aligned} a(\psi ,v):=\big (y^\alpha \psi ^{\prime }, v^\prime \big )_{\Lambda }+\lambda \big (y^\alpha \psi , v\big )_{\Lambda }=d_s \lambda ^sv(0),\quad v\in {H}^1_{y^\alpha }(\Lambda ). \end{aligned}$$
(4.4)

It is evident that for any \(\lambda >0\),

$$\begin{aligned} a(w,w)\ge \min \{1,\lambda \} \Vert w\Vert ^2_{1,y^\alpha ,\Lambda },\qquad a(w,z)\le \Vert w\Vert _{1,y^\alpha ,\Lambda } \Vert z\Vert _{1,y^\alpha ,\Lambda }. \end{aligned}$$
(4.5)

Moreover, for any \(-1<\alpha <1\),

$$\begin{aligned} \begin{aligned} (v(0))^2&=\left( \int _0^\infty \partial _y(v(y)e^{-y/2})\,\mathrm{d}y\right) ^2=\left( \int _0^\infty \big (\partial _yv-v/2\big )e^{-y/2}\,\mathrm{d}y\right) ^2\\&\le \int _0^\infty y^{-\alpha }e^{-y}\,\mathrm{d}y ~\big (\Vert v\Vert ^2_{y^\alpha ,\Lambda }/4+\Vert \partial _y v\Vert ^2_{y^\alpha ,\Lambda }-\int _0^\infty v\,\partial _yv\, y^{\alpha }\,\mathrm{d}y\big )\\&\le 2\,\Gamma (1-\alpha ) \Vert v\Vert ^2_{1,y^\alpha ,\Lambda }. \end{aligned} \end{aligned}$$

Then, thanks to the Lax–Milgram lemma, the variational problem admits a unique solution.

In accordance with the variational formulation (4.4), the related Laguerre-spectral scheme is to find \(\psi _N\in Y_N\) such that

$$\begin{aligned} a(\psi _N,v)=d_s \lambda ^sv(0)\quad \forall v\in Y_N, \end{aligned}$$
(4.6)

where \(0<s<1\), \(\lambda >0\) and \(d_s=2^{1-2s}{\Gamma (1-s)}/{\Gamma (s)}\) are the same as before. By the ellipticity condition (4.5), the classical Galerkin argument easily yields

$$\begin{aligned} \Vert \psi _N-\psi \Vert _{1,y^\alpha ,\Lambda } \le ({\min \{1,\lambda \}})^{-1} \inf \limits _{v\in Y_N} \Vert v-\psi \Vert _{1,y^\alpha ,\Lambda }. \end{aligned}$$

Then, by taking \(v=\pi ^{y}_N \psi \), it can be derived from Theorem 3.1 that

$$\begin{aligned} \Vert \psi _N-\psi \Vert _{1,y^\alpha ,\Lambda } \le c\,{\max \{1,\lambda ^{-1}\}} \,N^{\frac{1-m}{2}}\big (\Vert {\widehat{\partial }}^m_y \psi \Vert _{y^{\alpha +m},\Lambda }+\sqrt{N}\Vert {\widehat{\partial }}^m_y \psi \Vert _{y^{\alpha +m-1},\Lambda }\big ). \end{aligned}$$
(4.7)

The convergence rate index m depends on the regularity of the solution \(\psi \). It is easy to observe from (4.3) that the regularity of \(\psi \) is quite low near \(y=0\): it behaves as

$$\begin{aligned} \psi (y)\sim 1+s_1y^{2s}g(y)=1+s_1y^{1-\alpha }g(y),\quad g(y)\sim 1~ \text {as} ~y\rightarrow 0, \end{aligned}$$

where g(y) is smooth near the origin \(y=0\).

Remark 4.1

From the expression (4.3), the coefficient \(s_1\) can be computed exactly:

$$\begin{aligned} s_1=\frac{-\pi }{4^s\sin (s\pi )\Gamma (s)\Gamma (1+s)}\lambda ^s=\frac{-\Gamma (1-s)}{4^s\Gamma (1+s)}\lambda ^s, \end{aligned}$$
(4.8)

where the second equality follows from the Euler reflection formula \(\pi /\sin (s\pi )=\Gamma (s)\Gamma (1-s)\).

Remark 4.2

The above singularity analysis near \(y=0\) is indispensable for the subsequent error analysis of the Caffarelli–Silvestre extension in the next subsection.

In fact, for any \(\alpha \in (-1,1)/\{0\}\), in order for \(\Vert {\widehat{\partial }}^m_y \psi \Vert _{y^{\alpha +m},\Lambda }\) and \(\Vert {\widehat{\partial }}^m_y \psi \Vert _{y^{\alpha +m-1},\Lambda }\) to be bounded, the index m must be less than \(2-\alpha \). This leads to a quite low convergence rate and hence low efficiency of the Laguerre spectral method, so we need to design a more efficient numerical method for solving the Sturm–Liouville problem (4.1).

A reasonable remedy is to split off several leading singular terms from the solution \(\psi \) and add them to the original smooth basis space \(Y_N\). Indeed, near the origin \(y=0\), the function \(\psi (y)\) can be expanded as

$$\begin{aligned} \psi (y)=1+y^{-\alpha }\sum _{i=1}^{\infty } {s}_{i}\, y^{i},\quad y\rightarrow 0. \end{aligned}$$

Moreover, the function \(\psi \) is completely monotonic (see [19]) and converges exponentially to zero as \(y\rightarrow \infty \). Hence, the solution \(\psi \) can be split into

$$\begin{aligned} \begin{aligned} \psi (y)&=e^{-{y}/{2}}+s_1[ y^{1-\alpha }e^{-y/2}+a_2y^{2-\alpha }e^{-y/2}+\cdots +a_ky^{k-\alpha }e^{-y/2}+y^{k+1-\alpha }{\tilde{g}}(y)]\\&=e^{-{y}/{2}}+\sum _{i=1}^{k} {s}_{i}\, y^{i-\alpha }e^{-y/2}+s_1y^{k+1-\alpha }{\tilde{g}}(y),\quad y\in \Lambda =(0,\infty ), \end{aligned} \end{aligned}$$
(4.9)

where the constants \(s_i=s_1a_i,~i=2,\ldots ,k\), and \({\tilde{g}}(y)\) is a smooth function defined on \(\Lambda \cup \{0\}\). For simplicity, we denote

$$\begin{aligned} {Y}^{k}_{N}:=Y_N \oplus \text {span}\big \{{\mathcal {S}}_i\big \}_{i=1}^{k},\quad {\mathcal {S}}_i(y):=y^{i-\alpha }e^{-y/2}. \end{aligned}$$
(4.10)

4.1.2 Enriched Laguerre Spectral Method

Our new numerical method reads: find \(\psi ^k_N\in {Y}^{k}_{N}\) such that

$$\begin{aligned} a(\psi ^k_N,v)=d_s \lambda ^sv(0)\quad \forall v\in {Y}^{k}_{N}. \end{aligned}$$
(4.11)

Following the same arguments as for the spectral method, we obtain the estimate

$$\begin{aligned} \Vert \psi ^k_N-\psi \Vert _{1,y^\alpha ,\Lambda } \le {\max \{1,\lambda ^{-1}\}} \inf \limits _{v\in {Y}^{k}_{N}} \Vert v-\psi \Vert _{1,y^\alpha ,\Lambda }. \end{aligned}$$

Since

$$\begin{aligned} \psi (y)={\widetilde{\psi }}(y)+\sum _{i=1}^{k} s_i\, {\mathcal {S}}_i(y),\quad {\widetilde{\psi }}(y)=e^{-{y}/{2}}+y^{k+1-\alpha }{\tilde{g}}(y) \end{aligned}$$

we can take

$$\begin{aligned} v=\pi _N^{y}{\widetilde{\psi }}(y)+\sum _{i=1}^{k} s_i\, {\mathcal {S}}_i(y) \end{aligned}$$

and apply Theorem 3.1 to obtain

$$\begin{aligned} \Vert v-\psi \Vert _{1,y^\alpha ,\Lambda }\le \Vert \pi _N^{y}{\widetilde{\psi }}-{\widetilde{\psi }}\Vert _{1,y^\alpha ,\Lambda } \le c\,N^{-\frac{m}{2}}\big (\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m},\Lambda }+{N}\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m-1},\Lambda }\big ) \end{aligned}$$

in which m is the largest integer such that \(\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m-1}}<\infty \), \({\widehat{\partial }}^m_y=(\partial _y+1/2)^m\).

Hence, the approximation estimate for the enriched spectral method can be summarized in the following theorem.

Theorem 4.1

Let \(\psi ^k_N\) and \(\psi \) be solutions of the enriched spectral scheme (4.11) and the variational form (4.4), respectively. Then it holds

$$\begin{aligned} \Vert \psi ^k_N-\psi \Vert _{1,y^\alpha ,\Lambda } \le c {N^{-\frac{[2-\alpha ]}{2}-k}}\,{\max \{1,\lambda ^{-1}\}}\big (\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m},\Lambda }+{N}\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m-1},\Lambda }\big ), \end{aligned}$$

where \(m=2k+1+[1-\alpha ]\) and \([1-\alpha ]\) is the integer part of \(1-\alpha =2s\).

Remark 4.3

Since \(\psi \) is completely monotonic (see [19]), the function \({\tilde{\psi }}(y)=\psi (y)-\sum _{i=1}^{k} s_i\, {\mathcal {S}}_i(y)\) is smooth in \(\Lambda \) except at the origin \(y=0\). Moreover, we can see from the above analysis that: (i) the profile of \({\tilde{\psi }}\) depends on the parameter \(\sqrt{\lambda }\); (ii) the singularity near \(y=0\) behaves as \(y^{k+1-\alpha }\).

Remark 4.4

In view of the expression

$$\begin{aligned} \psi (y)={\left\{ \begin{array}{ll} e^{-\sqrt{\lambda }y}, &{}\quad s=\dfrac{1}{2};\\ \frac{2^{1-s}}{\Gamma (s)}(\sqrt{\lambda }y)^s K_s(\sqrt{\lambda }y),&{}\quad s\in (0,1)/\{\frac{1}{2}\}, \end{array}\right. } \end{aligned}$$

it can be shown that for \(\lambda \gg 1\),

$$\begin{aligned} \Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m},\Lambda }={\mathcal {O}}({\lambda }^{\frac{m-1-\alpha }{4}}),\qquad \Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}\Vert _{y^{\alpha +m-1},\Lambda }={\mathcal {O}}({\lambda }^{\frac{m-\alpha }{4}}). \end{aligned}$$
(4.12)

In fact, the above relations follow directly from the scaling \(\partial _y^m=\lambda ^{\frac{m}{2}}\partial _z^m\) with \(z=\sqrt{\lambda }y\) and \({\widehat{\partial }}_y=\partial _y+1/2\).

4.1.3 Numerical Implementation

In order to derive a fast algorithm for the enriched spectral method, we choose the basis and test functions as follows

$$\begin{aligned} \phi ^y_{n}(y)=\widehat{{\mathscr {L}}}^{(\alpha )}_{n-2}(y)-\widehat{{\mathscr {L}}}^{(\alpha )}_{n-1}(y),~n\ge 1; \quad {\mathcal {S}}_i(y),~ i=1,2,\ldots k. \end{aligned}$$

The numerical solution of the enriched spectral method can be expanded as

$$\begin{aligned} \psi ^k_N(y)=\sum _{n=1}^N {\tilde{c}}_n\phi ^y_n(y)+\sum _{i=1}^k {\tilde{s}}_i {\mathcal {S}}_i(y). \end{aligned}$$

Then, by taking \(v=\phi ^y_n(y),~{\mathcal {S}}_i(y),~n=1,2,\ldots ,N,~i=1,2,\ldots ,k\) successively, the enriched Laguerre spectral scheme (4.11) is equivalent to the matrix system

$$\begin{aligned} \left( \begin{bmatrix} \mathbf {S}_{A}^e &{}\quad \mathbf {S}_{B}^e\\ \mathbf {S}_{C}^e &{}\quad \mathbf {S}_{D}^e \end{bmatrix} +\lambda \begin{bmatrix} \mathbf {M}_{A}^e &{}\quad \mathbf {M}_{B}^e\\ \mathbf {M}_{C}^e &{}\quad \mathbf {M}_{D}^e \end{bmatrix}\right) \begin{bmatrix} \mathbf {\mathbf {c}}\\ \mathbf {\mathbf {s}} \end{bmatrix}=\begin{bmatrix} \mathbf {\mathbf {f}}\\ \mathbf {\mathbf {h}} \end{bmatrix} \end{aligned}$$
(4.13)

where the coefficient vectors \(\mathbf {\mathbf {c}}=[{\tilde{c}}_{1} ~{\tilde{c}}_{2} ~\ldots ~{\tilde{c}}_{N} ]^T\) and \(\mathbf {\mathbf {s}}=[{\tilde{s}}_{1} ~{\tilde{s}}_{2}~\ldots ~{\tilde{s}}_{k}]^T\) and

$$\begin{aligned} \begin{array}{llll} \mathbf {S}_{A}^e=(s^{A}_{np})_{N\times N},&{}\mathbf {M}_{A}^e=(m^{A}_{np})_{N\times N},&{} s^{A}_{np}=\big (y^{\alpha }\partial _y\phi ^y_p,\,\partial _y\phi ^y_n\big )_\Lambda , &{}m^{A}_{np}=\big (y^{\alpha }\phi ^y_p,\,\phi ^y_n\big )_\Lambda ;\\ \mathbf {S}_{B}^e=(s^{B}_{ni})_{N\times k},&{}\mathbf {M}_{B}^e=(m^{B}_{ni})_{N\times k},&{} s^{B}_{ni}=\big (y^{\alpha }\partial _y{\mathcal {S}}_{i},\partial _y\phi ^y_n\big )_\Lambda ,&{} m^{B}_{ni}=\big (y^{\alpha }{\mathcal {S}}_{i},\phi ^y_n\big )_\Lambda ;\\ \mathbf {S}_{C}^e=(s^{C}_{in})_{k\times N},&{}\mathbf {M}_{C}^e=(m^{C}_{in})_{k\times N},&{} s^{C}_{in}=\big (y^{\alpha }\partial _y\phi ^y_n,\partial _y{\mathcal {S}}_{i}\big )_\Lambda ,&{} m^{C}_{in}=\big (y^{\alpha }\phi ^y_n,{\mathcal {S}}_{i}\big )_\Lambda ;\\ \mathbf {S}_{D}^e=(s^{D}_{ij})_{k\times k},&{}\mathbf {M}_{D}^e=(m^{D}_{ij})_{k\times k},&{} s^{D}_{ij}=\big (y^{\alpha }\partial _y{\mathcal {S}}_{j},\partial _y{\mathcal {S}}_{i}\big )_\Lambda ,&{} m^{D}_{ij}=\big (y^{\alpha }{\mathcal {S}}_{j},{\mathcal {S}}_{i}\big )_\Lambda ;\\ \mathbf {\mathbf {f}}=(f_n)_{N\times 1},&{}\mathbf {\mathbf {h}}=(h_i)_{k\times 1}, &{}f_{n}=d_s\lambda ^s\phi ^y_n(0),&{}h_{i}=d_s\lambda ^s{\mathcal {S}}_{i}(0)=0.\\ \end{array}\nonumber \\ \end{aligned}$$
(4.14)

Note that \(\mathbf {S}_{A}^e\) and \(\mathbf {M}_{A}^e\) are the same as the y-directional matrices \(\mathbf {S}^y\) and \(\mathbf {M}^y\) defined in the previous section.
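The coupling blocks involving the singular functions \({\mathcal {S}}_i\) can be assembled, for instance, by numerical quadrature on \((0,\infty )\), as in the following minimal Python sketch (the helper name enriched_blocks is ours). These entries can alternatively be evaluated in closed form in terms of Gamma functions, and for large N a generalized Gauss–Laguerre rule is preferable to adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def enriched_blocks(alpha, N, k):
    """Assemble S_B, M_B, S_D, M_D of (4.14) by quadrature, with
    phi_n(y) = e^{-y/2} (L_{n-2}^{(alpha)}(y) - L_{n-1}^{(alpha)}(y)), n = 1..N,
    S_i(y)   = y^{i-alpha} e^{-y/2},                                   i = 1..k,
    using the convention L_{-1}^{(alpha)} = 0 and the derivative formula (3.5)."""
    L = lambda n, y: eval_genlaguerre(n, alpha, y) if n >= 0 else 0.0
    phi = lambda n, y: np.exp(-y / 2) * (L(n - 2, y) - L(n - 1, y))
    dphi = lambda n, y: 0.5 * np.exp(-y / 2) * (L(n - 2, y) + L(n - 1, y))   # (3.5)
    S = lambda i, y: y ** (i - alpha) * np.exp(-y / 2)
    dS = lambda i, y: ((i - alpha) * y ** (i - alpha - 1) - 0.5 * y ** (i - alpha)) * np.exp(-y / 2)

    SB, MB = np.zeros((N, k)), np.zeros((N, k))
    SD, MD = np.zeros((k, k)), np.zeros((k, k))
    for i in range(1, k + 1):
        for n in range(1, N + 1):
            SB[n - 1, i - 1] = quad(lambda y: y ** alpha * dS(i, y) * dphi(n, y), 0, np.inf)[0]
            MB[n - 1, i - 1] = quad(lambda y: y ** alpha * S(i, y) * phi(n, y), 0, np.inf)[0]
        for j in range(1, k + 1):
            SD[i - 1, j - 1] = quad(lambda y: y ** alpha * dS(j, y) * dS(i, y), 0, np.inf)[0]
            MD[i - 1, j - 1] = quad(lambda y: y ** alpha * S(j, y) * S(i, y), 0, np.inf)[0]
    return SB, MB, SD, MD            # by symmetry, S_C = S_B^T and M_C = M_B^T
```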

Since the low regularity (as \(y\rightarrow 0^+\)) severely deteriorates the convergence rate of the spectral scheme (4.6), we prefer to use the enriched spectral scheme (4.11) to improve the computational efficiency. However, the matrices

$$\begin{aligned} \mathbf {S}^{e}:=\begin{bmatrix} \mathbf {S}_{A}^e &{}\quad \mathbf {S}_{B}^e\\ \mathbf {S}_{C}^e &{}\quad \mathbf {S}_{D}^e \end{bmatrix},\quad \mathbf {M}^{e}:= \begin{bmatrix} \mathbf {M}_{A}^e &{}\quad \mathbf {M}_{B}^e\\ \mathbf {M}_{C}^e &{}\quad \mathbf {M}_{D}^e \end{bmatrix} \end{aligned}$$
(4.15)

are ill-conditioned because two distinct groups of basis functions, \(\{\phi ^y_n\}_{n=1}^N\) and \(\{{\mathcal {S}}_i\}_{i=1}^k\), are combined in the same numerical scheme. To circumvent this barrier, we apply the Schur complement method in the practical implementation. Indeed, we can rewrite the matrix system (4.13) as

$$\begin{aligned} \begin{bmatrix} \mathbf {A} &{}\quad \mathbf {B}\\ \mathbf {C} &{}\quad \mathbf {D} \end{bmatrix} \begin{bmatrix} \mathbf {\mathbf {c}} \\ \mathbf {\mathbf {s}} \\ \end{bmatrix}= \begin{bmatrix} \mathbf {\mathbf {f}}\\ \mathbf {\mathbf {h}} \end{bmatrix}. \end{aligned}$$
(4.16)

Equivalently,

$$\begin{aligned} \begin{aligned} \mathbf {A}\mathbf {\mathbf {c}}+\mathbf {B}\mathbf {\mathbf {s}}=\mathbf {\mathbf {f}},\quad \mathbf {C}\mathbf {\mathbf {c}}+\mathbf {D}\mathbf {\mathbf {s}}=\mathbf {\mathbf {h}} \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{llll} \mathbf {A}=\mathbf {S}_{A}^e+\lambda \mathbf {M}_{A}^e,&{} \mathbf {B}=\mathbf {S}_{B}^e+\lambda \mathbf {M}_{B}^e,&{} \mathbf {C}=\mathbf {S}_{C}^e+\lambda \mathbf {M}_{C}^e,&{} \mathbf {D}=\mathbf {S}_{D}^e+\lambda \mathbf {M}_{D}^e.\\ \end{array} \end{aligned}$$

Then, the coefficient vectors \(\mathbf {\mathbf {s}}\) and \(\mathbf {\mathbf {c}}\) can be obtained via the Schur complement

$$\begin{aligned} (\mathbf {C}\mathbf {A}^{-1}\mathbf {B}-\mathbf {D})\mathbf {\mathbf {s}}=\mathbf {C}\mathbf {A}^{-1}\mathbf {\mathbf {f}}-\mathbf {\mathbf {h}}, \quad \mathbf {A} \mathbf {\mathbf {c}}=\mathbf {\mathbf {f}}-\mathbf {B}\mathbf {\mathbf {s}}. \end{aligned}$$
(4.17)

Since \(\mathbf {A}=\mathbf {S}_{A}^e+\lambda \mathbf {M}_{A}^e\) is tridiagonal, the matrices \(\mathbf {A}^{-1}\mathbf {B}\), \(\mathbf {A}^{-1}\mathbf {\mathbf {f}}\) and the vector \(\mathbf {\mathbf {c}}\) can be obtained in O(N) operations instead of the \(O(N^{3})\) required by Gaussian elimination.
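A minimal Python sketch of the Schur complement solve (4.17) is given below (the helper name solve_enriched is ours); it uses dense solves for clarity, whereas in practice the tridiagonal structure of \(\mathbf {A}\) is exploited as described above. Combined with the matrices from the earlier sketches and the right-hand side \(f_n=d_s\lambda ^s\phi ^y_n(0)\), \(\mathbf {h}=0\), this reproduces the enriched scheme (4.11).

```python
import numpy as np

def solve_enriched(SA, MA, SB, MB, SD, MD, f, h, lam):
    """Solve the block system (4.13) via the Schur complement (4.17).

    A = SA + lam*MA (N x N, tridiagonal), B = SB + lam*MB (N x k),
    C = B^T (by the symmetry of the blocks in (4.14)), D = SD + lam*MD (k x k)."""
    A = SA + lam * MA
    B = SB + lam * MB
    D = SD + lam * MD
    AinvB = np.linalg.solve(A, B)      # in practice a tridiagonal solve: O(N k)
    Ainvf = np.linalg.solve(A, f)
    s = np.linalg.solve(B.T @ AinvB - D, B.T @ Ainvf - h)   # (C A^{-1} B - D) s = C A^{-1} f - h
    c = Ainvf - AinvB @ s                                   # A c = f - B s
    return c, s
```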

4.1.4 Numerical Results

From the exact solution (4.2), we know that \(\psi (y)=e^{-\sqrt{\lambda }y}\) is a smooth function when \(s=0.5\), i.e. \(\alpha =1-2s=0\), so the spectral scheme (4.6) approximates the solution efficiently, as verified by the numerical experiment in the left panel of Fig. 3. However, the cases \(s=0.31\) and \(s=0.82\) show that the same method does not work well for \(s\in (0,1)/\{0.5\}\) due to the low regularity of the solution \(\psi (y)=\frac{2^{1-s}}{\Gamma (s)}(\sqrt{\lambda }y)^s K_s(\sqrt{\lambda }y)\).

Theorem 4.1 shows that the enriched spectral method (4.11) can enhance the computational efficiency and improve the numerical results. We apply this method to the singular case \(s=0.31\). The results, shown in the right panel of Fig. 3, exhibit that the enriched spectral method does enhance the convergence rate significantly by adding k leading singular terms. However, since the singular terms are not orthogonal, they lead to a very ill-conditioned system as k increases. We observe in Fig. 3 that, if too many singular terms are added, the convergence rate may deteriorate due to the ill-conditioning.

Fig. 3  Left: \(\lambda =2,~s=0.5, 0.31, 0.82\);  Right: \(\lambda =2,~s=0.31\)

Table 1 \(\lambda =10,~s=0.3,~k=3\)

From Remark 4.1, the exact value of the coefficient of the singular term \({\mathcal {S}}_1(y)\) is \(s_1=\frac{-\Gamma (1-s)}{4^s\Gamma (1+s)}\lambda ^s\). The corresponding approximate value \({\tilde{s}}_1\) can be obtained from the Schur complement method (4.17). In Table 1, we fix the parameters \(\lambda =10,~s=0.3\) and \(k=3\), and test the accuracy of the approximate value of the coefficient \(s_1\). In Table 2, we test the accuracy for different values of the parameter \(\lambda \) with fixed \(N=80\). The numerical results show that the accuracy is lower at the two extremes (\(\lambda =1000\) or \(\lambda =10^{-5}\)). This is consistent with Theorem 4.1, whose error bound depends on \({\max \{1,\lambda ^{-1}\}}\) and on norms of \({\widehat{\partial }}^m_y {\widetilde{\psi }}\) that involve the parameter \(\lambda \).

Table 2 \(N=80,~s=0.3,~k=3\)

Theorem 4.1 shows that the error \(\Vert \psi ^k_N-\psi \Vert _{1,y^\alpha ,\Lambda }\) enjoys the convergence rate \({N^{-\frac{[1+2s]}{2}-k}}\). In order to further verify this theoretical result, we plot the error curves together with the theoretical convergence rates in Fig. 4.

Fig. 4  Left: \(\lambda =2,~s=0.31\);  Right: \(\lambda =2,~s=0.82\)

4.2 Enriched Spectral Method for the Caffarelli–Silvestre Extension

The solution of the fractional Laplacian problem (1.1) can be derived from the Caffarelli–Silvestre extension (1.2), i.e. \(u(x)=U(x,0)\). Indeed, thanks to [4, Lemma 2.2] and [7, Proposition 2.1],

$$\begin{aligned} U(x,y)=\sum \limits _{n=1}^\infty {\tilde{u}}_n\phi _n(x)\psi _n(y), \end{aligned}$$
(4.18)

where \(\psi _n(y),~n=1,2,\ldots \), are the solutions of the Sturm–Liouville problem (4.1) with \(\lambda =\lambda _n\), the eigenvalues of (2.8). This implies that the singularity in the y-direction affects the convergence rate, so we need to apply the enriched spectral method in the y-direction.

4.2.1 Enriched Spectral Scheme

The enriched spectral approximation is to find \(U^h_{N,k}\in X_{h}\times Y^{k}_{N}\) such that

$$\begin{aligned} \big (y^\alpha \nabla U^h_{N,k},\nabla V\big )_{\mathcal {D}}=d_s\big (f,\mathbf {tr}\{V\}\big )_\Omega \quad \forall ~ V\in X_{h}\times Y^{k}_{N}. \end{aligned}$$
(4.19)

Obviously, the numerical solution can be expanded as

$$\begin{aligned} U^h_{N,k}(x,y)=\sum _{m=1}^M\sum _{n=1}^N {\tilde{u}}_{mn} \phi ^{x}_m(x) \phi ^y_n(y)+\sum _{m=1}^M\sum _{i=1}^k {\tilde{u}}^s_{mi} \phi ^{x}_m(x) {\mathcal {S}}_i(y), \end{aligned}$$

where \( \phi ^y_n(y)\) and \({\mathcal {S}}_i(y)\) are the same as the basis in the Sturm–Liouville case. By taking

$$\begin{aligned} V(x,y){=}\phi ^{x}_m(x) \phi ^y_n(y),~\text {and}~ \phi ^{x}_m(x) {\mathcal {S}}_{i}(y),~m{=}1,2,\ldots ,M,~n{=}1,2,\ldots ,N,~i{=}1,2,{\ldots },k \end{aligned}$$

successively, we obtain the matrix system

$$\begin{aligned} \left\{ \begin{bmatrix} \mathbf {M}_{A}^e &{}\quad \mathbf {M}_{B}^e\\ \mathbf {M}_{C}^e &{}\quad \mathbf {M}_{D}^e \end{bmatrix}\otimes \mathbf {S}^{x}+\begin{bmatrix} \mathbf {S}_{A}^e &{}\quad \mathbf {S}_{B}^e\\ \mathbf {S}_{C}^e &{}\quad \mathbf {S}_{D}^e \end{bmatrix}\otimes \mathbf {M}^{x}\right\} \begin{bmatrix} \mathrm{\overrightarrow{\mathbf {U}}}\\ {\overrightarrow{\mathbf {U}^s}} \end{bmatrix}=\begin{bmatrix} {\overrightarrow{{\mathbf {F}}}}\\ {\overrightarrow{{\mathbf {F}^s}}} \end{bmatrix} \end{aligned}$$
(4.20)

where \(\mathbf {U}:=({\tilde{u}}_{mn})_{M\times N}\) and \(\mathbf {U}^s:=({\tilde{u}}^s_{mi})_{M\times k}\) are the coefficient matrices and

$$\begin{aligned} \begin{array}{llll} \mathbf {S}^{x}=(s^{x}_{ml})_{M\times M},&{}\mathbf {M}^{x}=(m^x_{ml})_{M\times M},&{} s^{x}_{ml}=\big (\nabla _{x}\phi ^{x}_l,\nabla _{x}\phi ^{x}_m\big )_\Omega , &{} m^{x}_{ml}=\big (\phi ^{x}_l,\phi ^{x}_m\big )_\Omega ;\\ \mathbf {F}=(f_{mn})_{M\times N},&{}\mathbf {F}^s=(f^s_{mi})_{M\times k}, &{}f_{mn}=d_s\phi ^y_n(0)\,\big ( f, \phi ^{x}_m\big )_\Omega ,&{}f^s_{mi}=d_s{\mathcal {S}}_{i}(0)\,\big ( f, \phi ^{x}_m\big )_\Omega .\\ \end{array} \end{aligned}$$

The y-directional matrices

$$\begin{aligned} \mathbf {S}^{e}:=\begin{bmatrix} \mathbf {S}_{A}^e &{}\quad \mathbf {S}_{B}^e\\ \mathbf {S}_{C}^e &{}\quad \mathbf {S}_{D}^e \end{bmatrix},\quad \mathbf {M}^{e}:= \begin{bmatrix} \mathbf {M}_{A}^e &{}\quad \mathbf {M}_{B}^e\\ \mathbf {M}_{C}^e &{}\quad \mathbf {M}_{D}^e \end{bmatrix} \end{aligned}$$

are the same as in (4.15), which arose in the case of the Sturm–Liouville problem. The notation \(\otimes \) represents the Kronecker product, and \({\overrightarrow{{\mathbf {X}}}}\) is the vectorization of the matrix \(\mathbf {X}\), formed by stacking the columns of \(\mathbf{X} \) into a single column vector.

We further denote

$$\begin{aligned} \mathbf {U}^e:=\begin{bmatrix} \mathbf {U}&\quad \mathbf {U}^s \end{bmatrix},\quad \mathbf {F}^e:=\begin{bmatrix} \mathbf {F}&\quad \mathbf {F}^s \end{bmatrix}, \end{aligned}$$

then the matrix system (4.20) can be equivalently written as

$$\begin{aligned} \mathbf {S}^{x} \,\mathbf {U}^e\,\mathbf {M}^e+\mathbf {M}^{x}\,\mathbf {U}^e\,\mathbf {S}^e=\mathbf {F}^e. \end{aligned}$$
(4.21)

Using the fast algorithm provided in Sect. 3.2, the above matrix system can be solved efficiently by the matrix diagonalization method in the y-direction.

4.3 Error Estimate

Combining the variational formulation (2.7) and the numerical scheme (4.19) leads to

$$\begin{aligned} \big (y^\alpha \nabla (U-U^h_{N,k}),\nabla V\big )_{\mathcal {D}}=0,\quad \forall ~ V\in X_{h}\times Y^k_{N}. \end{aligned}$$

Then, via Cauchy–Schwarz inequality, we have that for any \(\Phi \in X_{h}\times Y^k_{N}\)

$$\begin{aligned}\begin{aligned} \Vert U-U^h_{N,k}\Vert ^2_{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}&=\big (y^\alpha \nabla (U-U^h_{N,k}),\nabla (U-\Phi )\big )_{\mathcal {D}}\\&\le \Vert U-U^h_{N,k}\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}~\Vert U-\Phi \Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}, \end{aligned} \end{aligned}$$

namely,

$$\begin{aligned} \Vert U-U^h_{N,k}\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le \inf _{\Phi \in X_{h}\times Y^k_{N}} \Vert U-\Phi \Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}. \end{aligned}$$
(4.22)

As shown in (4.18), the solutions of the Caffarelli–Silvestre extension problem and the related fractional Laplacian problem can be expressed, respectively, as

$$\begin{aligned} U(x,y)=\sum \limits _{n=1}^\infty {\tilde{u}}_n\phi _n(x)\psi _n(y), \quad u(x)=U(x,0)=\sum \limits _{n=1}^\infty {\tilde{u}}_n\phi _n(x), \end{aligned}$$

where the orthonormal basis functions \(\phi _n(x)\) are the eigenfunctions of the classical Laplace eigenvalue problem (2.8) with corresponding eigenvalues \(\lambda _n\), and \(\psi _n(y)\) is the solution of the Sturm–Liouville problem (4.1) with parameter \(\lambda =\lambda _n\).

Owing to the analysis of the Sturm–Liouville problem in the previous section, the function \(\psi _n(y)\) can be split as

$$\begin{aligned} \psi _n(y)=c^1_n {\mathcal {S}}_1(y)+c^2_n{\mathcal {S}}_2(y)+\cdots +c^k_n{\mathcal {S}}_k(y)+{\tilde{\psi }}_n(y), \end{aligned}$$

where the constants \(\{c^i_n\}_{i=1}^k\) are the coefficients of the weakly singular terms \(\{{\mathcal {S}}_i(y)=y^{i-\alpha }e^{-\frac{y}{2}}\}_{i=1}^k\). Hence \({\tilde{\psi }}_n(y)\) is a smooth function in \(\Lambda \) whose regularity behaves as \(y^{k+1-\alpha }\) near \(y=0\). Similar to the analysis in Remark 4.4, it holds that for \(\lambda \gg 1\),

$$\begin{aligned} \Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}_n\Vert _{y^{\alpha +m},\Lambda }={\mathcal {O}}({\lambda }^{\frac{m-1-\alpha }{4}}),\qquad \Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}_n\Vert _{y^{\alpha +m-1},\Lambda }={\mathcal {O}}({\lambda }^{\frac{m-\alpha }{4}}). \end{aligned}$$
(4.23)

Without loss of generality, we denote by \(\pi ^{x}_h\) an efficient approximation operator in the x directions and assume that for \(\sigma =0,1\) and any positive integer r

$$\begin{aligned} \Vert \nabla _x^\sigma (\pi ^{x}_h v-v)\Vert \le h^{r-\sigma }\Vert \nabla ^r v\Vert _{\Omega }\quad \forall v\in {H}^r(\Omega ), \end{aligned}$$
(4.24)

where operator \(\nabla ^{2k} :=\Delta ^k\) and \(\nabla ^{2k+1} :=\nabla \Delta ^k\) for any integer k.

Then, by taking \(\Phi \in X_{h}\times Y^k_{N}\) such that

$$\begin{aligned} U(x,y)-\Phi (x,y)=\sum _{n=1}^\infty ~{\tilde{u}}_n \phi _n(x) {\widetilde{\psi }}_n(y)-\pi ^{x}_h\Big (\sum _{n=1}^\infty ~{\tilde{u}}_n\phi _n(x)\,\pi _N^y{\widetilde{\psi }}_n(y)\Big )\end{aligned}$$
(4.25)

we have the following error estimate.

Theorem 4.2

Let U and \(U^h_{N,k}\) be the solutions to the weak problem (2.7) and the numerical problem (4.19), respectively. If \(f\in {\mathbb {H}}^{l-s}(\Omega ),~l=\max \{\frac{m}{2},r+\frac{1}{2}\}\), then it holds

$$\begin{aligned} \begin{aligned} \Vert U-U^h_{N,k}\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le&c {N^{-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m}{2}-s}(\Omega )}+cN^{1-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m-1}{2}-s}(\Omega ) }}\\&+ch^{r}|f|_{{\mathbb {H}}^{{r+\frac{1}{2}}-s}(\Omega )}+ch^{r-1}|f|_{{\mathbb {H}}^{{r-1}-s}(\Omega )}, \end{aligned} \end{aligned}$$
(4.26)

where \(\alpha =1-2s\) and \(m=2k+1+[2s]\).

Proof

Owing to the inequality (4.22) and the triangle inequality, we have

$$\begin{aligned} \Vert U-U^h_{N,k}\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le \Vert U-\Phi \Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}\le I_1+I_2, \end{aligned}$$
(4.27)

where

$$\begin{aligned}\begin{aligned} I_1&=\big \Vert \sum _{n=1}^\infty ~{\tilde{u}}_n \phi _n( {\widetilde{\psi }}_n-\pi _N^y{\widetilde{\psi }}_n)\big \Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})},\\ I_2&=\big \Vert \sum _{n=1}^\infty ~{\tilde{u}}_n \phi _n\pi _N^y{\widetilde{\psi }}_n-\pi ^{x}_h\Big (\sum _{n=1}^\infty ~{\tilde{u}}_n\phi _n\,\pi _N^y{\widetilde{\psi }}_n\Big )\big \Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}. \end{aligned}\end{aligned}$$

First of all, thanks to Lemmas 2.2 and 3.1 and Eq. (4.25), we can derive that

$$\begin{aligned}\begin{aligned} ( I_1)^2&=\Vert \sum _{n=1}^\infty ~{\tilde{u}}_n \phi _n( {\widetilde{\psi }}_n-\pi _N^y{\widetilde{\psi }}_n)\Vert ^2_{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})} =\Vert \nabla \big (\sum _{n=1}^\infty ~{\tilde{u}}_n \phi _n( {\widetilde{\psi }}_n-\pi _N^y{\widetilde{\psi }}_n)\big )\Vert ^2_{{y^\alpha },{\mathcal {D}}}\\&=\sum _{n=1}^\infty ~|{\tilde{u}}_n|^2 ~\big (\Vert \nabla _{{x}} \phi _n\Vert ^2_\Omega \Vert {\widetilde{\psi }}_n-\pi _N^y{\widetilde{\psi }}_n\Vert ^2_{y^\alpha ,\Lambda }+\Vert \phi _n\Vert ^2_{\Omega }\Vert \partial _y({\widetilde{\psi }}_n-\pi _N^y{\widetilde{\psi }}_n)\Vert ^2_{y^\alpha ,\Lambda }\big )\\&\le c \sum _{n=1}^\infty ~|{\tilde{u}}_n|^2 (\lambda _n N^{-m}\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}_n\Vert ^2_{y^{\alpha +m},\Lambda }+ N^{2-m}\Vert {\widehat{\partial }}^m_y {\widetilde{\psi }}_n\Vert ^2_{y^{\alpha +m-1},\Lambda })\\&\le c \sum _{n=1}^\infty ~|{\tilde{u}}_n|^2 \big ( (\lambda _n)^{\frac{m+1-\alpha }{2}} N^{-m}+ (\lambda _n)^{\frac{m-\alpha }{2}} N^{2-m}\big ). \end{aligned} \end{aligned}$$

Then, via the relation \({\tilde{f}}_n={\tilde{u}}_n(\lambda _n)^{s}={\tilde{u}}_n(\lambda _n)^{\frac{1-\alpha }{2}},\) it holds

$$\begin{aligned} I_1\le c {N^{-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m}{2}-s}(\Omega )}+N^{1-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m-1}{2}-s}(\Omega ) }} \end{aligned}$$
(4.28)

Next, for notational simplicity, we denote \({\tilde{U}}_N:=\sum _{n=1}^\infty ~{\tilde{u}}_n\phi _n(x)\,\pi _N^y{\widetilde{\psi }}_n(y)\), then

$$\begin{aligned}\begin{aligned} I_2&=\Vert \pi _h^x{\tilde{U}}_N-{\tilde{U}}_N\Vert _{{{\mathcal {H}}}^{1,b}_{y^\alpha }({\mathcal {D}})}=\Vert \nabla (\pi _h^x{\tilde{U}}_N-{\tilde{U}}_N)\Vert _{y^\alpha ,{\mathcal {D}}}\\&\le \Vert \nabla _x(\pi _h^x{\tilde{U}}_N-{\tilde{U}}_N)\Vert _{y^\alpha ,{\mathcal {D}}}+\Vert \partial _y(\pi _h^x{\tilde{U}}_N-{\tilde{U}}_N)\Vert _{y^\alpha ,{\mathcal {D}}}\\&\le h^{r-1}~\Vert \nabla _x^r{\tilde{U}}_N\Vert _{y^\alpha ,{\mathcal {D}}}+h^{r}\Vert \nabla _x^r\partial _y{\tilde{U}}_N\Vert _{y^\alpha ,{\mathcal {D}}}. \end{aligned} \end{aligned}$$

The last inequality above holds because \(f\in {\mathbb {H}}^{r+\alpha /2}(\Omega )\) implies the boundedness of the norms \(\Vert \nabla _x^r{\tilde{U}}_N\Vert ^2_{y^\alpha ,{\mathcal {D}}}\) and \(\Vert \nabla _x^r\partial _y{\tilde{U}}_N\Vert ^2_{y^\alpha ,{\mathcal {D}}}\). In fact, owing to Lemmas 2.2, 3.1 and the relation (4.23), we have

$$\begin{aligned} \Vert \pi _N^y{\widetilde{\psi }}_n\Vert _{y^{\alpha },\Lambda }\le \Vert {\widetilde{\psi }}_n\Vert _{y^{\alpha },\Lambda }={\mathcal {O}}(\lambda _n^{\frac{-1-\alpha }{4}}) \end{aligned}$$

and

$$\begin{aligned} \Vert \partial _y \pi _N^y{\widetilde{\psi }}_n\Vert _{y^{\alpha },\Lambda }\le \Vert {\widehat{\partial }}_y {\widetilde{\psi }}_n\Vert _{y^{\alpha },\Lambda }+\Vert {\widehat{\partial }}^2_y {\widetilde{\psi }}_n\Vert _{y^{\alpha +1},\Lambda }={\mathcal {O}}(\lambda _n^{\frac{2-\alpha }{4}}). \end{aligned}$$

In addition,

$$\begin{aligned} \Vert \nabla _x^r \phi _n\Vert ^2_\Omega =(\nabla _x^r \phi _n,\nabla _x^r \phi _n)_\Omega =(\lambda _n)^r. \end{aligned}$$

Then, via the above relations, it can be shown that

$$\begin{aligned} (I_2)^2\le c h^{2r-2}~\sum _{n=1}^\infty |{\tilde{u}}_n|^2(\lambda _n)^{r+\frac{-1-\alpha }{2}}+c h^{2r}~\sum _{n=1}^\infty |{\tilde{u}}_n|^2(\lambda _n)^{r+\frac{2-\alpha }{2}}. \end{aligned}$$

Using the relation \({\tilde{f}}_n={\tilde{u}}_n(\lambda _n)^{s}={\tilde{u}}_n(\lambda _n)^{\frac{1-\alpha }{2}}\) again, we have

$$\begin{aligned} I_2\le ch^{r-1}|f|_{{\mathbb {H}}^{{r-1}-s}(\Omega )}+ch^{r}|f|_{{\mathbb {H}}^{{r+\frac{1}{2}}-s}(\Omega )}. \end{aligned}$$
(4.29)

Finally, we end the proof by combining (4.27)–(4.29). \(\square \)

Thanks to Lemma 2.1, we have the following error estimate for the original fractional Laplacian problem.

Theorem 4.3

Let u and \(U^h_{N,k}\) be the solutions to the fractional Laplacian problem (1.1) and the numerical problem (4.19), respectively. If

$$\begin{aligned} f\in {\mathbb {H}}^{l-s}(\Omega ),\quad l=\max \{k+({1+[2s]})/{2},r+{1}/{2}\} \end{aligned}$$

then it holds

$$\begin{aligned}\begin{aligned} \Vert u-U^h_{N,k}(\cdot ,0)\Vert _{{\mathbb {H}}^s(\Omega )}&\le c {N^{-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m}{2}-s}(\Omega )}+cN^{1-\frac{m}{2}}|f|_{{\mathbb {H}}^{\frac{m-1}{2}-s}(\Omega ) }}\\&\quad +\,ch^{r}|f|_{{\mathbb {H}}^{{r+\frac{1}{2}}-s}(\Omega )}+ch^{r-1}|f|_{{\mathbb {H}}^{{r-1}-s}(\Omega )}. \end{aligned} \end{aligned}$$

Remark 4.5

One can select any preferred method to deal with the x directions; the operator \(\pi ^{x}_h\) is then determined by the chosen numerical method. What we are concerned with in the above result is removing the effect of the y-directional singularity caused by the Caffarelli–Silvestre extension.

4.4 Numerical Examples

By following the procedure (4.19)–(4.20), we can solve the Caffarelli–Silvestre extension by the enriched spectral method. To complete the implementation, it remains to select the basis defined on \(\Omega \) to obtain the corresponding stiffness matrix \(\mathbf {S}^{x}\) and mass matrix \(\mathbf {M}^{x}\). For the sake of simplicity, we will only consider the rectangular domain \(\Omega =(-1,1)^d\), although one can easily deal with general domains with finite elements.

For each \(i=1,\cdots ,d\), let \(\{\phi _{m}(x_i)\}_{m=1}^{M_i}\) be a set of basis functions in the direction \(x_i\) satisfying \(\phi _{m}(\pm 1)=0\), and set

$$\begin{aligned} X ^{x_i}_{h_i}:=span\{\phi _{m}(x_i):~m=1,2,\ldots ,M_i\},\quad 1\le i\le d. \end{aligned}$$

Then we define our d-dimensional approximation space by

$$\begin{aligned} X_h=X ^{x_1}_{h_1}\times X ^{x_2}_{h_2}\times \cdots \times X ^{x_d}_{h_d}. \end{aligned}$$

Let \(\mathbf {M}_i^{x}\) and \(\mathbf {S}_i^{x}\) be, respectively, the one-dimensional mass and stiffness matrices; then the d-dimensional mass and stiffness matrices can be written in the following Kronecker product form

$$\begin{aligned} \mathbf {M}^{x}={\mathbf {M}_1^{x}}\otimes {\mathbf {M}_2^{x}}\otimes \cdots \otimes {\mathbf {M}_d^{x}},\quad \mathbf {S}^{x}=\sum \limits _{i=1}^d{\mathbf {M}_1^{x}}\otimes \cdots \otimes {\mathbf {M}_{i-1}^{x}}\otimes {\mathbf {S}_i^{x}}\otimes {\mathbf {M}_{i+1}^{x}}\otimes \cdots \otimes {\mathbf {M}_d^{x}}. \end{aligned}$$
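
For instance, a direct (dense) assembly of these Kronecker products could look as follows; this is only a sketch under the assumption that dense matrices are acceptable, and the function name `assemble_xd` is illustrative. In practice one would keep the one-dimensional factors in sparse or tensorized form.

```python
import numpy as np
from functools import reduce

def assemble_xd(M1d, S1d):
    """Illustrative sketch: build the d-dimensional mass/stiffness matrices
    from lists of one-dimensional mass (M1d) and stiffness (S1d) matrices."""
    d = len(M1d)
    Mx = reduce(np.kron, M1d)                     # M_1 (x) M_2 (x) ... (x) M_d
    Sx = np.zeros_like(Mx)
    for i in range(d):                            # sum over the d directions
        factors = [S1d[j] if j == i else M1d[j] for j in range(d)]
        Sx += reduce(np.kron, factors)
    return Mx, Sx
```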

Then our enriched Jacobi–Laguerre spectral scheme is to find \(U^h_{N,k}\in X_h\times Y^{k}_{N}\) such that

$$\begin{aligned} \big (y^{1-2s}\nabla U^h_{N,k},\nabla V\big )_{\mathcal {D}}=d_s\int _{\Omega } f(x)\, V(x,0)\, \mathrm{d}x,\quad \forall ~ V\in X_h\times Y^{k}_{N}, \end{aligned}$$
(4.30)

where \(Y^{k}_{N}\) is the same as in (4.10). We recall that the above enriched spectral method can be reduced to a sequence of Poisson-type problems (3.11), which can then be solved efficiently by using the matrix diagonalization approach.

We shall consider two specific choices for \(\phi _{m}(x_i)\) below.

  • Piecewise linear finite elements We choose \(\phi _{m}(x_i)\) to be the usual hat functions; see the sketch after this list.

  • Spectral methods We take \(\phi _{m}(x_i)=P_{m+1}^{-1,-1}(x_i)\), where \(\{P_j^{-1,-1}(x)\}\) are the generalized Jacobi functions of index \((-1,-1)\).
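
For the piecewise linear choice, the one-dimensional matrices on a uniform mesh take the familiar tridiagonal form. The sketch below assumes a uniform mesh with \(M\) interior nodes and homogeneous Dirichlet conditions; the function name `p1_matrices` is illustrative.

```python
import numpy as np

def p1_matrices(M, a=-1.0, b=1.0):
    """Illustrative sketch: 1-D stiffness and mass matrices for hat functions
    on a uniform mesh of (a, b) with M interior nodes (Dirichlet BCs)."""
    h = (b - a) / (M + 1)
    e = np.ones(M - 1)
    S = (2.0 * np.eye(M) - np.diag(e, 1) - np.diag(e, -1)) / h           # stiffness
    Mm = h / 6.0 * (4.0 * np.eye(M) + np.diag(e, 1) + np.diag(e, -1))    # mass
    return S, Mm
```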

Since the exact solution u is usually not available for general f, a reasonable approach is to compute a reference solution \(u_K\) with \(K\gg 1\) via the definition of the fractional Laplacian (2.9). In fact, the eigenvalues and eigenfunctions of the problem (2.8) in our case are

$$\begin{aligned} \lambda _n=\frac{(n\pi )^2}{4},\quad \varphi _n(x)=\sin \big (\frac{n\pi }{2} (x+1)\big ),\quad n=1,2,\ldots , \end{aligned}$$

and hence the reference solution is \(u_K=\sum _{n=1}^K \lambda _n^{-s} {\tilde{f}}_n \varphi _n(x)\), where \(\{{\tilde{f}}_n\}_{n=1}^\infty \) are the coefficients such that \(f(x)=\sum _{n=1}^\infty {\tilde{f}}_n \varphi _n(x).\)
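
As an illustration, for the one-dimensional case \(\Omega =(-1,1)\) the reference solution can be computed as in the following sketch; the quadrature routine, the truncation parameter K and the function name `reference_solution` are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.integrate import quad

def reference_solution(f, s, K):
    """Illustrative sketch: u_K(x) = sum_{n<=K} lam_n^(-s) f_n phi_n(x) on (-1,1),
    with lam_n = (n*pi)^2/4 and phi_n(x) = sin(n*pi*(x+1)/2) (L^2-orthonormal)."""
    coef = []
    for n in range(1, K + 1):
        # f_n = (f, phi_n) on (-1,1), computed by adaptive quadrature
        fn, _ = quad(lambda x, n=n: f(x) * np.sin(n * np.pi * (x + 1) / 2),
                     -1.0, 1.0, limit=200)
        coef.append(((n * np.pi) ** 2 / 4) ** (-s) * fn)

    def u_K(x):
        x = np.asarray(x, dtype=float)
        return sum(c * np.sin((n + 1) * np.pi * (x + 1) / 2)
                   for n, c in enumerate(coef))

    return u_K
```

For the first example below one could call, e.g., `reference_solution(lambda x: (1 - x**2)**r, s, K)` with a large K.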

As the first example, we take \(f=(1-x^2)^r\) and compute a reference solution as described above. We use the piecewise linear finite element method in the x-direction, and plot the numerical convergence rates with \(r=0, 1\) and \(s=0.7\) in Fig. 5. We observe that the usual spectral method (in the y-direction) barely converges, but the enriched spectral method improves the convergence rate significantly.

Fig. 5  Left: \(s=0.7, \, r=0\);    Right: \(s=0.7,\,r=1\)

Next, we consider a two-dimensional example with

$$\begin{aligned} f(x)=(2\pi ^{2})^s\sin (\pi x_1)\sin (\pi x_2),\quad x=(x_1,x_2)\in (-1,1)^2. \end{aligned}$$

Thanks to the definition of the fractional Laplacian (2.9), and since \(\sin (\pi x_1)\sin (\pi x_2)\) is an eigenfunction of \(-\Delta \) with eigenvalue \(2\pi ^{2}\) vanishing on \(\partial \Omega \), we can easily find the exact solution to be \(u(x_1,x_2)=\sin (\pi x_1)\sin (\pi x_2).\) This example was also considered in [22].

We first use the finite element method in the spatial directions and plot the convergence rates in Fig. 6. We observe that even though the exact solution is smooth, the usual spectral method in the extended direction still barely converges due to the singularity of the extension problem. However, the enriched spectral method improves the convergence rate significantly.

Note also that for both examples, the convergence rates with respect to \(h=1/M\) are essentially second order, which is the expected rate for the \(L^2\)-error of piecewise linear finite elements, until the errors are dominated by those in the extended direction.

Fig. 6  Left: \(s=0.2\);    Right: \(s=0.8\)

Then, we use the spectral method in the spatial directions, and plot the results in Fig. 7. On the left of Fig. 7, we fix the number of modes in each spatial direction at \(M=20\) which is enough since the exact solution is smooth, and plot the convergence rates with respect to N for various s with \(k=2\). As expected, the convergence rate is algebraic due to the singularity at \(y=0\). On the right of Fig. 7, we fix \(N=80\) and \(k=2\) so that for the range of M considered, the dominating error is in the spatial direction. We observe that the error converges exponentially with respect to the number of modes in each spatial direction.

Fig. 7  Left: \(M=M_1=M_2=20,~k=2\);    Right: \(N=80,~k=2\)

We also observe in these examples that only very few degrees of freedom, \(N+k\), in the extended direction y are needed to achieve high accuracy in that direction. Hence, the enriched spectral method combined with the matrix diagonalization approach allows us to solve the fractional Laplacian problem by solving only a relatively small number of Poisson-type equations in the original domain.

5 Concluding Remarks

We proposed in this paper an efficient numerical method for the spectral fractional Laplacian problem through the Caffarelli–Silvestre extension. The method combines a generalized Laguerre approximation with an enriched spectral method in the extended direction to deal with the singularity at \(y=0\), and uses a diagonalization approach to decouple the \((d+1)\)-dimensional extension problem into a sequence of Poisson-type equations in the original domain, which can be solved with one's favorite method. The new method enjoys high-order accuracy, efficiency and flexibility. We also derived error estimates which show that arbitrarily high-order accuracy in the extended direction can be achieved with the enriched spectral method. Although the enrichment leads to a very ill-conditioned system, which effectively limits the number of enriched terms that can be added, our numerical results indicate that only a few (usually fewer than three) singular functions already improve the accuracy dramatically to a satisfactory level.