1 Introduction

Spectral methods employ global basis functions to approximate PDEs and have become increasingly popular in scientific computing and engineering applications, see, e.g., [2, 4, 5, 7, 10, 16, 19, 21, 23]. As long as the solutions are analytic, spectral methods achieve exponential convergence.

However, classical spectral methods are directly applicable only to regular domains, such as rectangles or discs in the 2D case, which greatly limits their applications. To extend spectral methods to more complex geometries, various domain decomposition spectral methods and spectral element methods have been developed, see, e.g., [12, 13, 17, 20, 22]. The basic idea is to partition the irregular region into smaller regular subdomains and to build a global spectral approximation from local basis functions associated with the subdomains, interfaces and nodes. Although these methods can effectively handle many problems on irregular regions and attain high-order accuracy, they considerably increase the complexity of programming and the computational cost.

It is still a great challenge to construct direct spectral methods for solving problems in complex geometries. So far, there are essentially two direct spectral approaches: (i) embed the complex geometry into a larger regular domain, which belongs to the class of fictitious domain methods [3]; and (ii) map the complex geometry onto a regular domain through an explicit and smooth mapping [15] or through the Gordon–Hall mapping [6].

There have been some attempts to use the fictitious domain approach to solve PDEs in complex geometries. Lui [14] proposed a spectral method with domain embedding to solve PDEs in complex geometries; the main idea is to embed the irregular domain into a regular one so that classical spectral methods can be applied. Gu and Shen [8, 9] developed efficient and well-posed spectral methods for elliptic PDEs in complex geometries using the fictitious domain approach, and provided a rigorous error analysis for the Poisson equation.

As for the mapping methods, Orszag [15] proposed a Fourier–Chebyshev spectral approach for solving the heat equation in an annular region by using an explicit mapping. Heinrichs [11] tested the diameter Fourier–Chebyshev spectral collocation approach for problems in complex geometries, focusing on the clustering of collocation nodes and the improvement of the condition number. It should be pointed out that the authors in [11, 15] did not describe the algorithms in detail, nor did they prove the existence and uniqueness of the weak solutions or the convergence of the numerical solutions.

The aim of this paper is to develop a Fourier–Legendre spectral method for elliptic PDEs in two-dimensional complex geometries using the mapping approach. To do this, we first introduce a polar coordinate transformation that maps the complex geometry onto the unit disc, and we present some basic properties of this transformation. As an application, we focus on the elliptic equation in two-dimensional complex geometries and construct a proper variational formulation, for which the existence and uniqueness of the weak solution are proved. We then propose a Fourier–Legendre spectral-Galerkin method with proper test and trial spaces and establish the optimal convergence of the numerical solutions in the \(H^1\)-norm. The proposed method is effective and easy to implement for problems in complex geometries, whether or not the boundary curves admit explicit representations. Ample numerical results demonstrate the high accuracy of our spectral-Galerkin method.

The remainder of this paper is organized as follows. In Sect. 2, we recall some basic results on the shifted Legendre polynomials. In Sect. 3, we first introduce the polar coordinate transformation and present its basic properties. We then consider the elliptic equation in two-dimensional complex geometries, construct a proper variational formulation, and prove the existence and uniqueness of the weak solution. We also propose a Fourier–Legendre spectral-Galerkin method and describe its numerical implementation. In Sect. 4, we establish the optimal convergence of the numerical solutions in the \(H^1\)-norm. Section 5 presents numerical results for elliptic equations defined in squircle-bounded domains, butterfly-bounded domains, cardioid-bounded domains, star-shaped domains and general domains, respectively. The final section contains some concluding remarks.

2 Preliminaries

For any integer \(r\ge 0,\) we define the weighted Sobolev space \(H^{r}_{\chi }(I)\) in the usual way, with the inner product \((u,v)_{r,\chi ,I},\) the semi-norm \(|v|_{r,\chi ,I}\) and the norm \(\Vert v\Vert _{r,\chi ,I},\) where I is a certain interval. Whenever no confusion arises, \(\chi \) (if \(\chi \equiv 1\)), r (if \(r=0\)) and I are dropped from the notation. For simplicity, we denote \(v^{(k)}=\partial _{x}^k v=\frac{d^k v}{dx^k}.\) Moreover, let N be any positive integer, and let \(\mathcal {P}_N(I)\) stand for the set of all algebraic polynomials of degree at most N.

We next introduce the shifted Legendre polynomials. For \(\xi \in (-1,1),\) let \(P_n(\xi )\) be the Legendre polynomial of degree n,  which satisfies the three-term recurrence relation (cf. [19]):

$$\begin{aligned} (n+1)P_{n+1}(\xi )=(2n+1)\xi P_n(\xi )-nP_{n-1}(\xi ),\quad n\ge 1, \end{aligned}$$
(2.1)

with \(P_0(\xi )=1\) and \(P_1(\xi )=\xi .\)

By the coordinate transformation \(\xi =2r-1,\) we consider the shifted Legendre polynomial \(L_{n}(r):=P_{n}(2r-1)\) of degree n,  which satisfies the three-term recurrence relation:

$$\begin{aligned} (n+1)L_{n+1}(r)=(2n+1)(2r-1) L_n(r) -nL_{n-1}(r),\quad n\ge 1,\quad r\in I:=(0,1), \end{aligned}$$
(2.2)

with \(L_0(r)=1\) and \(L_1(r)=2r-1.\) Moreover, the shifted Legendre polynomials \(L_{n}(r)\) also satisfy the following relations:

$$\begin{aligned} \begin{array}{c} 2(2n+1)L_n(r)\,=\,L_{n+1}^\prime (r)-L_{n-1}^\prime (r),\quad n\ge 1,\\ L_n(0)\,=\,(-1)^n,\quad L_n(1)=1,\\ L_{n}^{\prime }(r)\,=\,\displaystyle \sum _{\begin{array}{c} k=0 \\ k+n \text{ odd } \end{array}}^{n-1}2(2 k+1) L_{k}(r). \end{array} \end{aligned}$$
(2.3)

The shifted Legendre polynomials \(L_{n}(r)\) possess the following orthogonality:

$$\begin{aligned} \displaystyle \int _{0}^{1}L_n(r)L_m(r)\textrm{d}r=\dfrac{1}{2n+1}\delta _{mn},\quad m,~n \ge 0, \end{aligned}$$
(2.4)

where \(\delta _{mn}\) is the Kronecker symbol.
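For concreteness, the recurrence (2.2), the endpoint values in (2.3) and the orthogonality (2.4) can be checked numerically as follows (a minimal sketch in Python with NumPy; the helper name `shifted_legendre` is ours and is reused in later sketches):

```python
import numpy as np

def shifted_legendre(n_max, r):
    """Evaluate L_0(r), ..., L_{n_max}(r) on (0, 1) via the recurrence (2.2)."""
    r = np.asarray(r, dtype=float)
    L = np.zeros((n_max + 1,) + r.shape)
    L[0] = 1.0
    if n_max >= 1:
        L[1] = 2.0 * r - 1.0
    for n in range(1, n_max):
        L[n + 1] = ((2 * n + 1) * (2 * r - 1) * L[n] - n * L[n - 1]) / (n + 1)
    return L

# Orthogonality (2.4), checked with Gauss-Legendre quadrature mapped to (0, 1).
nodes, weights = np.polynomial.legendre.leggauss(40)
r, w = 0.5 * (nodes + 1.0), 0.5 * weights
L = shifted_legendre(10, r)
gram = (L * w) @ L.T                      # gram[m, n] ~ int_0^1 L_m L_n dr
assert np.allclose(gram, np.diag(1.0 / (2.0 * np.arange(11) + 1.0)))

# Endpoint values from (2.3): L_n(0) = (-1)^n and L_n(1) = 1.
n = np.arange(11)
assert np.allclose(shifted_legendre(10, 0.0), (-1.0) ** n)
assert np.allclose(shifted_legendre(10, 1.0), np.ones(11))
```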

3 Fourier–Legendre Spectral-Galerkin Method in Complex Geometries

In this section, we shall propose a Fourier–Legendre spectral-Galerkin method for problems in two-dimensional complex geometries.

3.1 A Polar Coordinate Transformation

In this subsection, we consider a polar coordinate transformation, which will be used to deal with PDEs in two-dimensional complex geometries.

Fig. 1. The geometric meaning of the function \(R(\theta )\)

For a simply-connected domain \(\Omega \) of complex geometry bounded by a simple closed curve \(\Gamma ,\) we define the polar coordinate transformation in the following form:

$$\begin{aligned} \left\{ \begin{aligned}&x=rR(\theta )\cos \theta ,\quad (r,\theta ) \in \Theta :=(0,1)\times (0,2\pi ),\\&y=rR(\theta )\sin \theta ,\quad (r,\theta ) \in \Theta , \end{aligned} \right. \end{aligned}$$
(3.1)

where \(R(\theta )>0\) depends only on the angle \(\theta \) and represents the distance from the origin to the boundary curve \(\Gamma \) (see Fig. 1). The inverse of the coordinate transformation (3.1) thus converts the domain \(\Omega \) of complex geometry into the unit disc, parameterized by \((r,\theta )\in \Theta \). In particular, the case \(R(\theta )\equiv 1\) reduces to the classical polar coordinate transformation. Clearly, by the definition of \(R(\theta ),\) we have

$$\begin{aligned} R(0)=R(2\pi )\quad \text {and} \quad R(\theta ) \in C^0[0,2\pi ]. \end{aligned}$$
(3.2)

Moreover, we always assume

$$\begin{aligned} \partial _{\theta }R(0)=\partial _{\theta }R(2\pi ) \quad \text {and} \quad \partial ^k_{\theta }R(\theta ) \in C^0[0,2\pi ],\quad k=1,2. \end{aligned}$$
(3.3)

Next, regarding r and \(\theta \) as functions of x and y and differentiating (3.1) implicitly with respect to x and y, respectively, we obtain

$$\begin{aligned} \begin{array}{lll} &{}\partial _{x} \theta = -\dfrac{y}{r^2R^2(\theta )}, \quad &{} \partial _{x} r = \dfrac{x}{rR^2(\theta )} + \dfrac{y}{rR^3(\theta )}\partial _\theta R(\theta ), \\ {} &{}\partial _{y}\theta = \dfrac{x}{r^2R^2(\theta )}, \quad &{} \partial _{y} r = \dfrac{y}{rR^2(\theta )} - \dfrac{x}{rR^3(\theta )}\partial _\theta R(\theta ). \end{array} \end{aligned}$$
(3.4)

Accordingly, we have

$$\begin{aligned} \begin{array}{lll} \partial _{x}U(x,y)&{}=\left( \dfrac{x}{rR^2(\theta )}+\dfrac{y}{rR^3(\theta )}\partial _\theta R(\theta )\right) \partial _{r}u(r,\theta )-\dfrac{y}{r^2R^2(\theta )}\partial _{\theta }u(r,\theta )\\ &{}=\left( \dfrac{\cos \theta }{R(\theta )}+\dfrac{\sin \theta }{R^2(\theta )}\partial _\theta R(\theta )\right) \partial _{r}u(r,\theta ) -\dfrac{\sin \theta }{rR(\theta )}\partial _{\theta }u(r,\theta ),\\ \partial _{y}U(x,y)&{}=\left( \dfrac{y}{rR^2(\theta )}-\dfrac{x}{rR^3 (\theta )}\partial _\theta R(\theta )\right) \partial _{r}u(r,\theta )+\dfrac{x}{r^2R^2(\theta )}\partial _{\theta }u(r,\theta )\\ &{}=\left( \dfrac{\sin \theta }{R(\theta )}-\dfrac{\cos \theta }{R^2(\theta )}\partial _\theta R(\theta )\right) \partial _{r}u(r,\theta ) +\dfrac{\cos \theta }{rR(\theta )}\partial _{\theta }u(r,\theta ), \end{array} \end{aligned}$$
(3.5)

where \(u(r,\theta ):=U(rR(\theta )\cos \theta ,rR(\theta )\sin \theta ).\)

We are now ready to change to polar coordinates in the Laplacian. Differentiating (3.4) again with respect to x and y, respectively, we get

$$\begin{aligned} \begin{array}{lll} &{}\partial _{x}^2 \theta = \dfrac{2xy}{r^4R^4(\theta )}, \quad &{} \partial _{x}^2 r = \dfrac{1}{rR^2(\theta )} - \dfrac{x^2}{r^3R^4(\theta )} - \dfrac{y^2}{r^3R^5(\theta )}\partial ^2_\theta R(\theta ) + \dfrac{2y^2}{r^3R^6(\theta )}\big (\partial _\theta R(\theta )\big )^2, \\ {} &{}\partial _{y}^2 \theta = -\dfrac{2xy}{r^4R^4(\theta )}, \quad &{} \partial _{y}^2 r = \dfrac{1}{rR^2(\theta )} - \dfrac{y^2}{r^3R^4(\theta )} - \dfrac{x^2}{r^3R^5(\theta )}\partial ^2_\theta R(\theta ) + \dfrac{2x^2}{r^3R^6(\theta )}\big (\partial _\theta R(\theta )\big )^2. \end{array} \end{aligned}$$
(3.6)

Applying the product rule for differentiation and the chain rule in two dimensions, we have

$$\begin{aligned} \begin{aligned} \partial _{x}^2 U(x,y)&+\partial _{y}^2 U(x,y) = \partial _{r}^2 u(r,\theta )\big ((\partial _ {x} r)^{2}+(\partial _ {y} r)^{2}\big ) +\partial _{\theta }^2 u(r,\theta )\big ((\partial _ {x} \theta )^{2}+(\partial _ {y} \theta )^{2}\big ) \\&+2 \partial _{r\theta }^2 u(r,\theta )\big (\partial _ {x} \theta \partial _ {x} r+\partial _ {y} \theta \partial _ {y} r \big ) + \partial _{r} u(r,\theta ) \big ( \partial ^2_ {x} r +\partial ^2_ {y} r \big )\\&+ \partial _{\theta } u(r,\theta ) \big (\partial ^2_ {x} \theta +\partial ^2_ {y} \theta \big ). \end{aligned} \end{aligned}$$
(3.7)

By (3.4), (3.6) and (3.7), a direct computation leads to the following result,

$$\begin{aligned} \begin{aligned} \partial _{x}^2 U(x,y)+\partial _{y}^2 U(x,y)&= \frac{1}{rR^2(\theta )} \left\{ \big (1+ \frac{(\partial _\theta R(\theta ))^2}{R^2(\theta )} \big ) \frac{\partial }{\partial r}\big (r\partial _{r} u(r,\theta )\big )\right. \\&\quad -\frac{\partial }{\partial r}\big (\frac{\partial _\theta R(\theta )}{ R(\theta )} \partial _{\theta } u(r,\theta )\big )-\,\frac{\partial }{\partial \theta }\big (\frac{\partial _\theta R(\theta )}{ R(\theta )} \partial _{r} u(r,\theta )\big )\\&\left. \quad +\frac{1}{r} \partial _{\theta }^2 u(r,\theta )\right\} =: \Delta _{p} u(r,\theta ). \end{aligned} \end{aligned}$$
(3.8)

Furthermore, differentiating (3.1) with respect to r and \(\theta \), respectively, we find that the Jacobian determinant of the transformation (3.1) is

$$\begin{aligned} |J(r,\theta )| =rR^2(\theta ). \end{aligned}$$
(3.9)

Therefore

$$\begin{aligned} \iint _{\Omega }U(x,y)\textrm{d}x\textrm{d}y=\iint _{\Theta }u(r,\theta )rR^2(\theta )\textrm{d}r\textrm{d}\theta . \end{aligned}$$
(3.10)

3.2 Fourier–Legendre Spectral-Galerkin Method

To illustrate the Fourier–Legendre spectral-Galerkin method based on the polar coordinate transformation for problems in two-dimensional complex geometries, we consider the following elliptic equation with a homogeneous boundary condition on \(\Omega \):

$$\begin{aligned} \left\{ \begin{aligned}&-\Delta U(x,y) + \mu U(x,y)=F(x,y),\quad \mu \ge 0,\quad (x,y)\in \Omega ,\\ {}&U(x,y)=0,\quad (x,y)\in \Gamma . \end{aligned} \right. \end{aligned}$$
(3.11)

Applying the polar coordinate transformation (3.1) to (3.11), and denoting

$$u(r,\theta ) = U(rR(\theta )\cos \theta ,rR(\theta )\sin \theta ),\qquad f(r,\theta ) = F(rR(\theta )\cos \theta ,rR(\theta )\sin \theta ),$$

we obtain

$$\begin{aligned} \left\{ \begin{aligned}&-\Delta _{p} u(r,\theta )+ \mu u(r,\theta )= f(r,\theta ),\quad (r,\theta ) \in \Theta ,\\ {}&u(1,\theta )=0,\quad \theta \in [0,2\pi ),\\ {}&u(r,\theta +2\pi )=u(r,\theta ),\quad \theta \in [0,2\pi ),\quad r\in [0,1), \end{aligned} \right. \end{aligned}$$
(3.12)

where \(u(r,\theta )\) satisfies the essential pole condition \(\partial _{\theta } u(0,\theta )=0.\)

Multiplying the first equation of (3.12) by \(rR^2(\theta )v\) and integrating the result over \(\Theta \), we derive a weak formulation of (3.12): find \(u\in {}_0H^1_p(\Theta )\) such that for any \(v\in {}_0H^1_p(\Theta )\),

$$\begin{aligned} \begin{aligned} a(u,v):=&\iint _{\Theta } r\left( 1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\right) \partial _{r} u(r,\theta ) \partial _{r} v(r,\theta ) \textrm{d} r \textrm{d} \theta +\iint _{\Theta } \frac{1}{r}\partial _{\theta }u(r,\theta ) \partial _{\theta }v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&- \iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{r}u(r,\theta ) \partial _{\theta }v(r,\theta ) \textrm{d} r \textrm{d} \theta - \iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{\theta }u(r,\theta ) \partial _{r}v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&+ \mu \iint _{\Theta }rR^2(\theta ) u(r,\theta ) v(r,\theta ) \textrm{d} r \textrm{d} \theta =\iint _{\Theta }rR^2(\theta )f(r,\theta )v(r,\theta ) \textrm{d} r \textrm{d} \theta , \end{aligned} \end{aligned}$$
(3.13)

where

$$\begin{aligned} {}_0H^{1}_{p} (\Theta )=\big \{v ~|~ v ~\mathrm{is~measurable},~ v(r,\theta +2\pi )=v(r,\theta ),~v(1,\theta )=0,~\partial _{\theta } v(0,\theta )=0 ~\mathrm{and}~ \Vert v\Vert _{1,p,\Theta }<\infty \big \} \end{aligned}$$

equipped with the semi-norm \(|\cdot |_{1,p,\Theta }\) and norm \(\Vert \cdot \Vert _{1,p,\Theta }\) as follows:

$$\begin{aligned}&|v|_{1,p,\Theta }=|V|_{H^1(\Omega )}=\Bigg ( \iint _{\Theta }r\Big (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Big ) \big ( \partial _{r} v(r,\theta ) \big )^2 \textrm{d} r \textrm{d} \theta + \iint _{\Theta } \frac{1}{r}\big ( \partial _{\theta }v(r,\theta ) \big )^2 \textrm{d} r \textrm{d} \theta \\ {}&\qquad \qquad \qquad \qquad \quad - 2\iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{r}v(r,\theta ) \partial _{\theta }v(r,\theta ) \textrm{d} r \textrm{d} \theta \Bigg )^{\frac{1}{2}},\\&\Vert v\Vert _{1,p,\Theta }=\Vert V\Vert _{H^1(\Omega )}=\Bigg (|v|^2_{1,p,\Theta }+\iint _{\Theta }rR^2(\theta ) ( v(r,\theta ))^2 \textrm{d} r \textrm{d} \theta \Bigg )^{\frac{1}{2}}. \end{aligned}$$

Hereafter, \(v(r,\theta )=V(rR(\theta )\cos \theta ,rR(\theta )\sin \theta ).\)

Next, let

$$\begin{aligned} b(U,V):=\iint _{\Omega }\nabla U(x,y)\cdot \nabla V(x,y) \textrm{d}x \textrm{d}y + \mu \iint _{\Omega } U(x,y)V(x,y)\textrm{d}x \textrm{d}y. \end{aligned}$$
(3.14)

Clearly, we have \(a(u,v)=b(U,V)\). Since

$$\begin{aligned} b(U,V)=\iint _{\Omega }\nabla U \cdot \nabla V\textrm{d}x\textrm{d}y+\mu \iint _{\Omega } U V\textrm{d}x\textrm{d}y\le (1+\mu )\Vert U\Vert _{H^1(\Omega )}\Vert V\Vert _{H^1(\Omega )}, \end{aligned}$$
(3.15)

and by the Poincaré inequality,

$$\begin{aligned} b(U,U)=\iint _{\Omega }|\nabla U|^2\textrm{d}x\textrm{d}y+\mu \iint _{\Omega } U^2\textrm{d}x\textrm{d}y\ge C\Vert U\Vert ^2_{H^1(\Omega )}, \end{aligned}$$
(3.16)

where \(C>0\) depends only on \(\Omega ,\) we deduce

$$\begin{aligned} a(u,v)\le (1+\mu )\Vert u\Vert _{1,p,\Theta }\Vert v\Vert _{1,p,\Theta },\qquad a(u,u)\ge C\Vert u\Vert ^2_{1,p,\Theta }. \end{aligned}$$
(3.17)

This implies that \(a(\cdot ,\cdot )\) is continuous and coercive on \({}_0H^{1}_{p} (\Theta )\). Hence, by the Lax–Milgram lemma, (3.13) admits a unique solution as long as \(f\in ({}_0H^1_p(\Theta ))^\prime .\)

Remark 3.1

For problem (3.12) with the nonhomogeneous boundary condition \(u(1,\theta ) = h(\theta ),\) we can make the change of variable \(w(r,\theta )=u(r,\theta )-rh(\theta )\) to transform the original problem into one with a homogeneous boundary condition.

We next propose a Fourier–Legendre spectral-Galerkin method for (3.13). To do this, let

$$\begin{aligned} \mathcal {P}^0_{N}(I)=\left\{ v \in \mathcal {P}_N:v(0)=v(1)=0\right\} ,\qquad {}_0\mathcal {P}_{N}(I)=\left\{ v \in \mathcal {P}_N:v(1)=0\right\} . \end{aligned}$$
(3.18)

Obviously, by choosing \(\varphi _k(r) = L_k(r) -L_{k+2}(r) \) and \(\psi _k(r) = L_k(r)-L_{k+1}(r),\) we have from (2.3) that

$$\begin{aligned} \mathcal {P}^0_{N}(I)= {\text {span}} \{ \varphi _k(r), \quad 0 \le k \le N-2 \} ,\qquad {}_0\mathcal {P}_{N}(I)= {\text {span}} \{ \psi _k(r), \quad 0 \le k \le N-1 \}. \end{aligned}$$
(3.19)

Denote

$$\begin{aligned} {}_0X_{M,N}(\Theta )= {\text {span}} \Big \{ \big \{ \psi _k(r) \big \}_{0 \le k \le N-1},~ \big \{\cos (m\theta )\varphi _k(r),~\sin (m\theta )\varphi _k(r) \big \}_{1 \le m \le M,~ 0 \le k \le N-2} \Big \}. \end{aligned}$$
(3.20)

Then, for any \(u(r,\theta ) \in {}_0X_{M,N}(\Theta ),\) \(u(1,\theta )=0\) and the pole condition \(\partial _{\theta }u(0,\theta )=0\) is satisfied. We expand the numerical solution \(u_{MN}(r,\theta )\) as

$$\begin{aligned} u_{MN}(r,\theta )= \displaystyle \sum _{k=0}^{N-2}\displaystyle \sum _{m=1}^{M}\Big (u^1_{m,k}\sin (m\theta )\varphi _k(r) + u^2_{m,k}\cos (m\theta )\varphi _k(r)\Big ) + \displaystyle \sum _{k=0}^{N-1} u^3_{k}\psi _k(r). \end{aligned}$$
(3.21)
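A brief sketch of how the expansion (3.21) is evaluated at given points (Python, reusing the `shifted_legendre` helper from the sketch at the end of Sect. 2; the function name and the array layout of the coefficients are our own choices):

```python
import numpy as np

def eval_uMN(u1, u2, u3, r, theta):
    """Evaluate (3.21) at 1-D arrays of points r, theta (of equal length).

    u1, u2 have shape (M, N-1) and hold u^1_{m,k}, u^2_{m,k} for m = 1..M, k = 0..N-2;
    u3 has length N and holds u^3_k for k = 0..N-1.
    """
    M, Nm1 = u1.shape                                # Nm1 = N - 1
    L = shifted_legendre(Nm1 + 1, r)                 # L_0, ..., L_N at the points r
    phi = L[:Nm1] - L[2:Nm1 + 2]                     # phi_k = L_k - L_{k+2}
    psi = L[:Nm1 + 1] - L[1:Nm1 + 2]                 # psi_k = L_k - L_{k+1}
    m = np.arange(1, M + 1)[:, None]
    S, C = np.sin(m * theta), np.cos(m * theta)      # shape (M, n_points)
    out = (u1[:, :, None] * S[:, None, :] * phi[None, :, :]).sum(axis=(0, 1))
    out += (u2[:, :, None] * C[:, None, :] * phi[None, :, :]).sum(axis=(0, 1))
    out += (u3[:, None] * psi).sum(axis=0)
    return out
```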

The Fourier–Legendre spectral-Galerkin approximation for (3.13) is to find \( u_{MN}(r,\theta ) \in {}_0X_{M,N}(\Theta )\) such that for any \(v\in {}_0X_{M,N}(\Theta ),\)

$$\begin{aligned} \begin{aligned} a(u_{MN},v)&=\iint _{\Theta } r\big (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\big ) \partial _{r} u_{MN}(r,\theta ) \partial _{r} v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&\quad +\iint _{\Theta } \frac{1}{r}\partial _{\theta }u_{MN}(r,\theta ) \partial _{\theta }v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&\quad - \iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{r}u_{MN}(r,\theta ) \partial _{\theta }v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&\quad - \iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{\theta }u_{MN}(r,\theta ) \partial _{r}v(r,\theta ) \textrm{d} r \textrm{d} \theta \\ {}&\quad + \mu \iint _{\Theta }rR^2(\theta ) u_{MN}(r,\theta ) v(r,\theta ) \textrm{d} r \textrm{d} \theta =\iint _{\Theta }rR^2(\theta )f(r,\theta )v(r,\theta ) \textrm{d} r \textrm{d} \theta . \end{aligned} \end{aligned}$$
(3.22)

3.3 Numerical Implementation

We describe in this subsection how our method can be efficiently implemented. To this end, we first consider the r-direction. Let

$$\begin{aligned} \begin{aligned}&a^{q,m,n}_{k,j}=\displaystyle \int _0^1r^{q}\varphi _{j}^{(m)}(r) \varphi _{k}^{(n)}(r) \textrm{d} r,\quad b^{q,m,n}_{k,j}=\displaystyle \int _0^1r^{q}\psi _{j}^{(m)}(r) \varphi _{k}^{(n)}(r) \textrm{d} r,\\ {}&c^{q,m,n}_{k,j}=\displaystyle \int _0^1r^{q}\psi _{j}^{(m)}(r) \psi _{k}^{(n)}(r) \textrm{d} r,\quad q=-1,0,1,\quad m,n=0,1, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&A^{q,m,n}=\big (a^{q,m,n}_{k,j}\big )_{0\le k,j\le N-2},\quad B^{q,m,n}=\big (b^{q,m,n}_{k,j}\big )_{0\le k\le N-2,0\le j\le N-1},\\ {}&C^{q,m,n}=\big (c^{q,m,n}_{k,j}\big )_{0\le k,j\le N-1}. \end{aligned} \end{aligned}$$

Clearly, the matrices \(A^{q,m,n},\) \(B^{q,m,n}\) and \(C^{q,m,n}\) can be computed precisely according to the recurrence relations (2.2)–(2.4).
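As one possible realization (not necessarily the authors' implementation), the entries of \(A^{q,m,n}\) can also be obtained exactly by Gauss–Legendre quadrature of sufficiently high order, with the basis derivatives supplied by the last relation in (2.3); a sketch in Python, reusing the `shifted_legendre` helper from Sect. 2:

```python
import numpy as np

def shifted_legendre_deriv(n_max, r):
    """L_n'(r), n = 0..n_max, via the derivative relation in (2.3)."""
    L = shifted_legendre(n_max, r)
    dL = np.zeros_like(L)
    for n in range(1, n_max + 1):
        for k in range(n - 1, -1, -2):               # k = n-1, n-3, ... (k + n odd)
            dL[n] += 2.0 * (2 * k + 1) * L[k]
    return dL

def A_matrix(N, q, m, n):
    """A^{q,m,n} with entries a^{q,m,n}_{k,j} = int_0^1 r^q phi_j^{(m)} phi_k^{(n)} dr.

    In (3.22) the case q = -1 occurs only with m = n = 0; since phi_j(0) = 0, the
    integrand r^{-1} phi_j phi_k is then still a polynomial and the quadrature is exact.
    """
    nodes, weights = np.polynomial.legendre.leggauss(2 * N + 8)
    r, w = 0.5 * (nodes + 1.0), 0.5 * weights
    L, dL = shifted_legendre(N, r), shifted_legendre_deriv(N, r)
    phi = (L[:N - 1] - L[2:N + 1], dL[:N - 1] - dL[2:N + 1])   # (phi_k), (phi_k')
    return (phi[n] * (r ** q * w)) @ phi[m].T

# B^{q,m,n} and C^{q,m,n} are assembled analogously with psi_k = L_k - L_{k+1}.
```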

We next describe the implementation in the \(\theta \)-direction. Let \( 1\le k,j\le M \) and denote the matrices \(D^{1}=\big (d^{1}_{k,j}\big ),\) \(D^{2}=\big (d^{2}_{k,j}\big ),\) \(D^{3}=\big (d^{3}_{k,j}\big ),\) the vectors \(D^{4}=\big (d^{4}_{j}\big )\) and \(D^{5}=\big (d^{5}_{j}\big ),\) and the scalar d, with the elements

$$\begin{aligned} \begin{array}{lll} &{}d^{1}_{k,j}\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\sin (j\theta )\sin (k\theta )\textrm{d} \theta ,\quad &{} d^{2}_{k,j}\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\cos (j\theta )\sin (k\theta )\textrm{d} \theta ,\\ {} &{} d^{3}_{k,j}\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\cos (j\theta )\cos (k\theta )\textrm{d} \theta ,\quad &{} d^{4}_{j}\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\sin (j\theta )\textrm{d} \theta ,\\ {} &{} d^{5}_{j}\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\cos (j\theta )\textrm{d} \theta ,\quad &{}d\,=\,\displaystyle \int _0^{2\pi }\Bigg (1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\Bigg )\textrm{d} \theta . \end{array} \end{aligned}$$

Moreover, set the matrix \(E=\big (e_{k,j}\big )\) with the elements \(e_{k,j}=k^2\pi \delta _{kj}.\) Similarly, denote the following matrices and their corresponding elements

$$\begin{aligned} \begin{array}{llllll} &{}G^{1}=\big (g^{1}_{k,j}\big ),\quad &{}G^{2}=\big (g^{2}_{k,j}\big ),\quad &{}G^{3}=\big (g^{3}_{k,j}\big ),\quad &{}G^{4}=\big (g^{4}_{k,j}\big ),\quad &{}G^{5}=\big (g^{5}_{j}\big ),\quad G^{6}=\big (g^{6}_{j}\big ),\quad \\ &{}S^{1}=\big (s^{1}_{k,j}\big ),\quad &{}S^{2}=\big (s^{2}_{k,j}\big ),\quad &{}S^{3}=\big (s^{3}_{k,j}\big ),\quad &{}S^{4}=\big (s^{4}_{j}\big ),\quad &{}S^{5}=\big (s^{5}_{j}\big ), \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{lll} &{}g^{1}_{k,j}=\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )} \sin (j\theta )\cos (k\theta )k\textrm{d} \theta ,\quad &{}g^{2}_{k,j}=\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )} \cos (j\theta )\cos (k\theta )k\textrm{d} \theta ,\\ &{}g^{3}_{k,j}=-\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )} \sin (j\theta )\sin (k\theta )k\textrm{d} \theta ,\quad &{}g^{4}_{k,j}=-\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )} \cos (j\theta )\sin (k\theta )k\textrm{d} \theta ,\\ &{}g^{5}_{j}=\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )}j\cos (j\theta )\textrm{d} \theta ,\quad &{}g^{6}_{j}=-\displaystyle \int _0^{2\pi }\frac{\partial _\theta R(\theta )}{ R(\theta )} j\sin (j\theta )\textrm{d} \theta ,\\ &{}s^{1}_{k,j}=\displaystyle \int _0^{2\pi }R^2(\theta )\sin (j\theta )\sin (k\theta )\textrm{d} \theta ,\quad &{} s^{2}_{k,j}=\displaystyle \int _0^{2\pi }R^2(\theta )\cos (j\theta )\sin (k\theta )\textrm{d} \theta ,\\ {} &{} s^{3}_{k,j}=\displaystyle \int _0^{2\pi }R^2(\theta )\cos (j\theta )\cos (k\theta )\textrm{d} \theta ,\quad &{} s^{4}_{j}=\displaystyle \int _0^{2\pi }R^2(\theta )\sin (j\theta )\textrm{d} \theta ,\\ {} &{} s^{5}_{j}=\displaystyle \int _0^{2\pi }R^2(\theta )\cos (j\theta )\textrm{d} \theta ,\quad &{}s=\displaystyle \int _0^{2\pi }R^2(\theta )\textrm{d} \theta . \end{array} \end{aligned}$$

The matrices mentioned above can be efficiently approximated by suitable quadrature formulas; a sketch is given at the end of this subsection. To derive a compact matrix form of (3.22), we introduce the matrices

$$\begin{aligned}&W_1 = A^{1,1,1}\otimes D^1 + A^{-1,0,0}\otimes E - A^{0,1,0}\otimes G^1 - \big (A^{0,1,0}\otimes G^1\big )^T +\mu A^{1,0,0} \otimes S^1,\\&W_2 = A^{1,1,1}\otimes D^2 - A^{0,1,0} \otimes G^2 - \big (A^{0,1,0} \otimes G^3\big )^T + \mu A^{1,0,0} \otimes S^2,\\&W_3 = A^{1,1,1} \otimes D^3 +A^{-1,0,0}\otimes E - A^{0,1,0}\otimes G^4- \big (A^{0,1,0}\otimes G^4\big )^T + \mu A^{1,0,0} \otimes S^3,\\&W_4 = B^{1,1,1} \otimes D^4 - B^{0,1,0} \otimes G^5 + \mu B^{1,0,0}\otimes S^4,\\&W_5 = B^{1,1,1} \otimes D^5 - B^{0,1,0} \otimes G^6 + \mu B^{1,0,0} \otimes S^5,\\&W_6 = d\,C^{1,1,1} + \mu s\,C^{1,0,0},\\&\textbf{W} = \begin{pmatrix} W_1 & W_2 & W_4\\ W_2^T & W_3 & W_5\\ W_4^T & W_5^T & W_6 \end{pmatrix}, \end{aligned}$$

where \(\otimes \) represents the Kronecker product, i.e., \(A \otimes B=(a_{ij}B).\) We also denote

$$\begin{aligned}&f_{m,k}^1=\iint _{\Theta }rR^2(\theta )f(r,\theta )\sin (m\theta ) \varphi _{k}(r) \textrm{d} r \textrm{d} \theta , \quad \mathbf {f_1}= \big (f^1_{1,0},\cdots ,f^1_{M,0},\cdots ,f^1_{1,N-2},\cdots ,f^1_{M,N-2}\big )^{T},\\&f_{m,k}^2=\iint _{\Theta }rR^2(\theta )f(r,\theta )\cos (m\theta ) \varphi _{k} (r) \textrm{d} r \textrm{d} \theta , \quad \mathbf {f_2} = \big (f^2_{1,0},\cdots ,f^2_{M,0},\cdots ,f^2_{1,N-2},\cdots ,f^2_{M,N-2}\big )^{T},\\&f_{k}^3=\iint _{\Theta }rR^2(\theta )f(r,\theta ) \psi _{k} (r) \textrm{d} r \textrm{d} \theta , \quad \mathbf {f_3} = \big (f^3_{0},\cdots ,f^3_{N-1}\big )^{T},\\&\mathbf {u_1} = \big (u^1_{1,0},\cdots ,u^1_{M,0},\cdots ,u^1_{1,N-2},\cdots ,u^1_{M,N-2}\big )^{T}, \quad \mathbf {u_2} = \big (u^2_{1,0},\cdots ,u^2_{M,0},\cdots ,u^2_{1,N-2},\cdots ,u^2_{M,N-2}\big )^T,\\&\mathbf {u_3} = \big (u^3_{0},\cdots ,u^3_{N-1}\big )^T,\quad \textbf{f} = \big (\mathbf {f_1} ;\mathbf {f_2} ;\mathbf {f_3}\big ),\quad \textbf{u} = \big (\mathbf {u_1} ;\mathbf {u_2} ;\mathbf {u_3}\big ). \end{aligned}$$

Then, the compact matrix form of (3.22) is as follows,

$$\begin{aligned} \textbf{W}\textbf{u}=\textbf{f}. \end{aligned}$$
(3.23)

It should be pointed out that the matrix \(C^{1,1,1}\) is diagonal, the matrices \(A^{1,1,1}\), \(A^{-1,0,0}\), \(A^{0,1,0}\), \(B^{1,1,1}\) and \(B^{0,1,0}\) are tridiagonal, the matrix \(C^{1,0,0}\) is pentadiagonal, and the matrices \(A^{1,0,0}\) and \(B^{1,0,0}\) are heptadiagonal. Moreover, the matrix \(\textbf{W}\) in (3.23) is symmetric. The system (3.23) can be solved in the same way as the usual spectral-Galerkin methods for PDEs with variable coefficients in regular geometries (cf. [18]).
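For completeness, the \(\theta \)-direction matrices can be approximated with the periodic trapezoidal rule, which is spectrally accurate for the smooth periodic weights involved. The sketch below (Python; R and dR are callables for \(R(\theta )\) and \(\partial _\theta R(\theta )\), and the helper name is ours) assembles \(D^1\), \(G^1\), \(S^1\), d and s; the remaining matrices follow in the same way.

```python
import numpy as np

def theta_matrices(M, R, dR, K=256):
    """Approximate D^1, G^1, S^1 and the scalars d, s by the K-point trapezoidal rule."""
    theta = 2.0 * np.pi * np.arange(K) / K
    w = 2.0 * np.pi / K
    wD = 1.0 + (dR(theta) / R(theta)) ** 2           # weight in D^1, ..., D^5 and d
    wG = dR(theta) / R(theta)                        # weight in G^1, ..., G^6
    wS = R(theta) ** 2                               # weight in S^1, ..., S^5 and s
    k = np.arange(1, M + 1)[:, None]
    S, C = np.sin(k * theta), np.cos(k * theta)      # rows: frequencies 1..M
    D1 = w * (S * wD) @ S.T                          # d^1_{k,j}
    G1 = w * (C * k * wG) @ S.T                      # g^1_{k,j} (note the factor k)
    S1 = w * (S * wS) @ S.T                          # s^1_{k,j}
    return D1, G1, S1, w * wD.sum(), w * wS.sum()
```

With the one-dimensional matrices at hand, the blocks of \(\textbf{W}\) are Kronecker products; for instance, under the unknown ordering used in \(\mathbf {u_1}\), the block \(W_1\) may be formed as np.kron(A111, D1) + np.kron(Am100, E) - np.kron(A010, G1) - np.kron(A010, G1).T + mu * np.kron(A100, S1), where E = np.diag(np.pi * np.arange(1, M + 1) ** 2) and A111, Am100, A010, A100 denote \(A^{1,1,1}\), \(A^{-1,0,0}\), \(A^{0,1,0}\), \(A^{1,0,0}\) (our placeholder names).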

4 Convergence Analysis

In this section, we shall analyze the numerical error of scheme (3.22).

We first consider the Legendre orthogonal approximation. Denote by \(\omega ^{\alpha ,\beta }:=\omega ^{\alpha ,\beta }(r)=(1-r)^{\alpha }r^{\beta }\) the Jacobi weight function of index \((\alpha ,\beta ),\) which is not necessarily in \(L^1(I).\) Define the \(L^2\)-orthogonal projection \(\pi ^{0,0}_{N}:L^2(I)\rightarrow \mathcal {P}_{N}(I)\) such that

$$\begin{aligned} (\pi _N^{0,0}u-u,v)_{I}=0,\quad v\in \mathcal {P}_N (I). \end{aligned}$$

Further, for \(k\ge 1,\) we define recursively the \(H^k\)-orthogonal projections \(\pi _N^{-k,-k}:H^k(I)\rightarrow \mathcal {P}_N(I)\) such that

$$\begin{aligned} \pi _N^{-k,-k}u(r)=\int _0^r\pi ^{1-k,1-k}_{N-1}u^{\prime }(t)\textrm{d}t+u(0). \end{aligned}$$

Next, for any nonnegative integers \(s\ge k\ge 0,\) define the Sobolev space as follows:

$$\begin{aligned} H^{s,k}(I)=\{u~|~u\in H^k(I)~\textrm{and}~\sum _{l=0}^{s}\Vert \partial _r^l u\Vert _{\omega ^{\max (l-k,0),\max (l-k,0)},I}<\infty \}. \end{aligned}$$

We have the following error estimate on \(\pi _N^{-k,-k}u.\)

Lemma 4.1 (cf. [1]) \(\pi _N^{-k,-k}u\) is a Legendre tau approximation of u such that

$$\begin{aligned}&\partial _r^l\pi _N^{-k,-k}u(0)=\partial _r^l u(0),\quad \partial _r^l\pi _N^{-k,-k}u(1)=\partial _r^l u(1),\quad 0\le l\le k-1,\\ {}&(\pi _N^{-k,-k}u-u,v)=0,\quad v\in \mathcal {P}_{N-2k}(I). \end{aligned}$$

Further suppose \(u\in H^{s,k}(I)\) with \(s\ge k.\) Then for \(N\ge k,\)

$$\begin{aligned} \Vert \partial _r^l(\pi _N^{-k,-k}u-u)\Vert _{\omega ^{l-k,l-k},I} \le cN^{l-s}\Vert \partial _r^s u\Vert _{\omega ^{s-k,s-k},I},\quad 0\le l\le k\le s. \end{aligned}$$

We next consider the Fourier orthogonal approximation. Let \(\Lambda =(0,2\pi )\) and \(H^m(\Lambda )\) be the Sobolev space with norm \(\Vert \cdot \Vert _{m,\Lambda }\) and semi-norm \(|\cdot |_{m,\Lambda }.\) For any non-negative integer m, \(H^m_{p}(\Lambda )\) denotes the subspace of \(H^m(\Lambda )\) consisting of all functions whose derivatives of order up to \(m-1\) have period \(2\pi .\) In particular, \(L^2(\Lambda )=H^0_{p}(\Lambda ).\)

Let M be any positive integer, and \(\widetilde{V}_{M}(\Lambda )=\hbox {span}\{~e^{il\theta }~|~|l|\le M\}.\) We denote by \(V_{M}(\Lambda )\) the subset of \(\widetilde{V}_{M}(\Lambda )\) consisting of all real-valued functions. The orthogonal projection \(P_{M}:\) \(L^2(\Lambda )\rightarrow V_{M}(\Lambda )\) is defined by

$$\begin{aligned} \displaystyle \int _{\Lambda }(P_{M}v(\theta )-v(\theta ))\phi (\theta )\textrm{d}\theta =0, \qquad \forall \phi \in V_{M}(\Lambda ). \end{aligned}$$

It was shown in [4] that for any \(v \in H^m_{p}(\Lambda )\), integer \(m\ge 0\) and \(\mu \le m\),

$$\begin{aligned} \Vert P_{M}v-v\Vert _{\mu ,\Lambda }\le cM^{\mu -m}|v|_{m,\Lambda }.\end{aligned}$$
(4.1)

We now turn to the mixed Fourier–Legendre orthogonal approximation. We introduce the space

$$\begin{aligned} {}_{0}\widetilde{H}^1_{p}(\Theta )=\{~v~|~ v(r,\theta +2\pi )=v(r,\theta ),~ v(1,\theta )=0,~\partial _{\theta }v(0,\theta )=0 ~ \textrm{and}~\Vert v\Vert _{\widetilde{H}^1(\Theta )}<\infty \}, \end{aligned}$$

equipped with the following semi-norm and norm,

$$\begin{aligned} |v|_{\widetilde{H}^1(\Theta )}=\left( \displaystyle \iint _{\Theta } (\partial _{r}v)^2\textrm{d} r \textrm{d} \theta +\displaystyle \iint _{\Theta } \dfrac{1}{r} (\partial _{\theta }v)^2\textrm{d} r \textrm{d} \theta \right) ^{\frac{1}{2}},\quad \Vert v\Vert _{\widetilde{H}^1(\Theta )}=\left( \displaystyle \iint _{\Theta }v^2\textrm{d} r \textrm{d} \theta +|v|^2_{\widetilde{H}^1(\Theta )}\right) ^{\frac{1}{2}}. \end{aligned}$$

Since \(|\frac{\partial _{\theta }R(\theta )}{R(\theta )}|\) is bounded above, we readily deduce that \({}_{0}\widetilde{H}^1_{p}(\Theta )\) is a subspace of \({}_0H^{1}_{p} (\Theta ).\) Next, we define the orthogonal projection \({}_0P^1_{M,N}\): \({}_0\widetilde{H}^{1}_{p} (\Theta )\rightarrow {}_0X_{M,N}(\Theta )\) by

$$\begin{aligned}&\displaystyle \iint _{\Theta } \partial _{r} ({}_0P^1_{M,N}u(r,\theta )-u(r,\theta )) \partial _{r} \phi (r,\theta ) \textrm{d} r \textrm{d} \theta \nonumber \\&\quad + \displaystyle \iint _{\Theta } \dfrac{1}{r} \partial _{\theta }({}_0P^1_{M,N}u(r,\theta )-u(r,\theta )) \partial _{\theta }\phi (r,\theta ) \textrm{d} r \textrm{d} \theta \nonumber \\ {}&\quad +\displaystyle \iint _{\Theta } ({}_0P^1_{M,N}u(r,\theta )-u(r,\theta ))\phi (r,\theta ) \textrm{d} r \textrm{d} \theta =0, \qquad \forall \phi \in {}_0X_{M,N}(\Theta ). \end{aligned}$$
(4.2)

Theorem 4.1

For any \(v \in {}_{0}\widetilde{H}^1_{p}(\Theta )\cap H^{m,1}(I,L^2(\Lambda ))\cap L^2(I,H^s_p(\Lambda ))\), integers \(m\ge 1\) and \(s\ge 1\),

$$\begin{aligned} \begin{array}{ll}\Vert v-{}_0P^1_{M,N} v\Vert ^2_{\widetilde{H}^1(\Theta )} \le &{}cM^{2-2s}\displaystyle \iint _{\Theta } \left( (\partial _r\partial ^{s-1}_\theta v(r,\theta ))^2+ \frac{1}{r}(\partial ^s_\theta v(r,\theta ))^2\right) \textrm{d} r \textrm{d} \theta \\ &{}+cN^{2-2m}\left( |v|^2_{H^m_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}+|\partial _\theta v|^2_{H^{m-1}_{\omega ^{m-2,m-2}}(I,L^2(\Lambda ))}\right) , \end{array} \end{aligned}$$
(4.3)

provided that the norms mentioned above are bounded.

Proof

By the projection theorem,

$$\begin{aligned} \Vert v-{}_0P^1_{M,N} v\Vert _{\widetilde{H}^1(\Theta )}\le c\Vert v-\phi \Vert _{\widetilde{H}^1(\Theta )}, \qquad \forall \phi \in {}_0X_{M,N}(\Theta ). \end{aligned}$$

Take \(\phi =\pi _N^{-1,-1}P_Mv.\) Clearly, by Lemma 4.1 we know \(\phi \in {}_0X_{M,N}(\Theta ).\) With the aid of (4.1) and Lemma 4.1, we deduce that

$$\begin{aligned} \begin{array}{lll}&{}\displaystyle \iint _{\Theta } \big (\partial _r(v- \pi _N^{-1,-1}P_Mv)\big )^2\textrm{d} r \textrm{d} \theta \\ &{}\quad \le 2\displaystyle \iint _{\Theta } (\partial _rv- P_M\partial _rv)^2\textrm{d} r \textrm{d}\theta +2\displaystyle \iint _{\Theta } \big (\partial _r(P_Mv- \pi _N^{-1,-1}P_Mv)\big )^2\textrm{d} r \textrm{d}\theta \\ &{}\quad \le cM^{2-2s}\displaystyle \iint _{\Theta } \big (\partial _r\partial ^{s-1}_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{2-2m}|P_{M}v|^2_{H^m_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))} \\ &{}\quad \le cM^{2-2s}\displaystyle \iint _{\Theta } \big (\partial _r\partial ^{s-1}_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{2-2m}|v|^2_{H^m_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}.\end{array}\end{aligned}$$
(4.4)

Next, due to \(\partial _\theta P_M v=P_M \partial _\theta v\), we use (4.1) and Lemma 4.1 again to obtain that

$$\begin{aligned} \begin{array}{lll} &{}\displaystyle \iint _{\Theta } \dfrac{1}{r} \big (\partial _{\theta }(v-\pi _N^{-1,-1}P_Mv) \big )^2 \textrm{d} r \textrm{d} \theta \\ &{}\quad \le 2\displaystyle \iint _{\Theta } \dfrac{1}{r} \big (P_M\partial _{\theta }v-\partial _{\theta }v \big )^2 \textrm{d} r \textrm{d}\theta +2\displaystyle \iint _{\Theta } \dfrac{1}{r} \big (\partial _{\theta }(\pi _N^{-1,-1}P_Mv-P_Mv) \big )^2 \textrm{d} r \textrm{d}\theta \\ &{}\quad \le cM^{2-2s}\displaystyle \iint _{\Theta } \frac{1}{r}\big (\partial ^s_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{2-2m}|P_M \partial _\theta v|^2_{H^{m-1}_{\omega ^{m-2,m-2}}(I,L^2(\Lambda ))}\\ &{}\quad \le cM^{2-2s}\displaystyle \iint _{\Theta } \frac{1}{r}\big (\partial ^s_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{2-2m}|\partial _\theta v|^2_{H^{m-1}_{\omega ^{m-2,m-2}}(I,L^2(\Lambda ))}.\end{array}\end{aligned}$$
(4.5)

In the same manner, we verify that

$$\begin{aligned} \begin{array}{lll} &{}\displaystyle \iint _{\Theta } (v-\pi _N^{-1,-1}P_Mv)^2 \textrm{d} r \textrm{d} \theta \\ &{}\quad \le 2\displaystyle \iint _{\Theta } (v-P_Mv)^2 \textrm{d} r \textrm{d}\theta +2\displaystyle \iint _{\Theta } (P_Mv-\pi _N^{-1,-1}P_Mv)^2 \textrm{d} r \textrm{d}\theta \\ &{}\quad \le cM^{-2s}\displaystyle \iint _{\Theta } \big (\partial ^s_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{-2m}|P_M v|^2_{H^{m}_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}\\ &{}\quad \le cM^{-2s}\displaystyle \iint _{\Theta } \big (\partial ^s_\theta v\big )^2\textrm{d} r \textrm{d} \theta +cN^{-2m}|v|^2_{H^{m}_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}.\end{array} \end{aligned}$$
(4.6)

Finally, the desired result comes immediately from a combination of (4.4)–(4.6). \(\square \)

Let u and \(u_{MN}\) be the solutions of the problem (3.13) and the scheme (3.22), respectively, and let U and \(U_{MN}\) be the corresponding functions in the xy-plane. We have the following convergence result.

Theorem 4.2

Assume that \(R(\theta )\) satisfies the conditions (3.2) and (3.3), and that the solution u of (3.13) belongs to \({}_{0}\widetilde{H}^1_{p}(\Theta )\cap H^{m,1}(I,L^2(\Lambda ))\cap L^2(I,H^s_p(\Lambda ))\) with integers \(m\ge 1\) and \(s\ge 1\). Then we have

$$\begin{aligned} \begin{aligned} \Vert U-U_{MN}\Vert ^2_{H^1(\Omega )}&=\Vert u-u_{MN}\Vert ^2_{1,p,\Theta }\\ {}&\le cM^{2-2s}\displaystyle \iint _{\Theta } \big ( (\partial _r\partial ^{s-1}_\theta u(r,\theta ))^2+ \frac{1}{r}(\partial ^s_\theta u(r,\theta ))^2\big )\textrm{d} r \textrm{d} \theta \\&\quad +cN^{2-2m}\big (|u|^2_{H^m_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}+|\partial _\theta u|^2_{H^{m-1}_{\omega ^{m-2,m-2}}(I,L^2(\Lambda ))}\big ), \end{aligned} \end{aligned}$$
(4.7)

provided that the norms mentioned above are bounded.

Proof

By (3.13) and (3.22), we have

$$\begin{aligned} \begin{aligned} a(u,v)&=\iint _{\Theta }rR^2(\theta )f(r,\theta )v(r,\theta )\textrm{d}r\textrm{d}\theta ,\quad \forall v\in {}_0 H^1_{p}(\Theta ),\\ a(u_{MN},v)&=\iint _{\Theta }rR^2(\theta )f(r,\theta )v(r,\theta )\textrm{d}r\textrm{d}\theta ,\quad \forall v\in {}_0 X_{M,N}(\Theta ). \end{aligned} \end{aligned}$$
(4.8)

Hence

$$\begin{aligned} a(u-u_{MN},v)=0, \quad \forall v\in {}_0 X_{M,N}(\Theta ). \end{aligned}$$
(4.9)

Next, according to (3.17),

$$\begin{aligned} \begin{aligned} C\Vert u-u_{MN}\Vert ^2_{1,p,\Theta }&\le a(u-u_{MN},u-u_{MN})=a(u-u_{MN},u-{}_0P^1_{M,N}u)\\&\le (1+\mu )\Vert u-u_{MN}\Vert _{1,p,\Theta }\Vert u-{}_0P^1_{M,N}u\Vert _{1,p,\Theta }. \end{aligned} \end{aligned}$$
(4.10)

This means \(\Vert u-u_{MN}\Vert _{1,p,\Theta }\le c\Vert u-{}_0P^1_{M,N}u\Vert _{1,p,\Theta }.\) Moreover, since \(|\frac{\partial _{\theta }R(\theta )}{R(\theta )}|\) is bounded above, we use the Cauchy–Schwarz inequality to get

$$\begin{aligned} \iint _{\Theta }\left( 1+\dfrac{( \partial _\theta R(\theta ))^2}{R^2(\theta )}\right) \big ( \partial _{r} v(r,\theta ) \big )^2 r\textrm{d} r \textrm{d} \theta \le c \iint _{\Theta } \big (\partial _{r}v(r,\theta ) \big )^2 \textrm{d} r \textrm{d} \theta , \end{aligned}$$

and

$$\begin{aligned} \Big |\iint _{\Theta } \frac{\partial _{\theta } R(\theta )}{ R(\theta )} \partial _{\theta }v(r,\theta ) \partial _{r}v(r,\theta ) \textrm{d} r \textrm{d} \theta \Big | \le c \iint _{\Theta } \frac{1}{r}\big (\partial _{\theta }v(r,\theta ) \big )^2 \textrm{d} r \textrm{d} \theta + c\iint _{\Theta } \big ( \partial _{r}v(r,\theta )\big )^2 \textrm{d} r \textrm{d} \theta . \end{aligned}$$

The above two inequalities imply \(\Vert v\Vert ^2_{1,p,\Theta }\le c \Vert v\Vert ^2_{\widetilde{H}^1(\Theta )},\) where c is a positive constant depending only on the bounds of \(R(\theta )\) and \(|\partial _{\theta }R(\theta )/R(\theta )|.\) Therefore, by Theorem 4.1 we obtain

$$\begin{aligned} \begin{aligned} \Vert U-U_{MN}\Vert ^2_{H^1(\Omega )}&=\Vert u-u_{MN}\Vert ^2_{1,p,\Theta }\le c\Vert u-{}_0P^1_{M,N}u\Vert ^2_{1,p,\Theta }\le c\Vert u-{}_0P^1_{M,N}u\Vert ^2_{\tilde{H}^1(\Theta )}\\ {}&\le cM^{2-2s}\displaystyle \iint _{\Theta } \left( (\partial _r\partial ^{s-1}_\theta u(r,\theta ))^2+ \frac{1}{r}(\partial ^s_\theta u(r,\theta ))^2\right) \textrm{d} r \textrm{d} \theta \\&\quad +cN^{2-2m}\left( |u|^2_{H^m_{\omega ^{m-1,m-1}}(I,L^2(\Lambda ))}+|\partial _\theta u|^2_{H^{m-1}_{\omega ^{m-2,m-2}}(I,L^2(\Lambda ))}\right) . \end{aligned} \end{aligned}$$
(4.11)

This ends the proof. \(\square \)

5 Numerical Results

In this section, we present some numerical results for problem (3.11) defined in various convex and concave domains. We first consider some special domains for which the function \(R(\theta )\) has an exact expression, and then consider general domains for which \(R(\theta )\) has no explicit expression. We define the \(L^2\)- and \(L^\infty\)-errors by

$$\begin{aligned} \begin{array}{ll}&{}\Vert U-U_{MN}\Vert _{L^2(\Omega )}=\left( \displaystyle \iint _{\Omega }(U(x,y)-U_{MN}(x,y))^2\textrm{d} x \textrm{d}y\right) ^{\frac{1}{2}},\\ {} &{}\Vert U-U_{MN}\Vert _{L^\infty (\Omega )}=\displaystyle \max _{(x,y)\in \Omega }|U(x,y)-U_{MN}(x,y)|. \end{array} \end{aligned}$$
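By (3.10), both errors can be evaluated directly in the \((r,\theta )\) variables; a sketch (Python; u_exact and u_num are hypothetical callables returning \(u\) and \(u_{MN}\) at given \((r,\theta )\), and the \(L^\infty \)-error is approximated by the maximum over the quadrature grid):

```python
import numpy as np

def errors(u_exact, u_num, R, N_r=64, K=256):
    """Approximate the L^2(Omega)- and L^infty(Omega)-errors via (3.10)."""
    gl_nodes, gl_weights = np.polynomial.legendre.leggauss(N_r)
    r, wr = 0.5 * (gl_nodes + 1.0), 0.5 * gl_weights
    theta, wt = 2.0 * np.pi * np.arange(K) / K, 2.0 * np.pi / K
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    diff = u_exact(rr, tt) - u_num(rr, tt)
    l2 = np.sqrt(np.sum(diff ** 2 * rr * R(tt) ** 2 * wr[:, None]) * wt)
    linf = np.abs(diff).max()
    return l2, linf
```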

5.1 Squircle-Bounded Domains

We consider the simply connected domain bounded by a squircle

$$\begin{aligned} \Gamma :~|x|^p + |y|^p =1,\quad p\ge 2. \end{aligned}$$
(5.1)

A squircle, also known as a Lamé curve or Lamé oval, is a mathematical shape whose properties are intermediate between those of a square and those of a circle; it is a special case of the superellipse. As p increases, the curve becomes more and more like a square with slightly rounded corners, and the limit as \( p \rightarrow \infty \) is a square, as can be seen from Fig. 2.

Fig. 2. Squircles for \(p=2\), \(p=4\) and \(p=30\)

5.1.1 p is Even

When p is even, the squircle (5.1) becomes

$$\begin{aligned} \Gamma :~ x^p + y^p =1, \quad p\ge 2 ~\text {and} ~p ~\text {even}, \end{aligned}$$
(5.2)

from which we readily obtain

$$\begin{aligned} R(\theta ) = \big (\cos ^p \theta + \sin ^p \theta \big )^{-1/p}. \end{aligned}$$
(5.3)

We first take \(p=4\), \(\mu =2.5\) and test the smooth exact solution of the elliptic equation (3.11),

$$\begin{aligned} U(x,y)=\dfrac{1-\big (x^p+y^p\big )^{\frac{1}{p}}}{1+\big (x^p+y^p\big )^{\frac{6}{p}}} \text {exp}\left( {\frac{\big (x^2-y^2\big )\sin \big (3\root p \of {x^p+y^p}\big )}{{x^2+y^2}} +\frac{y\sin \big (6\root p \of {x^p+y^p}\big )}{\sqrt{x^2+y^2}}} \right) .\end{aligned}$$
(5.4)

Clearly

$$\begin{aligned} u(r,\theta ) = \frac{1-r}{1+r^6}\text {exp}\big ({\sin 3r\cos 2\theta +\sin 6r\sin \theta }\big ). \end{aligned}$$
(5.5)
Fig. 3. The domain and the exact solution (5.4)

Fig. 4. \(L^2\)- and \(L^\infty\)-errors of (5.4) versus N with different M

In Fig. 3, we show the domain \(\Omega \) and the exact solution in the xy-plane. In Fig. 4, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with different M. They indicate that the errors decay exponentially as N increases. We see that for fixed N, the scheme with larger M produces better numerical results.

Fig. 5. \(L^2\)- and \(L^\infty\)-errors of (5.6) versus N with different M

We next take \(p=4\), \(\mu =2.5\) and test the exact solution with low regularity,

$$\begin{aligned} U(x,y)&=\left( 1-(x^p+y^p)^{\frac{1}{p}}\right) \left( 1+(x^p+y^p)^{\frac{h}{p}}\right) \nonumber \\&\quad \times \text {exp}\left( {\frac{(x^2-y^2)\sin \left( 3\root p \of {x^p+y^p}\right) }{{x^2+y^2}} +\frac{y\sin \left( 6\root p \of {x^p+y^p}\right) }{\sqrt{x^2+y^2}}} \right) . \end{aligned}$$
(5.6)

Obviously

$$\begin{aligned} u(r,\theta ) = (1-r)\left( 1+r^{h}\right) \text {exp}\big (\sin 3r\cos 2\theta +\sin 6r\sin \theta \big ). \end{aligned}$$

We first take \(h=\frac{9}{2}.\) In Fig. 5, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with different M. They indicate that the errors decay exponentially as N increases. We also see that for fixed N, the scheme with larger M produces better numerical results. In Fig. 6 (left), we plot the exact solution with \(h=\frac{9}{2}\) in the xy-plane. In Fig. 6 (right), we plot the \(L^\infty\)-errors versus N with different h. They indicate that the errors decay exponentially as N increases. We also observe that the errors decrease rapidly as h increases.

Fig. 6. The exact solution with \(h=\frac{9}{2}\) and \(L^\infty\)-errors of (5.6) with \(M=24\)

To further illustrate the efficiency of the suggested approach, we also test the following exact solution,

$$\begin{aligned} U(x,y)=\left( 1-\root p \of {x^p+y^p}\right) \text {exp}\big ({x+y}\big ) = (1-r)\text {exp}\left( {r(\cos \theta +\sin \theta )(\cos ^p\theta +\sin ^p\theta )^{-\frac{1}{p}}}\right) . \end{aligned}$$
(5.7)

Taking \(p=4\) and \(\mu =2.5\), we show the exact solution (5.7) in Fig. 7 (left) and the \(L^\infty\)-errors in Fig. 7 (right). We find that the numerical results again confirm the expected high accuracy.

Fig. 7. The exact solution and \(L^\infty\)-errors of (5.7)

5.1.2 p is Odd

When p is odd, the squircle (5.1) becomes

$$\begin{aligned} \Gamma :~ |x|x^{p-1} + |y|y^{p-1} =1, \quad p\ge 3 ~\text {and} ~p ~\text {odd}. \end{aligned}$$
(5.8)

Accordingly,

$$\begin{aligned} R(\theta )=\left\{ \begin{array}{ll} \big ( \cos ^p \theta + \sin ^p\theta \big )^{-1/p},\qquad &{} \theta \in [0,\frac{\pi }{2}], \\ \big ( \sin ^p\theta -\cos ^p \theta \big )^{-1/p},&{} \theta \in [\frac{\pi }{2},\pi ], \\ \big ( -\cos ^p \theta - \sin ^p\theta \big )^{-1/p}, &{} \theta \in [\pi ,\frac{3\pi }{2}], \\ \big ( \cos ^p \theta - \sin ^p\theta \big )^{-1/p}, &{} \theta \in [\frac{3\pi }{2},2\pi ]. \end{array} \right. \end{aligned}$$
(5.9)
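In an implementation, (5.3) and the piecewise expression (5.9) can both be evaluated by the single formula below, since on every quadrant the squircle (5.1) can be written as \(|x|^p+|y|^p=1\) (a short sketch in Python; the function name is ours):

```python
import numpy as np

def R_squircle(theta, p):
    # Covers (5.3) for even p and the piecewise formula (5.9) for odd p.
    return (np.abs(np.cos(theta)) ** p + np.abs(np.sin(theta)) ** p) ** (-1.0 / p)
```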
Fig. 8. The domain and the exact solution

Fig. 9. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

We take \(p=7,\) \(\mu =2.5\) and test the exact solution (5.5). In Fig. 8, we show the domain \(\Omega \) and the exact solution (5.5) in the xy-plane. In Fig. 9, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with different M. They indicate that the errors decay exponentially as N increases. We see that for fixed N, the scheme with larger M produces better numerical results.

5.2 Butterfly-Bounded Domains

We mainly consider butterflies that have no body, tail or middle wings, but only upper and lower wings. Lui [14] used the spectral domain embedding method to solve an elliptic partial differential equation defined in the simplest butterfly-bounded domain.

Fig. 10. The domain and the exact solution

Let

$$\begin{aligned} R(\theta ) = e^{\cos \theta }-\cos (a \theta ) + b\sin ^5\left( \frac{\theta }{2}\right) ,\quad \theta \in [0,2\pi ]. \end{aligned}$$
(5.10)

In Fig. 10, we show the domain \(\Omega \) and the exact solution (5.5) in the xy-plane, obtained by taking the parameters \(a=4\) and \(b=3\). In Fig. 11, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with \(\mu =2.5\) and different M. They indicate that the errors decay exponentially as N increases. We also see that for fixed N, the scheme with larger M produces better numerical results.

Fig. 11. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

We next consider variants of the butterfly curves. In Figs. 12 and 14, we show the domains and the exact solutions (5.5) in the xy-plane.

Fig. 12. The domain (the parameters \(a=b=4\)) and the exact solution

In Figs. 13 and 15, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with \(\mu =2.5\) and different M for the two cases. It can be seen that exponential spectral accuracy is reached in both cases.

Compared with the butterfly domain considered in [14], the three domains we consider here are more complex. Nevertheless, we can easily carry out the numerical simulations and obtain high-order accuracy.

Fig. 13. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

Fig. 14. The domain (the parameters \(a=12\) and \(b=1.5\)) and the exact solution

Fig. 15. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

5.3 Cardioid-Bounded Domains

We consider the cardioid-bounded domain. Let

$$\begin{aligned} R(\theta ) = a-b\sin ( \theta ),\quad a>b>0. \end{aligned}$$
(5.11)

In Fig. 16, we show the domain \(\Omega \) and the exact solution (5.5) in the xy-plane, obtained by taking the parameters \(a=10\) and \(b=9\). In Fig. 17, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with \(\mu =2.5\) and different M. Clearly, they indicate that the errors decay exponentially as N increases. We see that for fixed N, the scheme with larger M produces better numerical results.

Fig. 16. The domain and the exact solution

Fig. 17. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

5.4 Star-Shaped Domains

We consider the star-shaped domains. Let

$$\begin{aligned} R(\theta ) = a+b\sin (p\theta ). \end{aligned}$$
(5.12)

In Figs. 18 and 20, we show the domains \(\Omega \) and the exact solutions (5.5) in the xy-plane, obtained by taking \(R(\theta )=0.7+0.2\sin ( 3\theta )\) and \(R(\theta )=0.8+0.1\sin ( 5\theta ),\) which were also considered in [9, 14]. In Figs. 19 and 21, we plot the \(L^2\)- and \(L^\infty\)-errors versus N with \(\mu =2.5\) and different M. Clearly, they indicate that the errors decay exponentially as N increases. We see that for fixed N, the scheme with larger M produces better numerical results.

Fig. 18. The domain and the exact solution

Fig. 19. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

Fig. 20. The domain and the exact solution

Fig. 21. \(L^2\)- and \(L^\infty\)-errors of (5.5) versus N with different M

5.5 General Domains

In fact, in many cases, we do not know the explicit expressions or parametric equations of the boundary curves. For such general domains, our method is still applicable. To this end, we need to construct an approximation of \(R(\theta )\) as described below.

We first calculate the discrete values \(\big \{R(\theta _k)\big \}_{k=0}^{2K}\) by \(R(\theta _k)=\sqrt{x_k^2+y_k^2},\) where \(\theta _k=\frac{2k\pi }{2K+1}\) denote the \(2K+1\) Fourier collocation points, and \((x_k,y_k)\) are the Cartesian coordinates of the point on the boundary curve \(\Gamma \) corresponding to \(\theta _k.\) Then we approximate \(R(\theta )\) using trigonometric polynomials, i.e.,

$$\begin{aligned} R(\theta )\simeq \frac{b_0}{2}+\displaystyle \sum _{k=1}^{K} \big (a_k \sin (k\theta ) + b_k \cos (k\theta )\big ), \end{aligned}$$
(5.13)

where

$$\begin{aligned} a_k\simeq \frac{2}{2K+1}\displaystyle \sum ^{2K}_{j=0}R\big (\theta _j\big )\sin \big (k\theta _j\big ), \quad b_k\simeq \frac{2}{2K+1}\displaystyle \sum ^{2K}_{j=0}R\big (\theta _j\big )\cos \big (k\theta _j\big ). \end{aligned}$$

Accordingly, differentiating (5.13) term by term yields an approximation of \(\partial _{\theta }R(\theta ).\)
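A sketch of this fitting step (Python; the helper name fit_R and the returned callables are ours), which implements (5.13) and its term-by-term derivative from the sampled boundary points \((x_k,y_k)\):

```python
import numpy as np

def fit_R(xb, yb):
    """Fit R(theta) by the truncated Fourier series (5.13) from 2K+1 boundary samples.

    xb, yb: coordinates of the boundary points at theta_k = 2 k pi / (2K+1), k = 0..2K.
    Returns callables approximating R(theta) and dR/dtheta.
    """
    n = xb.size                                   # n = 2K + 1
    K = (n - 1) // 2
    theta_k = 2.0 * np.pi * np.arange(n) / n
    Rk = np.hypot(xb, yb)                         # R(theta_k) = sqrt(x_k^2 + y_k^2)
    k = np.arange(1, K + 1)[:, None]
    a = 2.0 / n * (Rk * np.sin(k * theta_k)).sum(axis=1)
    b = 2.0 / n * (Rk * np.cos(k * theta_k)).sum(axis=1)
    b0 = 2.0 / n * Rk.sum()

    def R(theta):
        th = np.atleast_1d(theta)
        return b0 / 2.0 + (a[:, None] * np.sin(k * th)
                           + b[:, None] * np.cos(k * th)).sum(axis=0)

    def dR(theta):
        th = np.atleast_1d(theta)
        return (k * (a[:, None] * np.cos(k * th)
                     - b[:, None] * np.sin(k * th))).sum(axis=0)

    return R, dR
```

For a boundary whose \(R(\theta )\) is itself a trigonometric polynomial (e.g., the star-shaped curves in Sect. 5.4), this fit should reproduce \(R(\theta )\) exactly once \(2K+1\) exceeds twice its highest frequency.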

As examples, we provide four representative domains. Throughout this subsection, we always take the exact solution

$$\begin{aligned} U(x,y)=e^{x+y}, \end{aligned}$$
(5.14)

and use the suggested approach combined with the homogenization technique in Remark 3.1 to solve Eq. (3.11) numerically.

We first consider a smooth quasi-pentagonal domain. To apply our method to this kind of problem, we start by calculating the values of the function \(R(\theta )\) at the points \(\theta _k\) (marked by six-pointed stars in Fig. 22), and then use (5.13) to approximate the function \(R(\theta ).\)

Fig. 22. The domain and the exact solution

Fig. 23. \(L^2\)- and \(L^\infty\)-errors of (5.14) versus M with different N

In Fig. 22, we show the domain \(\Omega \) and the exact solution (5.14) in the xy-plane. In Fig. 23, we plot the \(L^2\)- and \(L^\infty\)-errors versus M with \(\mu =2.5\) and different N. They indicate that the errors decay exponentially as M increases. We also see that for fixed M, the scheme with larger N produces better numerical results.

We next consider the smooth quasi-hexagonal domain. Similarly, we use (5.13) to approximate the function \(R(\theta ).\)

Fig. 24. The domain and the exact solution

Fig. 25. \(L^2\)- and \(L^\infty\)-errors of (5.14) versus M with different N

In Fig. 24, we show the domain \(\Omega \) and the exact solution (5.14) in the xy-plane. In Fig. 25, we plot the \(L^2\)- and \(L^\infty\)-errors versus M with \(\mu =2.5\) and different N. They indicate that the errors decay exponentially as M increases. We also see that for fixed M, the scheme with larger N produces better numerical results.

Fig. 26. The domain and the exact solution

We then consider the teardrop domain, whose boundary curve satisfies the following Cartesian equation:

$$\begin{aligned} \left( x^2+y^2\right) ^2-\frac{5}{6}(x+1)\left( (x+1)^2+y^2\right) +8y^2=0. \end{aligned}$$
(5.15)

In Fig. 26, we show the domain \(\Omega \) and the exact solution (5.14) in the xy-plane. By calculating the values of the function \(R(\theta )\) at the points \(\theta _k\) (marked by six-pointed stars in Fig. 26), we can use the suggested approach to solve Eq. (3.11) numerically. In Fig. 27, we plot the \(L^2\)- and \(L^\infty\)-errors versus M with \(\mu =2.5\) and different N. They indicate that the errors decay exponentially as M increases. We also see that for fixed M, the scheme with larger N produces better numerical results.

Fig. 27. \(L^2\)- and \(L^\infty\)-errors of (5.14) versus M with different N

Fig. 28. The domain and the exact solution

Fig. 29. \(L^2\)- and \(L^\infty\)-errors of (5.14) versus M with different N

We finally consider a more general smooth domain, whose boundary curve has an irregular smooth shape, as shown in Fig. 28 (left). This boundary curve has several irregular protrusions and depressions. Generally speaking, we cannot obtain an explicit Cartesian equation for this irregular curve.

As before, we can approximate the function \(R(\theta )\). In Fig. 28 (right), we plot the exact solution (5.14) in the xy-plane. In Fig. 29, we plot the \(L^2\)- and \(L^\infty\)-errors versus M with \(\mu =2.5\) and different N. Clearly, the numerical errors decay exponentially as M increases. We also see that for fixed M, the scheme with larger N produces better numerical results. These results show that for irregular convex and concave regions, our Fourier–Legendre spectral-Galerkin method remains feasible and highly accurate.

6 Concluding Remarks

We developed in this paper a Fourier–Legendre spectral-Galerkin method for solving two-dimensional elliptic PDEs in complex geometries using a polar coordinate transformation. The method is effective and easy to implement, and it is proved to be well-posed and spectrally accurate, in the sense that the convergence rate increases with the smoothness of the solution. Although we only considered linear elliptic PDEs, the proposed method is also applicable to nonlinear and/or time-dependent problems in two-dimensional complex geometries. In particular, the main ideas and methods of this paper can be extended to three-dimensional settings, and we will report on this work in the near future.