1 Introduction

Legendre approximations are among the most fundamental tools in scientific computing, underpinning Gauss-type quadrature, finite element and spectral methods for the numerical solution of differential equations (see, e.g., [3, 6, 7, 14]). One of their most remarkable advantages is that their accuracy depends solely upon the smoothness of the underlying functions. From both theoretical and applied perspectives, it is therefore of particular importance to study error estimates of various Legendre approximation methods, such as Legendre projection and interpolation.

Over the past century, there has been continuing interest in developing error estimates and error bounds for classical spectral approximations (i.e., Chebyshev, Legendre, Jacobi, Laguerre and Hermite approximations), and many results can be found in monographs on approximation theory, orthogonal polynomials and spectral methods (see, e.g., [3, 4, 8, 11, 14,15,16]). However, the existing results have several typical drawbacks: (i) optimal error estimates of Laguerre and Hermite approximations are far from satisfactory; (ii) error estimates and error bounds for Legendre and, more generally, Gegenbauer and Jacobi approximations of differentiable functions might be suboptimal. Indeed, for the former, little literature is available on optimal error estimates or sharp error bounds for Laguerre and Hermite interpolation. For the latter, even for the very simple function \(f(x)=|x|\), the error bounds in the maximum norm for its Legendre approximation of degree n given in [10, 18, 22] are suboptimal, since they all behave like \(O(n^{-1/2})\) as \(n\rightarrow \infty \), while the actual convergence rate is \(O(n^{-1})\) (see [19, Theorem 3]). In view of these drawbacks, optimal error estimates and sharp error bounds for classical spectral approximations have received renewed interest in recent years. We refer to the monograph [16] and the literature [2, 10, 17,18,19,20,21,22, 25, 26] for more detailed discussions.

Let \(\Omega =[-1,1]\) and let \(P_k(x)\) be the Legendre polynomial of degree k. It is well known that the sequence \(\{P_k(x)\}_{k=0}^{\infty }\) forms a complete orthogonal system on \(\Omega \) and

$$\begin{aligned} \int _{\Omega } P_j(x) P_k(x) \textrm{d}x = \left( k + \frac{1}{2} \right) ^{-1} \delta _{j,k}, \end{aligned}$$
(1.1)

where \(\delta _{j,k}\) is the Kronecker delta. For any \(f\in {L}^2(\Omega )\), its Legendre projection of degree n is defined by

$$\begin{aligned} f_n(x) = \sum _{k=0}^{n} a_k P_k(x), \quad a_k = \left( k + \frac{1}{2} \right) \int _{-1}^{1} f(x) P_k(x) \textrm{d}x. \end{aligned}$$
(1.2)
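
In computations, the coefficients \(a_k\) in (1.2) are typically approximated by quadrature. A minimal Python sketch (assuming NumPy and SciPy; the test function, the degree and the number of quadrature nodes are illustrative choices) that approximates \(a_0,\ldots ,a_n\) by Gauss–Legendre quadrature and evaluates the projection \(f_n\):

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_projection(f, n, quad_nodes=200):
    """Return the coefficients a_0, ..., a_n of (1.2) and a callable for f_n.

    The integrals are approximated by Gauss-Legendre quadrature with
    `quad_nodes` points; increase this number if f is hard to resolve.
    """
    x, w = np.polynomial.legendre.leggauss(quad_nodes)
    fx = f(x)
    k = np.arange(n + 1)
    # a_k = (k + 1/2) * int_{-1}^{1} f(x) P_k(x) dx
    a = (k + 0.5) * np.array([np.sum(w * fx * eval_legendre(j, x)) for j in k])

    def f_n(t):
        t = np.asarray(t, dtype=float)
        return sum(a[j] * eval_legendre(j, t) for j in k)

    return a, f_n

# usage: degree-20 Legendre projection of f(x) = |x|
a, f20 = legendre_projection(np.abs, 20)
print(a[:4], f20(0.3))
```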

In order to analyze the error estimate of Legendre projections in both the \(L^2\) and \(L^{\infty }\) norms, sharp estimates of the Legendre coefficients play an important role in the analysis (see, e.g., [10, 17,18,19, 21, 22, 25]). Indeed, these estimates are useful not only in understanding the convergence rates of Legendre projections but useful also in estimating the degree of the Legendre projection to approximate f(x) within a prescribed accuracy. In recent years, sharp estimates of the Legendre coefficients have experienced rapid development. In the case when f(x) is analytic inside and on the Bernstein ellipse

$$\begin{aligned} \mathcal {E}_{\rho } = \left\{ z\in \mathbb {C} ~\bigg | ~z = \frac{u + u^{-1}}{2}, ~ u=\rho e^{\textrm{i}\theta },~ 0\le \theta <2\pi \right\} , \end{aligned}$$
(1.3)

for some \(\rho >1\), an explicit and sharp bound for the Legendre coefficients was given in [19, Lemma 2]:

$$\begin{aligned} |a_0| \le \frac{D(\rho )}{2}, \qquad |a_k|\le D(\rho )\frac{\sqrt{k}}{\rho ^k}, \quad k\ge 1, \end{aligned}$$
(1.4)

where \(D(\rho )=2M(\rho )L(\mathcal {E}_{\rho })/(\pi \sqrt{\rho ^2-1})\), \(M(\rho )=\max _{z\in \mathcal {E}_{\rho }} |f(z)|\) and \(L(\mathcal {E}_{\rho })\) is the circumference of \(\mathcal {E}_{\rho }\). As a direct consequence, for each \(n\ge 0\), it was proved in [19, Theorem 2] that

$$\begin{aligned} \Vert f-f_n\Vert _{L^{\infty }(\Omega )}&\le \frac{D(\rho )}{\rho ^n} \bigg ( \frac{(n+1)^{1/2}}{\rho -1} + \frac{(n+1)^{-1/2}}{(\rho -1)^2} \bigg ). \end{aligned}$$
(1.5)

Moreover, another direct consequence of (1.4) is the error bound of Legendre projection in the \(L^2\) norm:

$$\begin{aligned} \Vert f-f_n\Vert _{L^{2}(\Omega )}&\le \left( \sum _{k=n+1}^{\infty } \frac{D(\rho )^2}{\rho ^{2k}} \right) ^{1/2} = \frac{D(\rho )}{\rho ^n\sqrt{\rho ^2-1}}. \end{aligned}$$
(1.6)
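
To make the quantities in (1.4)–(1.6) concrete, the sketch below (assuming NumPy and SciPy) estimates \(M(\rho )\) and \(L(\mathcal {E}_{\rho })\) numerically for the illustrative Runge-type function \(f(x)=1/(1+25x^2)\), whose poles at \(\pm i/5\) force \(\rho <(1+\sqrt{26})/5\), and compares the error bound (1.5) with the measured maximum error of \(f_n\).

```python
import numpy as np
from scipy.special import eval_legendre

# Runge-type test function with poles at z = +-i/5: it is analytic inside and on
# E_rho for any rho < (1 + sqrt(26))/5, so we pick rho slightly below that value.
f = lambda z: 1.0 / (1.0 + 25.0 * z**2)
rho = 0.99 * (1 + np.sqrt(26)) / 5

theta = np.linspace(0.0, 2.0 * np.pi, 40001)
u = rho * np.exp(1j * theta)
z = (u + 1.0 / u) / 2                          # points on the Bernstein ellipse (1.3)
M = np.max(np.abs(f(z)))                       # M(rho)
L = np.sum(np.abs(np.diff(z)))                 # circumference L(E_rho) via chord lengths
D = 2 * M * L / (np.pi * np.sqrt(rho**2 - 1))  # D(rho)

# Legendre coefficients by Gauss-Legendre quadrature, then the error and bound (1.5).
n = 30
xq, wq = np.polynomial.legendre.leggauss(400)
a = [(k + 0.5) * np.sum(wq * f(xq) * eval_legendre(k, xq)) for k in range(n + 1)]
xx = np.linspace(-1.0, 1.0, 2001)
fn = sum(a[k] * eval_legendre(k, xx) for k in range(n + 1))
err = np.max(np.abs(f(xx) - fn))
bound = D / rho**n * (np.sqrt(n + 1) / (rho - 1) + 1 / (np.sqrt(n + 1) * (rho - 1) ** 2))
print(f"max error = {err:.2e}, bound (1.5) = {bound:.2e}")
```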

In the case when f(x) is differentiable but not analytic on the interval \(\Omega \), upper bounds for the Legendre coefficients were extensively studied in [10, 18, 22, 25]. However, the bounds in [18, 22] depend on certain semi-norms of high-order derivatives of f(x), which may be overestimated, especially when the singularity is close to the endpoints, and the bounds in [10, 25] involve ratios of gamma functions, which are less favorable since their asymptotic behavior and computation still require further treatment.

In this paper, we give a new perspective on error bounds of Legendre approximations for differentiable functions. We start by introducing the Legendre–Gauss–Lobatto (LGL) polynomials

$$\begin{aligned} \phi _k^{\textrm{LGL}}(x) = \left\{ \begin{array}{ll} P_{k+1}(x) - P_{k-1}(x), &{} k\ge 1, \\ P_1(x), &{} k=0, \end{array} \right. \end{aligned}$$
(1.7)

and then prove theoretical properties of these polynomials, including a differential recurrence relation and an optimal and explicit bound for their maximum value on \(\Omega \). Based on LGL polynomials, we obtain a new explicit and sharp bound for the Legendre coefficients of differentiable functions, which is sharper than the result in [18] and more informative than the results in [10, 25]. Building on this new bound, we then establish an explicit and optimal error bound for Legendre projections in the \(L^2\) norm, as well as an explicit and optimal error bound in the \(L^\infty \) norm whenever the maximum error of \(f_n\) is attained in the interior of \(\Omega \). We emphasize that, in contrast to the results in [10, 25], which involve ratios of gamma functions, our results are more explicit and informative.

This paper is organized as follows. In Sect. 2 we prove some theoretical properties of the LGL polynomials. In Sect. 3 we establish a new bound for the Legendre coefficients of differentiable functions, which improves the existing result in [18]. Building on this bound, we establish some new error bounds of Legendre projections in both \(L^2\) and \(L^{\infty }\) norms. In Sect. 4 we present two extensions, including the extension of LGL polynomials to Gegenbauer–Gauss–Lobatto functions and optimal convergence rates of LGL interpolation and differentiation for analytic functions. Finally, we give some concluding remarks in Sect. 5.

2 Properties of Legendre–Gauss–Lobatto Polynomials

In this section, we establish some theoretical properties of LGL polynomials \(\{\phi _k^{\textrm{LGL}}\}\). Our main result is stated in the following theorem.

Theorem 2.1

Let \(\phi _k^{\textrm{LGL}}(x)\) be defined in (1.7). Then, the following properties hold:

  1. (i)

    \(|\phi _k^{\textrm{LGL}}(x)|\) is even for \(k=0,1,\ldots \) and \(\phi _k^{\textrm{LGL}}(\pm 1)=0\) for \(k=1,2,\ldots \).

  2. (ii)

    The following differential recurrence relation holds

    $$\begin{aligned} \phi _k^{\textrm{LGL}}(x) = \frac{\textrm{d}}{\textrm{d}x} \left( \frac{\phi _{k+1}^{\textrm{LGL}}(x)}{2k+3} - \frac{\phi _{k-1}^{\textrm{LGL}}(x)}{2k-1} \right) , \quad k=1,2,\ldots . \end{aligned}$$
    (2.1)
  3. (iii)

    Let \(\nu =\lfloor (n+1)/2 \rfloor \) and let \(x_{\nu }<\cdots <x_1\) be the zeros of \(P_n(x)\) on the interval [0, 1]. Then, \(|\phi _n^{\textrm{LGL}}(x)|\) attains its local maximum values at these points and

    $$\begin{aligned} |\phi _n^{\textrm{LGL}}(x_1)|<|\phi _n^{\textrm{LGL}}(x_2)|<\cdots <|\phi _n^{\textrm{LGL}}(x_{\nu })|. \end{aligned}$$
    (2.2)
  4. (iv)

    The maximum value of \(|\phi _{n}^{\textrm{LGL}}(x)|\) satisfies

    $$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)| \le \frac{4}{\sqrt{2\pi n}}, \quad n=1,2,\ldots , \end{aligned}$$
    (2.3)

and the bound on the right-hand side is optimal in the sense that it cannot be improved further.

Proof

As for (i), both assertions follow from the properties of Legendre polynomials, i.e., \(P_k(-x)=(-1)^kP_k(x)\) and \(P_k(\pm 1)=(\pm 1)^k\) for \(k=0,1,\ldots \). As for (ii), we obtain from [15, Equation (4.7.29)] that

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x} \phi _k^{\textrm{LGL}}(x) = (2k+1) P_k(x), \quad k=0,1,\ldots . \end{aligned}$$
(2.4)

The identity (2.1) follows immediately from (2.4). As for (iii), due to (2.4) we know that \(|\phi _n^{\textrm{LGL}}(x)|\) attains its local maximum values at the zeros of \(P_n(x)\). Since \(|\phi _n^{\textrm{LGL}}(x)|\) is an even function on \(\Omega \), we only need to consider the maximum values of \(|\phi _n^{\textrm{LGL}}(x)|\) at the nonnegative zeros of \(P_n(x)\). To this end, we introduce an auxiliary function

$$\begin{aligned} \psi (x) = \frac{n(n+1)}{(2n+1)^2}(\phi _n^{\textrm{LGL}}(x))^2 + (1-x^2)P_n^2(x). \end{aligned}$$
(2.5)

Direct calculation of the derivative of \(\psi (x)\) by using (2.4) gives

$$\begin{aligned} \psi {'}(x)&= \frac{2n(n+1)}{2n+1}\phi _n^{\textrm{LGL}}(x)P_n(x) - 2xP_n^2(x) + 2(1-x^2)P_n(x)P_n{'}(x) \nonumber \\&= 2P_n(x) \left[ \frac{n(n+1)}{2n+1}\phi _n^{\textrm{LGL}}(x) - xP_n(x) + (1-x^2)P_n{'}(x) \right] \nonumber \\&= -2x P_n^2(x), \end{aligned}$$
(2.6)

where we have used the first equality of [15, Equation (4.7.27)] in the last step. From (2.6) it is easy to see that \(\psi {'}(x)<0\) whenever \(x\in [0,1]\) and \(x\ne x_j\) for \(j=1,\ldots ,\nu \), and thus \(\psi (x)\) is strictly decreasing on the interval [0, 1]. Combining this with (2.5) gives the desired result (2.2). As for (iv), it follows from (2.2) that

$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)| = |\phi _n^{\textrm{LGL}}(x_{\nu })|. \end{aligned}$$
(2.7)

Now, we first consider the case where n is odd. In this case, we know that \(x_{\nu }=0\) and thus

$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)|=|\phi _n^{\textrm{LGL}}(0)| = \frac{(2n+1)\Gamma (\frac{n}{2})}{\sqrt{\pi }(n+1)\Gamma (\frac{n+1}{2})} := \chi _n. \end{aligned}$$

A straightforward calculation shows that the sequence \(\{ n^{1/2}\chi _n\}_{n=1}^{\infty }\) is a strictly increasing sequence, and thus

$$\begin{aligned} n^{1/2}\chi _n \le \lim _{n\rightarrow \infty } n^{1/2} \chi _n = \frac{4}{\sqrt{2\pi }} \quad \Longrightarrow \quad \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)| \le \frac{4}{\sqrt{2\pi n}}. \end{aligned}$$

This proves the case of odd n. We next consider the case where n is even. Recalling that \(\psi (x)\) is strictly decreasing on the interval [0, 1], we obtain that

$$\begin{aligned} \psi (x) \le \psi (0) \quad \Longrightarrow \quad \max _{x\in \Omega }|\phi _n^{\textrm{LGL}}(x)| \le \frac{(2n+1)\Gamma (\frac{n+1}{2})}{\sqrt{\pi n(n+1)}\Gamma (\frac{n+2}{2})} := \zeta _n. \end{aligned}$$

A straightforward calculation shows that the sequence \(\{n^{1/2}\zeta _n\}_{n=1}^{\infty }\) is a strictly increasing sequence, and thus

$$\begin{aligned} n^{1/2} \zeta _n \le \lim _{n\rightarrow \infty } n^{1/2} \zeta _n = \frac{4}{\sqrt{2\pi }} \quad \Longrightarrow \quad \max _{x\in \Omega }|\phi _n^{\textrm{LGL}}(x)| \le \frac{4}{\sqrt{2\pi n}}. \end{aligned}$$

This proves the case of even n. Since the constant \(4/\sqrt{2\pi }\) is the limit of the sequences \(\{ n^{1/2}\chi _n\}_{n=1}^{\infty }\) and \(\{n^{1/2}\zeta _n\}_{n=1}^{\infty }\), the bound in (2.3) is optimal in the sense that it cannot be reduced further. This completes the proof. \(\square \)

Fig. 1. The plot of \(|\phi _n^{\textrm{LGL}}(x)|\) and the points \(\{(x_j,|\phi _n^{\textrm{LGL}}(x_j)|)\}_{j=1}^{\nu }\) for three values of n.

In Fig. 1 we plot the function \(|\phi _n^{\textrm{LGL}}(x)|\) and the points \(\{(x_j,|\phi _n^{\textrm{LGL}}(x_j)|)\}_{j=1}^{\nu }\) for \(n=5,10,15\). It is easily seen that the sequence \(\{|\phi _n^{\textrm{LGL}}(x_1)|,\ldots ,|\phi _n^{\textrm{LGL}}(x_{\nu })|\}\) is strictly increasing, which coincides with the result (iii) in Theorem 2.1. To verify the sharpness of the result (iv) in Theorem 2.1, we plot in Fig. 2 the scaled function \(\phi _n^{S}(x)=|\phi _{n}^{\textrm{LGL}}(x)|\sqrt{2\pi n}/4\) for \(x\in \Omega \). Clearly, we observe that the maximum value of \(\phi _n^{S}(x)\) approaches one as \(n\rightarrow \infty \), which implies that the inequality (2.3) is quite sharp.

Fig. 2. The plot of the scaled function \(\phi _n^{S}(x)\) for three values of n.
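
The sharpness of (2.3) can also be checked directly, in the spirit of Fig. 2: the sketch below (assuming NumPy; the grid size and the tested degrees are illustrative choices) evaluates \(\max _{x\in \Omega }|\phi _n^{\textrm{LGL}}(x)|\) on a fine grid and compares it with \(4/\sqrt{2\pi n}\).

```python
import numpy as np
from numpy.polynomial.legendre import legval

def phi_lgl(n, x):
    """phi_n^LGL(x) = P_{n+1}(x) - P_{n-1}(x), evaluated stably by the Clenshaw
    recurrence underlying legval."""
    c = np.zeros(n + 2)
    c[n + 1], c[n - 1] = 1.0, -1.0
    return legval(x, c)

x = np.linspace(-1.0, 1.0, 100001)             # fine grid for the maximum
for n in (5, 10, 15, 100, 1000):
    ratio = np.max(np.abs(phi_lgl(n, x))) / (4.0 / np.sqrt(2.0 * np.pi * n))
    print(n, ratio)                            # stays below 1 and approaches 1 as n grows
```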

Remark 2.2

In the proof of Theorem 2.1, we have actually proved the following sharper inequality

$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)| \le \frac{(2n+1)}{\sqrt{\pi }} \left\{ \begin{array}{ll} {\displaystyle \frac{\Gamma (\frac{n}{2})}{(n+1)\Gamma (\frac{n+1}{2})}}, &{} {n=1,3,\ldots ,} \\ {\displaystyle \frac{\Gamma (\frac{n+1}{2})}{\sqrt{n(n+1)}\Gamma (\frac{n+2}{2})}}, &{} {n=2,4,\ldots ,} \end{array} \right. \end{aligned}$$
(2.8)

Since the bound on the right-hand side of (2.8) involves ratios of gamma functions, we have also established the simpler upper bound (2.3) for \(\max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)|\). On the other hand, we mention that a rough estimate for \(\phi _n^{\textrm{LGL}}(x)\) has been given in the classical monograph [15, Theorem 7.33.3]:

$$\begin{aligned} \phi _n^{\textrm{LGL}}(\cos \theta ) = (\sin \theta )^{1/2} O(n^{-1/2}), \quad 0<\theta <\pi , \end{aligned}$$
(2.9)

and the bound of the factor \(O(n^{-1/2})\) is independent of \(\theta \). Comparing (2.9) with (2.3), it is clear that the latter is more precise than the former.

Remark 2.3

The polynomials \(\{\phi _k^{\textrm{LGL}}\}\) can also be viewed as weighted Gegenbauer or Sobolev orthogonal polynomials. Indeed, from (4.2) in Sect. 4 we know that each \(\phi _k^{\textrm{LGL}}(x)\), where \(k\ge 1\), can be expressed by a weighted Gegenbauer polynomial up to a constant factor. On the other hand, it is easily verified that \(\{\phi _k^{\textrm{LGL}}\}\) are orthogonal with respect to the Sobolev inner product of the form

$$\begin{aligned} \langle f,g \rangle = [f(1)+f(-1)][g(-1)+g(1)] + \int _{\Omega } f{'}(x) g{'}(x)\textrm{d}x, \end{aligned}$$

and hence they can also be viewed as Sobolev orthogonal polynomials.

3 A New Explicit Bound for Legendre Coefficients of Differentiable Functions

In this section, we establish a new explicit bound for the Legendre coefficients of differentiable functions with the help of the properties of LGL polynomials. As will be shown later, our new result is better than the existing result in [18] and is more informative than the result in [25].

Let the total variation of f(x) on \(\Omega \) be defined by

$$\begin{aligned} \mathcal {V}(f,\Omega ) = \sup \sum _{j=1}^{n} \left| f(x_j)-f(x_{j-1}) \right| , \end{aligned}$$
(3.1)

where the supremum is taken over all partitions \(-1=x_0<x_1<\cdots <x_n=1\) of the interval \(\Omega \).
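
For piecewise-smooth functions the supremum in (3.1) is well approximated by the variation over a fine uniform partition. A minimal Python sketch (assuming NumPy; the partition size and the test function are illustrative choices):

```python
import numpy as np

def total_variation(f, a=-1.0, b=1.0, m=200001):
    """Approximate V(f, [a, b]) in (3.1) by the variation over a fine uniform
    partition; for piecewise-smooth f this converges as the partition is refined."""
    x = np.linspace(a, b, m)
    return np.sum(np.abs(np.diff(f(x))))

# V(f', Omega) for f(x) = |x - 0.3|, i.e. the quantity V_1 = 2 used in Example 3.3
print(total_variation(lambda x: np.sign(x - 0.3)))
```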

Theorem 3.1

Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some nonnegative integer m, i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then, for each \(n\ge m+1\),

$$\begin{aligned} |a_n|&\le \frac{2V_m}{\sqrt{2\pi (n-m)}} \prod _{k=1}^{m} \frac{1}{n-k+1/2}, \end{aligned}$$
(3.2)

where the product is assumed to be one whenever \(m=0\).

Proof

We prove the inequality (3.2) by induction on m. Invoking (2.4) and using integration by parts, we obtain

$$\begin{aligned} a_n&= \frac{2n+1}{2}\int _{-1}^{1} f(x) P_n(x) \textrm{d}x = \frac{1}{2} \int _{-1}^{1} f(x) \frac{\textrm{d}}{\textrm{d}x} \phi _n^{\textrm{LGL}}(x) \textrm{d}x \\&= - \frac{1}{2}\int _{-1}^{1} \phi _n^{\textrm{LGL}}(x) \textrm{d}f(x), \end{aligned}$$

where we have used Theorem 2.1 in the last equation and the integral is a Riemann–Stieltjes integral (see, e.g., [13, Chapter 12]). Furthermore, taking advantage of the standard inequality for Riemann–Stieltjes integrals (see, e.g., [13, Theorem 12.15]) and making use of Theorem 2.1, we have

$$\begin{aligned} |a_n|&\le \frac{1}{2}\left| \int _{-1}^{1}\phi _n^{\textrm{LGL}}(x) \textrm{d}f(x) \right| \le \frac{V_0}{2} \max _{x\in \Omega }|\phi _n^{\textrm{LGL}}(x)| \le \frac{2V_0}{\sqrt{2\pi n}}. \end{aligned}$$

This proves the case \(m=0\). In the case of \(m=1\), making use of (2.1) and employing integration by parts once again, we obtain

$$\begin{aligned} a_n&= - \frac{1}{2} \int _{-1}^{1} f{'}(x) \frac{\textrm{d}}{\textrm{d}x}\bigg [ \frac{\phi _{n+1}^{\textrm{LGL}}(x)}{2n+3} - \frac{\phi _{n-1}^{\textrm{LGL}}(x)}{2n-1} \bigg ] \textrm{d}x \\&= -\frac{1}{2} \int _{-1}^{1} \bigg [ \frac{\phi _{n-1}^{\textrm{LGL}}(x)}{2n-1} - \frac{\phi _{n+1}^{\textrm{LGL}}(x)}{2n+3} \bigg ] \textrm{d}f{'}(x), \end{aligned}$$

from which, using again the inequality for Riemann–Stieltjes integrals and (2.3), we find that

$$\begin{aligned} |a_n|&\le \frac{V_1}{2n-1}\max _{x\in \Omega }|\phi _{n-1}^{\textrm{LGL}}(x)| \le \frac{2V_1}{\sqrt{2\pi (n-1)}(n-1/2)}. \end{aligned}$$

This proves the case \(m=1\). In the case of \(m=2\), we can continue the above process to obtain

$$\begin{aligned} a_n = -\frac{1}{2} \int _{-1}^{1}&\bigg [ \frac{\phi _{n-2}^{\textrm{LGL}}(x)}{(2n-3)(2n-1)} - \frac{\phi _{n}^{\textrm{LGL}}(x)}{(2n-1)(2n+1)} \\&~ - \frac{\phi _{n}^{\textrm{LGL}}(x)}{(2n+1)(2n+3)} + \frac{\phi _{n+2}^{\textrm{LGL}}(x)}{(2n+3)(2n+5)} \bigg ] \textrm{d}f{''}(x), \end{aligned}$$

from which we infer that

$$\begin{aligned} |a_n| \le \frac{2V_2}{(2n-3)(2n-1)} \max _{x\in \Omega }|\phi _{n-2}^{\textrm{LGL}}(x)| \le \frac{2V_2}{\sqrt{2\pi (n-2)}(n-1/2)(n-3/2)}. \end{aligned}$$

This proves the case \(m=2\). For \(m\ge 3\), repeated application of (2.1) and integration by parts in the same manner expresses \(a_n\) as a Riemann–Stieltjes integral against \(\textrm{d}f^{(m)}(x)\) whose integrand is a linear combination of LGL polynomials, and bounding this integrand as above gives the desired result (3.2). This ends the proof. \(\square \)

Remark 3.2

An explicit bound for the Legendre coefficients has been established in [18, Theorem 2.2]

$$\begin{aligned} |a_n| \le \frac{2\overline{V}_m}{\sqrt{\pi (2n-2m-1)}} \prod _{k=1}^{m} \frac{1}{n-k+1/2}, \end{aligned}$$
(3.3)

where \(\overline{V}_m=\Vert f^{(m)}\Vert _{S}\) and \(\Vert \cdot \Vert _{S}\) is the weighted semi-norm defined by \(\Vert f\Vert _{S}=\int _{\Omega } (1-x^2)^{-1/4}|f{'}(x)|\textrm{d}x\). Comparing (3.2) and (3.3), it is easily seen that the weighted semi-norm of \(f^{(m)}(x)\) is replaced by the total variation of \(f^{(m)}(x)\), and (3.2) is sharper than (3.3) since \(V_m\le \overline{V}_m\). Moreover, the following bound was given in [25, Corollary 1]

$$\begin{aligned} |a_n| \le \frac{V_m}{2^{m}\sqrt{\pi }} \frac{(n+1/2) \Gamma ((n-m)/2)}{(n+m+1) \Gamma ((n+m+1)/2)}, \quad n\ge m+1. \end{aligned}$$
(3.4)

Comparing (3.2) with (3.4), one can easily check that both results are about equally accurate in the sense that their ratio tends to one as \(n\rightarrow \infty \). However, our result (3.2) is more explicit and informative.

Example 3.3

We consider the following example

$$\begin{aligned} f(x)=|x-\theta |, \quad \theta \in (-1,1). \end{aligned}$$
(3.5)

It is clear that this function is absolutely continuous on \(\Omega \) and its derivative is of bounded variation on \(\Omega \). Moreover, direct calculation gives \(V_1=2\), and thus

$$\begin{aligned} |a_n| \le \frac{4}{\sqrt{2\pi (n-1)}(n-1/2)} := \textrm{B}^{\textrm{New}}(n), \quad n=2,3,\ldots . \end{aligned}$$

The upper bound in (3.3) can be written as

$$\begin{aligned} |a_n| \le \frac{4(1-\theta ^2)^{-1/4}}{\sqrt{\pi (2n-3)} (n-1/2)} := \textrm{B}^{\textrm{Old}}(n), \quad n=2,3,\ldots . \end{aligned}$$

Comparing \(\textrm{B}^{\textrm{New}}(n)\) with \(\textrm{B}^{\textrm{Old}}(n)\), it is easily verified that the former is always better than the latter for all \(n\ge 2\) and \(\theta \in (-1,1)\); in particular, the former remains bounded while the latter blows up as \(\theta \rightarrow \pm 1\). Figure 3 shows the exact Legendre coefficients and the bound \(\textrm{B}^{\textrm{New}}(n)\) for three values of \(\theta \). It can be seen that \(\textrm{B}^{\textrm{New}}(n)\) is quite sharp whenever \(\theta \) is not close to the endpoints and only slightly overestimates the coefficients as \(\theta \rightarrow \pm 1\).
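
The comparison in this example can be reproduced numerically. In the sketch below (assuming NumPy; the value \(\theta =0.6\) and the tested degrees are illustrative choices) the coefficient integral is split at the kink \(x=\theta \), so that Gauss–Legendre quadrature on each piece is exact and the computed \(|a_n|\) can be compared with \(\textrm{B}^{\textrm{New}}(n)\) and \(\textrm{B}^{\textrm{Old}}(n)\).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

theta = 0.6
xg, wg = leggauss(600)

def coeff(n):
    """a_n for f(x) = |x - theta|: split the integral at the kink so that each
    piece is a polynomial and Gauss-Legendre quadrature is exact."""
    total = 0.0
    for lo, hi in ((-1.0, theta), (theta, 1.0)):
        x = 0.5 * (hi - lo) * xg + 0.5 * (hi + lo)
        P = legvander(x, n)[:, n]              # values of P_n at the mapped nodes
        total += 0.5 * (hi - lo) * np.sum(wg * np.abs(x - theta) * P)
    return (n + 0.5) * total

for n in (4, 16, 64, 256):
    b_new = 4 / (np.sqrt(2 * np.pi * (n - 1)) * (n - 0.5))     # uses V_1 = 2
    b_old = 4 * (1 - theta**2) ** (-0.25) / (np.sqrt(np.pi * (2 * n - 3)) * (n - 0.5))
    print(n, abs(coeff(n)), b_new, b_old)      # |a_n| <= B^New(n) <= B^Old(n)
```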

Fig. 3. The exact Legendre coefficients (circles) and the bound \(\textrm{B}^{\textrm{New}}(n)\) (line) for \(\theta =0.3\) (left), \(\theta =0.6\) (middle) and \(\theta =0.9\) (right). Here f(x) is defined in (3.5).

Example 3.4

We consider the truncated power function

$$\begin{aligned} f(x) = (x - \theta )_{+}^{2} = \left\{ \begin{array}{ll} (x-\theta )^2, &{} x\ge \theta , \\ 0, &{} x<\theta , \end{array} \right. \end{aligned}$$
(3.6)

where \(\theta \in (-1,1)\). It is clear that f and \(f{'}\) are absolutely continuous on \(\Omega \) and \(f{''}\) is of bounded variation on \(\Omega \). Moreover, direct calculation gives \(V_2=2\), and thus

$$\begin{aligned} |a_n| \le \frac{4}{\sqrt{2\pi (n-2)}(n-1/2)(n-3/2)}, \quad n=3,4,\ldots . \end{aligned}$$

In Fig. 4 we illustrate the exact Legendre coefficients and the above bound for three values of \(\theta \). Clearly, we can see that our new bound is quite sharp whenever \(\theta \) is not close to the endpoints and only slightly overestimates the coefficients as \(\theta \rightarrow \pm 1\).

Fig. 4. The exact Legendre coefficients (circles) and the bound (line) for \(\theta =0.2\) (left), \(\theta =0.4\) (middle) and \(\theta =0.8\) (right). Here f(x) is defined in (3.6).

In the following, we apply Theorem 3.1 to establish some new explicit error bounds for Legendre projections in the \(L^2\) and \(L^{\infty }\) norms.

Theorem 3.5

Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some nonnegative integer m, i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then the following hold:

  1. (i)

    For \(m=0,1,\ldots \) and \(n\ge m+1\),

    $$\begin{aligned} \Vert f - f_n\Vert _{L^2(\Omega )} \le \frac{V_m}{\sqrt{\pi (m+1/2)}(n-m)^{m+1/2}}, \end{aligned}$$
    (3.7)
  2. (ii)

    For \(m\ge 1\) and \(n\ge m+1\),

    $$\begin{aligned} \Vert f - f_n\Vert _{L^{\infty }(\Omega )} \le \left\{ \begin{array}{ll} {\displaystyle \frac{4V_1}{\sqrt{2\pi (n-1)}}}, &{} m=1, \\ {\displaystyle \frac{2V_m/(m-1)}{\sqrt{2\pi (n+1-m)}}\prod _{k=1}^{m-1} \frac{1}{n-k+1/2}}, &{} {m\ge 2,} \end{array} \right. \end{aligned}$$
    (3.8)

    and the product is assumed to be one whenever \(m=1\).

Proof

As for (3.7), recalling the orthogonal property of Legendre polynomials (1.1), we find that

$$\begin{aligned} \Vert f - f_n\Vert _{L^2(\Omega )}^2&= \sum _{k=n+1}^{\infty } (a_k)^2 \left( k + \frac{1}{2} \right) ^{-1}. \end{aligned}$$

Furthermore, using Theorem 3.1 we obtain

$$\begin{aligned} \Vert f - f_n\Vert _{L^2(\Omega )}^2 \le \frac{2V_m^2}{\pi } \sum _{k=n+1}^{\infty } \frac{1}{(k-m)^{2m+2}}&\le \frac{2V_m^2}{\pi } \int _{n}^{\infty } \frac{\textrm{d}x}{(x-m)^{2m+2}} \\&= \frac{V_m^2}{\pi (m+1/2)(n-m)^{2m+1}}. \end{aligned}$$

Taking the square root on both sides gives (3.7). As for (3.8), it follows by combining (3.2) in Theorem 3.1 with the fact that \(|P_k(x)|\le 1\). This ends the proof. \(\square \)

Fig. 5. The error \(\Vert f-f_n\Vert _{L^2(\Omega )}\) (circles) and the error bound (line) for \(f(x)=|x-0.5|\) (left) and \(f(x)=(x-0.5)_{+}^2\) (right).

In Fig. 5 we show the actual error of \(f_n\) and the error bound (3.7) in the \(L^2\) norm as a function of n for two test functions. Clearly, we can see that the error bound (3.7) is optimal up to a constant factor.
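
The experiment behind Fig. 5 can be reproduced via Parseval's identity: once the Legendre coefficients are available, the squared \(L^2\) error of \(f_n\) is the tail sum \(\sum _{k>n}a_k^2/(k+1/2)\). A sketch for \(f(x)=(x-0.5)_{+}^2\) (assuming NumPy; the tail truncation K and the tested degrees are illustrative choices):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

theta, m, V_m = 0.5, 2, 2.0          # f(x) = (x - 0.5)_+^2, so V(f'') = 2
K = 1500                             # truncation of the Parseval tail

# Coefficients a_0, ..., a_K: f vanishes on [-1, theta], so it suffices to integrate
# over [theta, 1], where the integrand is a polynomial and the quadrature is exact.
xg, wg = leggauss(800)
x = 0.5 * (1 - theta) * xg + 0.5 * (1 + theta)
P = legvander(x, K)                  # P[i, k] = P_k(x_i)
a = (np.arange(K + 1) + 0.5) * (0.5 * (1 - theta)) * (P.T @ (wg * (x - theta) ** 2))

terms = a**2 / (np.arange(K + 1) + 0.5)
for n in (8, 32, 128):
    err_L2 = np.sqrt(np.sum(terms[n + 1:]))    # tail sum, truncated at K
    bound = V_m / (np.sqrt(np.pi * (m + 0.5)) * (n - m) ** (m + 0.5))
    print(n, err_L2, bound)                    # the bound (3.7) is optimal up to a constant
```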

Remark 3.6

Recently, the following error bound was proved in [10, Theorem 3.4]

$$\begin{aligned} \Vert f - f_n\Vert _{L^2(\Omega )} \le \frac{V_m}{\sqrt{\pi (m+1/2)}} \sqrt{\frac{\Gamma (n-m)}{\Gamma (n+m+1)}}, \quad n\ge m+1. \end{aligned}$$
(3.9)

Comparing (3.7) with (3.9), it is easily verified that their ratio tends to one as \(n\rightarrow \infty \), and thus both bounds are almost identical for large n. When n is small, direct calculations show that (3.9) is slightly sharper than (3.7). On the other hand, it is easily seen that our bound (3.7) is more explicit and informative.

Remark 3.7

In the case where f is piecewise analytic on \(\Omega \) and has continuous derivatives up to order \(m-1\) for some \(m\in \mathbb {N}\), the current author has proved in [19, Theorem 3] that the optimal rate of convergence of \(f_n\) is \(O(n^{-m})\). For such functions, however, the rate of convergence predicted by the error bound (3.8) is only \(O(n^{-m+1/2})\). Hence, the error bound (3.8) is suboptimal in the sense that it overestimates the actual error by a factor of \(n^{1/2}\).

We further ask: how can we derive an optimal error bound for \(f_n\) in the \(L^{\infty }\) norm? In the following, we shall establish a weighted inequality for the error of \(f_n\) in the \(L^{\infty }\) norm and an explicit and optimal error bound for \(f_n\) in the \(L^{\infty }\) norm under the condition that the maximum error of \(f_n\) is attained in the interior of \(\Omega \).

Theorem 3.8

Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some \(m\in \mathbb {N}\), i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then the following hold:

  1. (i)

    For \(n\ge m\), we have

    $$\begin{aligned} \max _{x\in \Omega } (1-x^2)^{1/4}|f(x) - f_n(x)| \le \frac{2V_m}{m\pi } \prod _{j=1}^{m} \frac{1}{n-j+1/2}. \end{aligned}$$
    (3.10)
  2. (ii)

If the maximum error of \(f_n\) is attained at \(\tau \in (-1,1)\) for all \(n\ge n_0\), then, for \(n\ge \max \{n_0,m\}\), we have

    $$\begin{aligned} \Vert f - f_n\Vert _{L^{\infty }(\Omega )} \le \frac{2V_m}{\pi m(1-\tau ^2)^{1/4}} \prod _{j=1}^{m} \frac{1}{n-j+1/2}. \end{aligned}$$
    (3.11)

Proof

We first consider the proof of (3.10). Recall the Bernstein-type inequality satisfied by Legendre polynomials (see [1] or [12, Equation (18.14.7)])

$$\begin{aligned} (1-x^2)^{1/4} |P_n(x)| < \sqrt{\frac{2}{\pi }} \left( n+\frac{1}{2}\right) ^{-1/2}, \quad x\in \Omega , \end{aligned}$$
(3.12)

and the bound on the right-hand side is optimal in the sense that \((n+1/2)^{-1/2}\) cannot be improved to \((n+1/2+\epsilon )^{-1/2}\) for any \(\epsilon >0\) and the constant \((2/\pi )^{1/2}\) is the best possible. Consequently, using the above inequality and Theorem 3.1, we deduce that

$$\begin{aligned} (1-x^2)^{1/4} |f(x) - f_n(x)|&\le \sqrt{\frac{2}{\pi }} \sum _{k=n+1}^{\infty } |a_k| \left( k+\frac{1}{2}\right) ^{-1/2} \nonumber \\&\le \frac{2V_m}{\pi } \sum _{k=n+1}^{\infty } \frac{1}{\sqrt{(k-m)(k+1/2)}} \prod _{j=1}^{m} \frac{1}{k-j+1/2} \nonumber \\&\le \frac{2V_m}{m\pi } \sum _{k=n+1}^{\infty } \left[ \prod _{j=1}^{m} \frac{1}{k-j-1/2} - \prod _{j=1}^{m} \frac{1}{k-j+1/2} \right] \nonumber \\&= \frac{2V_m}{m\pi } \prod _{j=1}^{m} \frac{1}{n-j+1/2}. \end{aligned}$$
(3.13)

This proves (3.10). As for (3.11), noting that the maximum error of \(f_n\) is attained at \(x=\tau \), we have

$$\begin{aligned} \Vert f - f_n\Vert _{L^{\infty }(\Omega )} = |f(\tau ) - f_n(\tau )| \le \sum _{k=n+1}^{\infty } |a_k| |P_k(\tau )|. \end{aligned}$$

Combining the last inequality with Theorem 3.1 and (3.12) and using a similar process as in (3.13) gives (3.11). This ends the proof. \(\square \)

Remark 3.9

For functions with interior singularities, from the pointwise error analysis developed in [20, 21] we know that the maximum error of \(f_n\) is actually determined by the errors at these interior singularities for moderate and large values of n. In this case, the error bound in (3.11) is optimal in the sense that it cannot be improved with respect to n up to a constant factor; see Fig. 6 for an illustration.

In Fig. 6 we show the maximum error of \(f_n\) and the error bound (3.11) as a function of n for the functions \(f(x)=|x-0.2|\) and \(f(x)=(x-0.5)_{+}^2\). For these two functions, the maximum errors of \(f_n\) are attained at \(x=0.2\) and \(x=0.5\), respectively, for moderate and large values of n. Thus, the error bound in (3.11) can be applied to these two examples. We see from Fig. 6 that the error bound (3.11) is optimal with respect to n up to a constant factor.

Fig. 6. The maximum error \(\Vert f-f_n\Vert _{L^{\infty }(\Omega )}\) (circles) and the error bound (line) for \(f(x)=|x-0.2|\) (left) and \(f(x)=(x-0.5)_{+}^2\) (right).

4 Extensions

In this section we present two extensions of our results in Theorem 2.1, including Gegenbauer–Gauss–Lobatto (GGL) functions and optimal convergence rates of Legendre–Gauss–Lobatto interpolation and differentiation for analytic functions.

4.1 Gegenbauer–Gauss–Lobatto Functions

We introduce the Gegenbauer–Gauss–Lobatto (GGL) functions of the form

$$\begin{aligned} \phi _{n}^{\textrm{GGL}}(x)&= \frac{n+1}{n+2\lambda } \omega _{\lambda }(x) C_{n+1}^{\lambda }(x) - \frac{n+2\lambda -1}{n} \omega _{\lambda }(x) C_{n-1}^{\lambda }(x), \quad n\in \mathbb {N}, \end{aligned}$$
(4.1)

where \(C_{n}^{\lambda }(x)\) is the Gegenbauer polynomial of degree n defined in [12, Equation (18.5.9)] and \(\omega _{\lambda }(x)=(1-x^2)^{\lambda -1/2}\) is the Gegenbauer weight function. Moreover, by using [12, Equation (18.9.8)], they can also be written as

$$\begin{aligned} \phi _{n}^{\textrm{GGL}}(x)&= -\frac{4\lambda (n+\lambda )}{n(n+2\lambda )}\omega _{\lambda +1}(x)C_{n-1}^{\lambda +1}(x). \end{aligned}$$
(4.2)

Notice that \(\phi _{n}^{\textrm{GGL}}(x)=\phi _{n}^{\textrm{LGL}}(x)\) whenever \(\lambda =1/2\), and thus \(\phi _{n}^{\textrm{GGL}}(x)\) can be viewed as a generalization of \(\phi _{n}^{\textrm{LGL}}(x)\); the equivalence of (4.1) and (4.2) can also be checked numerically, as in the sketch below.
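
The following sketch (assuming NumPy and SciPy, whose eval_gegenbauer evaluates \(C_n^{\lambda }(x)\); the tested values of \(\lambda \) and n are illustrative choices) confirms the agreement of (4.1) and (4.2) up to rounding error.

```python
import numpy as np
from scipy.special import eval_gegenbauer

def phi_ggl_def(n, lam, x):
    """GGL function via the definition (4.1)."""
    w = (1.0 - x**2) ** (lam - 0.5)
    return ((n + 1) / (n + 2 * lam) * w * eval_gegenbauer(n + 1, lam, x)
            - (n + 2 * lam - 1) / n * w * eval_gegenbauer(n - 1, lam, x))

def phi_ggl_alt(n, lam, x):
    """GGL function via the alternative form (4.2)."""
    return (-4 * lam * (n + lam) / (n * (n + 2 * lam))
            * (1.0 - x**2) ** (lam + 0.5) * eval_gegenbauer(n - 1, lam + 1, x))

x = np.linspace(-0.999, 0.999, 1001)
for lam in (0.3, 0.5, 1.5):
    for n in (3, 8):
        gap = np.max(np.abs(phi_ggl_def(n, lam, x) - phi_ggl_alt(n, lam, x)))
        print(lam, n, gap)                     # agreement to rounding error
```

We are now ready to prove the following theorem.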

Theorem 4.1

Let \(\phi _{n}^{\textrm{GGL}}(x)\) be the function defined in (4.1) or (4.2) and let \(\lambda >-1/2\) and \(\lambda \ne 0\).

  1. (i)

    \(|\phi _{n}^{\textrm{GGL}}(x)|\) is even and \(\phi _{n}^{\textrm{GGL}}(\pm 1)=0\) for all \(n\in \mathbb {N}\).

  2. (ii)

    For all \(n\in \mathbb {N}\), the derivative of \(\phi _{n}^{\textrm{GGL}}(x)\) is

    $$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x}\phi _{n}^{\textrm{GGL}}(x) = 2(n+\lambda )\omega _{\lambda }(x)C_{n}^{\lambda }(x). \end{aligned}$$
    (4.3)
  3. (iii)

    For all \(n\in \mathbb {N}\), the differential recurrence relation of \(\phi _n^{\textrm{GGL}}(x)\) is

    $$\begin{aligned} \phi _{n}^{\textrm{GGL}}(x)&= \frac{\textrm{d}}{\textrm{d}x} \left[ \frac{(n+1)}{2(n+\lambda +1)(n+2\lambda )} \phi _{n+1}^{\textrm{GGL}}(x) - \frac{(n+2\lambda -1) }{2n(n+\lambda -1)} \phi _{n-1}^{\textrm{GGL}}(x) \right] . \end{aligned}$$
    (4.4)
  4. (iv)

    Let \(\nu =\lfloor (n+1)/2 \rfloor \) for all \(n\in \mathbb {N}\) and let \(x_{\nu }<\cdots <x_1\) be the zeros of \(C_n^{\lambda }(x)\) on the interval [0, 1]. Then, \(|\phi _{n}^{\textrm{GGL}}(x)|\) attains its local maximum values at these points \(\{x_k\}_{k=1}^{\nu }\) and

    $$\begin{aligned} |\phi _{n}^{\textrm{GGL}}(x_1)|<|\phi _{n}^{\textrm{GGL}}(x_2)|<\cdots <|\phi _{n}^{\textrm{GGL}}(x_{\nu })|, \end{aligned}$$
    (4.5)

    for \(\lambda >0\) and

    $$\begin{aligned} |\phi _{n}^{\textrm{GGL}}(x_1)|>|\phi _{n}^{\textrm{GGL}}(x_2)|>\cdots >|\phi _{n}^{\textrm{GGL}}(x_{\nu })|, \end{aligned}$$
    (4.6)

    for \(\lambda <0\).

  5. (v)

    For \(\lambda >0\), the maximum value of \(|\phi _{n}^{\textrm{GGL}}(x)|\) satisfies

    $$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| \le \mathcal {B}_n^{\lambda } := \left\{ \begin{array}{ll} {\displaystyle \frac{4(n+\lambda )}{n(n+2\lambda )} \frac{\Gamma (\frac{n+1}{2}+\lambda )}{\Gamma (\frac{n+1}{2})\Gamma (\lambda )}}, &{} {n=1,3,\ldots ,} \\ {\displaystyle \frac{2(n+\lambda )}{\sqrt{n(n+2\lambda )}} \frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n+2}{2})\Gamma (\lambda )}}, &{} {n=2,4,\ldots .} \end{array} \right. \end{aligned}$$
    (4.7)

    For \(\lambda <0\), the maximum value of \(|\phi _{n}^{\textrm{GGL}}(x)|\) satisfies

    $$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| = |\phi _{n}^{\textrm{GGL}}(x_1)| = O(n^{-1}), \quad n\gg 1. \end{aligned}$$
    (4.8)

Proof

As for (i), the assertion that \(|\phi _{n}^{\textrm{GGL}}(x)|\) is even follows from the symmetry of Gegenbauer polynomials (i.e., \(C_k^{\lambda }(-x)=(-1)^kC_k^{\lambda }(x)\) for all \(k=0,1,\ldots \)), and the assertion that \(\phi _{n}^{\textrm{GGL}}(\pm 1)=0\) follows from (4.2). As for (ii), from [12, Equation (18.9.20)] we know that

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x} \left( \omega _{\lambda +1}(x) C_n^{\lambda +1}(x) \right) = -\frac{(n+1)(n+2\lambda +1)}{2\lambda }\omega _{\lambda }(x) C_{n+1}^{\lambda }(x). \end{aligned}$$
(4.9)

The combination of (4.9) and (4.2) proves (4.3). As for (iii), it is a direct consequence of (4.1) and (4.3). Now we consider the proof of (iv). Since \(|\phi _{n}^{\textrm{GGL}}(x)|\) is even, we only need to consider the maximum values of \(|\phi _{n}^{\textrm{GGL}}(x)|\) at the nonnegative zeros of \(C_n^{\lambda }(x)\). Similar to the argument of Theorem 2.1, we introduce the following auxiliary function

$$\begin{aligned} \psi (x) = \frac{n(n+2\lambda )}{4(n+\lambda )^2} \left( \phi _{n}^{\textrm{GGL}}(x)\right) ^2 + (1-x^2) (\omega _{\lambda }(x) C_n^{\lambda }(x))^2. \end{aligned}$$
(4.10)

Combining (4.2), (4.3) and (4.9) and after some calculations, we get

$$\begin{aligned} \psi {'}(x)&= \frac{n(n+2\lambda )}{(n+\lambda )} \phi _{n}^{\textrm{GGL}}(x)\omega _{\lambda }(x)C_{n}^{\lambda }(x) -2x (\omega _{\lambda }(x) C_n^{\lambda }(x))^2 \nonumber \\&~~~~~~~~ + 2(1-x^2)\omega _{\lambda }(x) C_n^{\lambda }(x) \frac{\textrm{d}}{\textrm{d}x} (\omega _{\lambda }(x) C_n^{\lambda }(x)) \nonumber \\&= 2\omega _{\lambda }(x) C_n^{\lambda }(x) \left[ \frac{n(n+2\lambda )}{2(n+\lambda )}\phi _{n}^{\textrm{GGL}}(x) - x\omega _{\lambda }(x) C_n^{\lambda }(x) \right. \nonumber \\&~~~~~~~~ \left. + (1-x^2)\frac{\textrm{d}}{\textrm{d}x} ( \omega _{\lambda }(x) C_n^{\lambda }(x)) \right] \nonumber \\&= 2\omega _{\lambda }(x) C_n^{\lambda }(x) \bigg [ -2\lambda \omega _{\lambda +1}(x) C_{n-1}^{\lambda +1}(x) - x\omega _{\lambda }(x) C_n^{\lambda }(x) \nonumber \\&~~~~~~~~ \left. -\frac{(n+1)(n+2\lambda -1)}{2(n+\lambda )} \omega _{\lambda }(x) (C_{n+1}^{\lambda }(x) - C_{n-1}^{\lambda }(x)) \right] , \end{aligned}$$
(4.11)

where we have used [12, Equation (18.9.7)] in the last step. Furthermore, invoking [12, Equation (18.9.8)] and [12, Equation (18.9.1)], and after some lengthy but elementary calculations, we obtain that

$$\begin{aligned} \psi {'}(x) = -4\lambda x (\omega _{\lambda }(x) C_n^{\lambda }(x))^2. \end{aligned}$$
(4.12)

It is easily seen that \(\psi (x)\) is strictly decreasing on [0, 1] whenever \(\lambda >0\) and is strictly increasing on [0, 1] whenever \(\lambda <0\), and thus the inequalities (4.5) and (4.6) follow immediately. As for (v), we first consider the case \(\lambda >0\). From (iv) we infer that

$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| \le |\phi _{n}^{\textrm{GGL}}(x_{\nu })|. \end{aligned}$$

In the case when n is odd, it is easily seen that \(x_{\nu }=0\) and thus

$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| \le |\phi _{n}^{\textrm{GGL}}(0)| = \frac{4(n+\lambda )}{n(n+2\lambda )} \frac{\Gamma (\frac{n+1}{2}+\lambda )}{\Gamma (\frac{n+1}{2}) \Gamma (\lambda )}. \end{aligned}$$

This proves the case of odd n. In the case when n is even, noticing that \(\psi (x)\) is strictly decreasing on the interval [0, 1], we obtain

$$\begin{aligned} \psi (x)\le \psi (0) \quad \Longrightarrow \quad \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| \le \frac{2(n+\lambda )}{\sqrt{n(n+2\lambda )}} \frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n+2}{2}) \Gamma (\lambda )}. \end{aligned}$$

This proves the case of even n. Finally, we consider the case of \(\lambda <0\). On the one hand, from (4.6) we obtain immediately that \(\max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| = |\phi _{n}^{\textrm{GGL}}(x_1)|\). On the other hand, from [15, Theorem 8.9.1] we obtain, for \(n\gg 1\), that

$$\begin{aligned} x_1 = \cos \left( \frac{\pi }{n} + O(1) \right) , \quad \frac{\textrm{d}}{\textrm{d}x}C_n^{\lambda }(x)\big |_{x=x_1} = O(n^{2\lambda +1}). \end{aligned}$$

Combining these with (4.2) gives the desired estimate. This completes the proof. \(\square \)

As a direct consequence of Theorem 4.1, we have the following corollary.

Corollary 4.2

For \(\lambda \ge 1\), we have

$$\begin{aligned} \max _{x\in \Omega }|\omega _{\lambda }(x)C_{n}^{\lambda }(x)|&\le \left\{ \begin{array}{ll} {\displaystyle \frac{1}{\Gamma (\lambda )}\frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n}{2}+1)}}, &{} {n=0,2,4,\ldots ,} \\ {\displaystyle \sqrt{\frac{n+2\lambda -1}{n+1}} \frac{\Gamma (\frac{n-1}{2}+\lambda )}{\Gamma (\lambda )\Gamma (\frac{n+1}{2})}}, &{} {n=1,3,5,\ldots .} \end{array} \right. \end{aligned}$$
(4.13)

Proof

It follows by combining (4.2) and (4.7). \(\square \)

Remark 4.3

Based on a Nicholson-type formula for Gegenbauer polynomials, Durand proved the following inequality for \(\lambda \ge 1\) (see [5, Equation (19)])

$$\begin{aligned} \max _{x\in \Omega }|\omega _{\lambda }(x)C_{n}^{\lambda }(x)|&\le \frac{1}{\Gamma (\lambda )} \frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n}{2}+1)}, \quad n=0,1,2,\ldots . \end{aligned}$$
(4.14)

Comparing (4.14) with (4.13), it is easily seen that both bounds are the same whenever n is even. In the case of odd n, however, direct calculations show that

$$\begin{aligned} \sqrt{\frac{n+2\lambda -1}{n+1}} \frac{\Gamma (\frac{n-1}{2}+\lambda )}{\Gamma (\frac{n+1}{2})} \le \frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n}{2}+1)}, \quad n=1,3,\ldots , \end{aligned}$$

and thus we have derived an improved bound for \(\max _{x\in \Omega }|\omega _{\lambda }(x)C_{n}^{\lambda }(x)|\) whenever n is odd.

Remark 4.4

Making use of the asymptotic expansion of the ratio of gamma functions (see, e.g., [12, Equation (5.11.13)]), it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty } n^{1-\lambda } \mathcal {B}_n^{\lambda } = \frac{2^{2-\lambda }}{\Gamma (\lambda )}. \end{aligned}$$
(4.15)

Clearly, we see that \(\mathcal {B}_n^{\lambda }=O(n^{\lambda -1})\) for large n. Direct calculations show that the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k=1}^{\infty }\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k=1}^{\infty }\) are strictly decreasing whenever \(0<\lambda \le 1\) and the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k=2}^{\infty }\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k=1}^{\infty }\) are strictly increasing whenever \(\lambda \ge 1.2\). For \(\lambda \in (1,1.2)\), there exists a positive integer \(\varrho _{\lambda }\in \mathbb {N}\) such that the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k\ge \varrho _{\lambda }}\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k\ge \varrho _{\lambda }}\) are strictly increasing.
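
The quantities in this remark are straightforward to evaluate numerically; the sketch below (assuming NumPy and SciPy, with the log-gamma function used to avoid overflow; the value of \(\lambda \) is an illustrative choice) computes \(\mathcal {B}_n^{\lambda }\) from (4.7) and compares \(n^{1-\lambda }\mathcal {B}_n^{\lambda }\) with the limit (4.15).

```python
import numpy as np
from scipy.special import gamma, gammaln

def B(n, lam):
    """The bound B_n^lambda in (4.7), computed via log-gamma for numerical stability."""
    if n % 2:   # n odd
        logB = (np.log(4.0 * (n + lam)) - np.log(n * (n + 2.0 * lam))
                + gammaln((n + 1) / 2 + lam) - gammaln((n + 1) / 2) - gammaln(lam))
    else:       # n even
        logB = (np.log(2.0 * (n + lam)) - 0.5 * np.log(n * (n + 2.0 * lam))
                + gammaln(n / 2 + lam) - gammaln(n / 2 + 1) - gammaln(lam))
    return np.exp(logB)

lam = 1.5
for n in (10, 11, 100, 101, 1000, 1001, 10000):
    print(n, n ** (1.0 - lam) * B(n, lam))     # approaches 2^(2 - lam)/Gamma(lam)
print(2 ** (2.0 - lam) / gamma(lam))           # the limit in (4.15)
```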

4.2 Optimal Convergence Rates of Legendre–Gauss–Lobatto Interpolation and Differentiation for Analytic Functions

Legendre–Gauss–Lobatto interpolation is widely used in the numerical solution of differential and integral equations (see, e.g., [3, 6, 14]). Let \(p_n(x)\) be the unique polynomial of degree n which interpolates f(x) at the \(n+1\) zeros of \(\phi _n^{\textrm{LGL}}(x)\); this polynomial is the LGL interpolant of degree n. If f is analytic inside and on the ellipse \(\mathcal {E}_{\rho }\) for some \(\rho >1\), convergence rates of LGL interpolation and differentiation in the maximum norm have been thoroughly studied in [26] with the help of Hermite's contour integral. In particular, the following error bounds were proved (setting \(\lambda =1/2\) in [26, Theorem 4.3]):

$$\begin{aligned} \Vert f-p_n\Vert _{L^{\infty }(\Omega )} \le \mathcal {K} \frac{M(\rho )\sqrt{\pi (\rho ^2+\rho ^{-2})}(1+\rho ^{-2})^{3/2}}{(1-\rho ^{-1})^2(\rho -\rho ^{-1})^2} \frac{n^{3/2}}{\rho ^{n}}, \end{aligned}$$
(4.16)

and

$$\begin{aligned} \max _{0\le j\le n} \left| f{'}(x_j) - p_n{'}(x_j) \right| \le \mathcal {K} \frac{M(\rho )\sqrt{\pi (\rho ^2+\rho ^{-2})}(1+\rho ^{-2})^{3/2}}{4(1-\rho ^{-1})^2(\rho -\rho ^{-1})^2} \frac{n^{7/2}}{\rho ^{n}}, \end{aligned}$$
(4.17)

where \(\mathcal {K}\approx 1\) is a generic positive constant and \(\{x_j\}_{j=0}^{n}\) are the LGL points (i.e., the zeros of \(\phi _n^{\textrm{LGL}}(x)\)). It is clear that the above results imply that the rate of convergence of \(p_n(x)\) in the \(L^{\infty }\) norm is \(O(n^{3/2}\rho ^{-n})\) and the maximum error of LGL spectral differentiation is \(O(n^{7/2}\rho ^{-n})\).

In the following, we shall improve the results (4.16) and (4.17) by using Theorem 2.1 and show that the factor \(n^{3/2}\) in (4.16) can actually be removed and the factor \(n^{7/2}\) in (4.17) can be improved to \(n^{3/2}\). We state our main results in the following theorem.

Theorem 4.5

If f is analytic inside and on the ellipse \(\mathcal {E}_{\rho }\) for some \(\rho >1\), then

$$\begin{aligned} \Vert f - p_n\Vert _{L^{\infty }(\Omega )} \le \frac{\mathcal {K} \sqrt{2} M(\rho ) L(\mathcal {E}_{\rho })}{\textrm{d}(\Omega ,\mathcal {E}_{\rho }) \pi \sqrt{\rho ^2-1}} \frac{1}{\rho ^n}, \end{aligned}$$
(4.18)

and

$$\begin{aligned} \max _{0\le j\le n} \left| f{'}(x_j) - p_n{'}(x_j) \right|&\le \frac{\mathcal {K} \sqrt{2} M(\rho ) L(\mathcal {E}_{\rho })}{\textrm{d}(\Omega ,\mathcal {E}_{\rho }) \sqrt{\pi (\rho ^2-1)}} \frac{n^{3/2}}{\rho ^n}, \end{aligned}$$
(4.19)

where \(\mathcal {K}\) is a generic positive constant with \(\mathcal {K}\approx 1\) for \(n\gg 1\), and \(\textrm{d}(\Omega ,\mathcal {E}_{\rho })\) denotes the distance from \(\Omega \) to \(\mathcal {E}_{\rho }\).

Proof

From the Hermite integral formula [4, Theorem 3.6.1] we know that the remainder of the LGL interpolant can be written as

$$\begin{aligned} f(x) - p_n(x) = \frac{1}{2\pi i}\oint _{\mathcal {E}_{\rho }} \frac{\phi _n^{\textrm{LGL}}(x) f(z)}{\phi _n^{\textrm{LGL}}(z)(z-x)} \textrm{d}z, \end{aligned}$$
(4.20)

from which we can deduce immediately that

$$\begin{aligned} \Vert f - p_n\Vert _{L^{\infty }(\Omega )}&\le \frac{\max _{x\in \Omega }|\phi _n^{\textrm{LGL}}(x)|}{\min _{z\in \mathcal {E}_{\rho }}|\phi _n^{\textrm{LGL}}(z)|} \times \frac{M(\rho ) L(\mathcal {E}_{\rho })}{2\pi \textrm{d}(\Omega ,\mathcal {E}_{\rho })} \nonumber \\&= \frac{1}{\min _{z\in \mathcal {E}_{\rho }}|\phi _n^{\textrm{LGL}}(z)|} \times \frac{2 M(\rho ) L(\mathcal {E}_{\rho })}{\pi \textrm{d}(\Omega ,\mathcal {E}_{\rho }) \sqrt{2n\pi }}, \end{aligned}$$
(4.21)

where we used (2.3) in the last step. Moreover, combining (4.20) with (2.4) we obtain

$$\begin{aligned} \max _{0\le j\le n} \left| f{'}(x_j) - p_n{'}(x_j) \right|&\le (2n+1) \frac{\max _{0\le j\le n}|P_n(x_j)|}{\min _{z\in \mathcal {E}_{\rho }}|\phi _n^{\textrm{LGL}}(z)|} \times \frac{M(\rho ) L(\mathcal {E}_{\rho })}{2\pi \textrm{d}(\Omega ,\mathcal {E}_{\rho })} \nonumber \\&= \frac{(2n+1)}{\min _{z\in \mathcal {E}_{\rho }}|\phi _n^{\textrm{LGL}}(z)|} \times \frac{M(\rho ) L(\mathcal {E}_{\rho })}{2\pi \textrm{d}(\Omega ,\mathcal {E}_{\rho })}, \end{aligned}$$
(4.22)

where we have used the fact that \(|P_k(x)|\le 1\) in the last step. In order to establish sharp error bounds for the LGL interpolation and differentiation, it is necessary to find the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) for \(z\in \mathcal {E}_{\rho }\). Owing to [23, Equation (5.14)] we infer that

$$\begin{aligned} P_n(z) = \frac{u^n}{\sqrt{n\pi (1-u^{-2})}} \left[ 1 + \frac{1}{4n} \left( \frac{1}{u^2-1} - \frac{1}{2}\right) + O(n^{-2}) \right] , \quad n\gg 1, \end{aligned}$$

and, after some calculations, we obtain that

$$\begin{aligned} \min _{z\in \mathcal {E}_{\rho }}|\phi _n^{\textrm{LGL}}(z)| \ge \mathcal {K} \rho ^n \sqrt{\frac{\rho ^2-1}{n\pi }}, \end{aligned}$$
(4.23)

and \(\mathcal {K}\approx 1\) for \(n\gg 1\). Combining this with (4.21) and (4.22) gives the desired results. This ends the proof. \(\square \)

Remark 4.6

It is easily seen that the rate of convergence of \(p_n(x)\) in the \(L^{\infty }\) norm is \(O(\rho ^{-n})\), and thus we have improved the existing result (4.16). Moreover, the rate of convergence of LGL spectral differentiation is \(O(n^{3/2}\rho ^{-n})\), and thus we have improved the existing result (4.17).

In the proof of Theorem 4.5, we have used an asymptotic estimate of the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) for \(z\in \mathcal {E}_{\rho }\). Now we provide a more detailed observation on this issue. By parameterizing the ellipse with \(z=(\rho e^{i\theta } + (\rho e^{i\theta })^{-1})/2\) with \(\rho >1\) and \(0\le \theta <2\pi \), we plot \(|\phi _n^{\textrm{LGL}}(z)|\) in Fig. 7 for several values of \(\rho \) and n. Clearly, we observe that the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) is always attained at \(\theta =0,\pi \). This observation inspires us to raise the following conjecture:

Conjecture

For \(n\in \mathbb {N}\) and \(\rho >1\),

$$\begin{aligned} \min _{z\in \mathcal {E}_{\rho }}\left| \phi _n^{\textrm{LGL}}(z) \right| = \left| \phi _n^{\textrm{LGL}}(z_0) \right| , \end{aligned}$$
(4.24)

where \(z_0=\pm (\rho +\rho ^{-1})/2\).

Fig. 7. The plot of \(|\phi _n^{\textrm{LGL}}(z)|\) as a function of \(\theta \). Top row shows \(n=3\) and \(\rho =1.05\) (left), \(\rho =1.25\) (middle) and \(\rho =1.5\) (right). Bottom row shows \(n=8\) and \(\rho =1.05\) (left), \(\rho =1.25\) (middle) and \(\rho =1.32\) (right).

To provide some insights into this conjecture, after some calculations we obtain

$$\begin{aligned} \left| \phi _1^{\textrm{LGL}}(z) \right|&= \frac{3}{8} \left( \rho ^2 + \frac{1}{\rho ^2} - 2\cos (2\theta ) \right) , \\ \left| \phi _2^{\textrm{LGL}}(z) \right|&= \frac{5}{16} \left[ \left( \left( \rho ^2 + \frac{1}{\rho ^2}\right) ^2 - 4\cos ^2(2\theta ) \right) \left( \rho ^2 + \frac{1}{\rho ^2} - 2\cos (2\theta ) \right) \right] ^{1/2}. \end{aligned}$$

It is easily seen that the minimum values of \(\left| \phi _1^{\textrm{LGL}}(z) \right| \) and \(\left| \phi _2^{\textrm{LGL}}(z) \right| \) are always attained at \(\theta =0,\pi \), which confirms the above conjecture for \(n=1,2\). For \(n\ge 3\), however, \(\left| \phi _n^{\textrm{LGL}}(z)\right| \) will involve a rather lengthy expression and it would be infeasible to find the minimum value of \(\left| \phi _n^{\textrm{LGL}}(z) \right| \) from its explicit expression. We will pursue the proof of this conjecture in future work.

Example 4.7

We consider the following Runge function

$$\begin{aligned} f(x)= \frac{1}{1+(ax)^2}, \quad a>0. \end{aligned}$$
(4.25)

It is easily verified that this function has a pair of poles at \(z=\pm i/a\) and thus the rate of convergence of \(p_n(x)\) is \(O(\rho ^{-n})\) with \(\rho =(1+\sqrt{a^2+1})/a\). In Fig. 8 we illustrate the maximum errors of LGL interpolants for two values of a. In our implementation, the LGL interpolants \(p_n(x)\) are computed by using the second barycentric formula

$$\begin{aligned} p_n(x) = \frac{\displaystyle \sum _{j=0}^{n}\frac{w_j}{x-x_j} f(x_j)}{\displaystyle \sum _{j=0}^{n}\frac{w_j}{x-x_j}}, \end{aligned}$$
(4.26)

where \(\{w_j\}_{j=0}^{n}\) are the barycentric weights associated with the LGL points; an algorithm for computing these weights is described in [24]. Clearly, we see that the numerical results are in good agreement with our theoretical analysis.
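
A minimal self-contained sketch of this computation (assuming NumPy and SciPy; the barycentric weights are computed here by the direct \(O(n^2)\) product formula rather than by the algorithm of [24], and the interior LGL points are obtained as the zeros of \(P_n{'}\), which are available through scipy.special.roots_jacobi since \(P_n{'}\) is proportional to the Jacobi polynomial \(P_{n-1}^{(1,1)}\)):

```python
import numpy as np
from scipy.special import roots_jacobi

def lgl_points(n):
    """The n+1 LGL points: +-1 together with the zeros of P_n'(x), i.e. the zeros
    of phi_n^LGL (cf. (4.2) with lambda = 1/2)."""
    return np.concatenate(([-1.0], roots_jacobi(n - 1, 1.0, 1.0)[0], [1.0]))

def bary_weights(x):
    """Barycentric weights w_j = 1/prod_{k != j}(x_j - x_k), rescaled for stability
    (a direct O(n^2) formula; the paper uses the algorithm of [24] instead)."""
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)
    w = 1.0 / np.prod(d, axis=1)
    return w / np.max(np.abs(w))

def bary_interp(x, fx, w, t):
    """Evaluate the second barycentric formula (4.26) at the points t."""
    d = t[:, None] - x[None, :]
    hit = np.isclose(d, 0.0)                   # t coincides with a node
    d[hit] = 1.0
    r = w / d
    p = (r @ fx) / r.sum(axis=1)
    rows, cols = np.nonzero(hit)
    p[rows] = fx[cols]                         # interpolation is exact at the nodes
    return p

a = 5.0
f = lambda t: 1.0 / (1.0 + (a * t) ** 2)
rho = (1 + np.sqrt(a**2 + 1)) / a
tt = np.linspace(-1.0, 1.0, 5001)
for n in (10, 20, 40):
    x = lgl_points(n)
    w = bary_weights(x)
    err = np.max(np.abs(f(tt) - bary_interp(x, f(x), w, tt)))
    print(n, err, err * rho**n)                # err * rho^n should stay bounded, cf. (4.18)
```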

Fig. 8. The maximum error \(\Vert f-p_n\Vert _{L^{\infty }(\Omega )}\) (circles) and the predicted convergence rate (line) for \(a=5\) (left) and \(a=6\) (right).

In Fig. 9 we illustrate the maximum errors of LGL spectral differentiation. In our implementation, we first compute the barycentric weights \(\{w_j\}_{j=0}^{n}\) by the algorithm in [24] and then compute the differentiation matrix D by

$$\begin{aligned} D_{j,k} = \left\{ \begin{array}{ll} {\displaystyle \frac{w_k/w_j}{x_j-x_k}}, &{} {j\ne k,} \\ {\displaystyle -\sum _{k\ne j} D_{j,k}}, &{}{j=k.} \end{array} \right. \end{aligned}$$
(4.27)

Finally, \((p_n{'}(x_0),\ldots ,p_n{'}(x_n))^T\) is evaluated by multiplying the differentiation matrix D with \((f(x_0),\ldots ,f(x_n))^T\). From (4.19) we know that the predicted rate of convergence of LGL spectral differentiation is \(O(n^{3/2}((1+\sqrt{a^2+1})/a)^{-n})\). Clearly, we can observe from Fig. 9 that the predicted rate of convergence is consistent with the errors of LGL spectral differentiation.
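
A corresponding sketch for the differentiation matrix (4.27) (again assuming NumPy and SciPy; the node and weight helpers are repeated from the previous sketch, and the exact derivative of the Runge function (4.25) with \(a=5\) is used for comparison):

```python
import numpy as np
from scipy.special import roots_jacobi

def lgl_points(n):                             # as in the previous sketch
    return np.concatenate(([-1.0], roots_jacobi(n - 1, 1.0, 1.0)[0], [1.0]))

def bary_weights(x):                           # as in the previous sketch
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)
    w = 1.0 / np.prod(d, axis=1)
    return w / np.max(np.abs(w))

def diff_matrix(x, w):
    """Differentiation matrix (4.27): off-diagonal entries (w_k/w_j)/(x_j - x_k),
    diagonal entries by the negative-sum trick."""
    D = (w[None, :] / w[:, None]) / (x[:, None] - x[None, :] + np.eye(len(x)))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

a = 5.0
f = lambda t: 1.0 / (1.0 + (a * t) ** 2)
fp = lambda t: -2.0 * a**2 * t / (1.0 + (a * t) ** 2) ** 2   # exact derivative
for n in (10, 20, 40):
    x = lgl_points(n)
    D = diff_matrix(x, bary_weights(x))
    print(n, np.max(np.abs(D @ f(x) - fp(x))))               # cf. the rate in (4.19)
```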

Fig. 9. The maximum error of LGL spectral differentiation (circles) and the predicted convergence rate (line) for \(a=5\) (left) and \(a=6\) (right).

5 Conclusion

In this paper, we have studied the error analysis of Legendre approximations for differentiable functions from a new perspective. We introduced the sequence of LGL polynomials \(\{\phi _0^{\textrm{LGL}}(x),\phi _1^{\textrm{LGL}}(x),\ldots \}\) and proved their theoretical properties. Based on these properties, we derived a new explicit bound for the Legendre coefficients of differentiable functions. We then obtained an optimal error bound for Legendre projections in the \(L^2\) norm and an optimal error bound for Legendre projections in the \(L^{\infty }\) norm under the condition that the maximum error of Legendre projections is attained in the interior of \(\Omega \). Numerical examples were provided to demonstrate the sharpness of our new results. Finally, we presented two extensions of our analysis, including Gegenbauer–Gauss–Lobatto (GGL) functions \(\{\phi _1^{\textrm{GGL}}(x),\phi _2^{\textrm{GGL}}(x),\ldots \}\) and optimal convergence rates of Legendre–Gauss–Lobatto interpolation and differentiation for analytic functions.

In future work, we will explore the extensions of the current study to a more general setting, such as finding some new upper bounds for the weighted Jacobi polynomials (see, e.g., [9]) and establishing some sharp error bounds for Gegenbauer approximations of differentiable functions.