Abstract
In this paper we present a new perspective on error analysis for Legendre approximations of differentiable functions. We start by introducing a sequence of Legendre–Gauss–Lobatto polynomials and prove their theoretical properties, including an explicit and optimal upper bound. We then apply these properties to derive a new explicit bound for the Legendre coefficients of differentiable functions. Building on this, we establish an explicit and optimal error bound for Legendre approximations in the \(L^2\) norm and an explicit and optimal error bound for Legendre approximations in the \(L^{\infty }\) norm under the condition that their maximum error is attained in the interior of the interval. Illustrative examples are provided to demonstrate the sharpness of our new results.
1 Introduction
Legendre approximations are among the most fundamental tools in scientific computing, underpinning Gauss-type quadrature, finite element methods and spectral methods for the numerical solution of differential equations (see, e.g., [3, 6, 7, 14]). One of the most remarkable advantages of Legendre approximations is that their accuracy depends solely upon the smoothness of the underlying functions. From both theoretical and applied perspectives, it is of particular importance to study error estimates of various Legendre approximation methods, such as Legendre projection and interpolation.
Over the past one hundred years, there has been a continuing interest in developing error estimates and error bounds for classical spectral approximations (i.e., Chebyshev, Legendre, Jacobi, Laguerre and Hermite approximations) and many results can be found in monographs on approximation theory, orthogonal polynomials and spectral methods (see, e.g., [3, 4, 8, 11, 14,15,16]). However, the existing results have several typical drawbacks: (i) optimal error estimates of Laguerre and Hermite approximations are far from satisfactory; (ii) error estimates and error bounds for Legendre and, more generally, Gegenbauer and Jacobi approximations of differentiable functions might be suboptimal. Indeed, for the former, little literature is available on optimal error estimates or sharp error bounds for Laguerre and Hermite interpolation. For the latter, even for the very simple function \(f(x)=|x|\), the error bounds of its Legendre approximation of degree n in the maximum norm in [10, 18, 22] are suboptimal since they all behave like \(O(n^{-1/2})\) as \(n\rightarrow \infty \), while the actual convergence rate is \(O(n^{-1})\) (see [19, Theorem 3]). In view of these drawbacks, optimal error estimates and sharp error bounds for classical spectral approximations have received renewed interest in recent years. We refer to the monograph [16] and the literature [2, 10, 17,18,19,20,21,22, 25, 26] for more detailed discussions.
Let \(\Omega =[-1,1]\) and let \(P_k(x)\) be the Legendre polynomial of degree k. It is well known that the sequence \(\{P_k(x)\}_{k=0}^{\infty }\) forms a complete orthogonal system on \(\Omega \) and
$$\begin{aligned} \int _{-1}^{1} P_j(x) P_k(x) \textrm{d}x = \frac{2}{2k+1} \delta _{j,k}, \end{aligned}$$(1.1)
where \(\delta _{j,k}\) is the Kronecker delta. For any \(f\in {L}^2(\Omega )\), its Legendre projection of degree n is defined by
$$\begin{aligned} f_n(x) = \sum _{k=0}^{n} a_k P_k(x), \quad a_k = \left( k+\frac{1}{2}\right) \int _{-1}^{1} f(x) P_k(x) \textrm{d}x. \end{aligned}$$(1.2)
In order to analyze the error estimate of Legendre projections in both the \(L^2\) and \(L^{\infty }\) norms, sharp estimates of the Legendre coefficients play an important role in the analysis (see, e.g., [10, 17,18,19, 21, 22, 25]). Indeed, these estimates are useful not only in understanding the convergence rates of Legendre projections but useful also in estimating the degree of the Legendre projection to approximate f(x) within a prescribed accuracy. In recent years, sharp estimates of the Legendre coefficients have experienced rapid development. In the case when f(x) is analytic inside and on the Bernstein ellipse
$$\begin{aligned} \mathcal {E}_{\rho } = \left\{ z\in \mathbb {C}: \; z = \frac{1}{2}\left( u + u^{-1}\right) , \; |u| = \rho \right\} \end{aligned}$$(1.3)
for some \(\rho >1\), an explicit and sharp bound for the Legendre coefficients was given in [19, Lemma 2]
where \(D(\rho )=2M(\rho )L(\mathcal {E}_{\rho })/(\pi \sqrt{\rho ^2-1})\), \(M(\rho )=\max _{z\in \mathcal {E}_{\rho }} |f(z)|\), and \(L(\mathcal {E}_{\rho })\) is the length of the circumference of \(\mathcal {E}_{\rho }\). As a direct consequence, for each \(n\ge 0\), it was proved in [19, Theorem 2] that
Moreover, another direct consequence of (1.4) is the error bound of Legendre projection in the \(L^2\) norm:
In the case when f(x) is differentiable but not analytic on the interval \(\Omega \), upper bounds for the Legendre coefficients were extensively studied in [10, 18, 22, 25]. However, those bounds in [18, 22] depend on some semi-norms of high-order derivatives of f(x) which may be overestimated, especially when the singularity is close to endpoints, and those bounds in [10, 25] involve ratios of gamma functions, which are less favorable since their asymptotic behavior and computation still require further treatment.
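The coefficient estimates discussed in this paper are easy to examine numerically. As a minimal sketch (the function name and quadrature size below are our own choices, not taken from the references), the Legendre coefficients \(a_k=(k+1/2)\int _{-1}^{1}f(x)P_k(x)\,\textrm{d}x\) can be approximated by Gauss–Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_coefficients(f, n, quad_pts=200):
    """Approximate a_k = (k + 1/2) * int_{-1}^{1} f(x) P_k(x) dx for k = 0..n
    by Gauss-Legendre quadrature (exact up to rounding when f is a polynomial
    of degree <= 2*quad_pts - 1 - n)."""
    x, w = leg.leggauss(quad_pts)
    fx = f(x)
    return np.array([(k + 0.5) * np.dot(w, fx * leg.Legendre.basis(k)(x))
                     for k in range(n + 1)])

# Sanity check: x^2 = (1/3) P_0(x) + (2/3) P_2(x)
a = legendre_coefficients(lambda x: x**2, 4)
```

With 200 nodes the quadrature is exact for polynomial integrands, so the coefficients of \(x^2\) are recovered to machine precision.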
In this paper, we give a new perspective on error bounds of Legendre approximations for differentiable functions. We start by introducing the Legendre–Gauss–Lobatto (LGL) polynomials
$$\begin{aligned} \phi _k^{\textrm{LGL}}(x) = P_{k+1}(x) - P_{k-1}(x), \quad k=1,2,\ldots , \end{aligned}$$(1.7)
and then proving theoretical properties of these polynomials, including the differential recurrence relation and an optimal and explicit bound for their maximum value on \(\Omega \). Based on LGL polynomials, we obtain a new explicit and sharp bound for the Legendre coefficients of differentiable functions, which is sharper than the result in [18] and is more informative than the results in [10, 25]. Building on this new bound, we then establish an explicit and optimal error bound for Legendre projections in the \(L^2\) norm and an explicit and optimal error bound for Legendre projections in the \(L^\infty \) norm whenever the maximum error of \(f_n\) is attained in the interior of \(\Omega \). We emphasize that, in contrast to the results in [10, 25], which involve ratios of gamma functions, our results are more explicit and informative.
This paper is organized as follows. In Sect. 2 we prove some theoretical properties of the LGL polynomials. In Sect. 3 we establish a new bound for the Legendre coefficients of differentiable functions, which improves the existing result in [18]. Building on this bound, we establish some new error bounds of Legendre projections in both \(L^2\) and \(L^{\infty }\) norms. In Sect. 4 we present two extensions, including the extension of LGL polynomials to Gegenbauer–Gauss–Lobatto functions and optimal convergence rates of LGL interpolation and differentiation for analytic functions. Finally, we give some concluding remarks in Sect. 5.
2 Properties of Legendre–Gauss–Lobatto Polynomials
In this section, we establish some theoretical properties of LGL polynomials \(\{\phi _k^{\textrm{LGL}}\}\). Our main result is stated in the following theorem.
Theorem 2.1
Let \(\phi _k^{\textrm{LGL}}(x)\) be defined in (1.7). Then, the following properties hold:
-
(i)
\(|\phi _k^{\textrm{LGL}}(x)|\) is even for \(k=0,1,\ldots \) and \(\phi _k^{\textrm{LGL}}(\pm 1)=0\) for \(k=1,2,\ldots \).
-
(ii)
The following differential recurrence relation holds
$$\begin{aligned} \phi _k^{\textrm{LGL}}(x) = \frac{\textrm{d}}{\textrm{d}x} \left( \frac{\phi _{k+1}^{\textrm{LGL}}(x)}{2k+3} - \frac{\phi _{k-1}^{\textrm{LGL}}(x)}{2k-1} \right) , \quad k=1,2,\ldots . \end{aligned}$$(2.1) -
(iii)
Let \(\nu =\lfloor (n+1)/2 \rfloor \) and let \(x_{\nu }<\cdots <x_1\) be the zeros of \(P_n(x)\) on the interval [0, 1]. Then, \(|\phi _n^{\textrm{LGL}}(x)|\) attains its local maximum values at these points and
$$\begin{aligned} |\phi _n^{\textrm{LGL}}(x_1)|<|\phi _n^{\textrm{LGL}}(x_2)|<\cdots <|\phi _n^{\textrm{LGL}}(x_{\nu })|. \end{aligned}$$(2.2) -
(iv)
The maximum value of \(|\phi _{n}^{\textrm{LGL}}(x)|\) satisfies
$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)| \le \frac{4}{\sqrt{2\pi n}}, \quad n=1,2,\ldots , \end{aligned}$$(2.3)
and the bound on the right-hand side is optimal in the sense that it cannot be improved further.
Proof
As for (i), both assertions follow from the properties of Legendre polynomials, i.e., \(P_k(-x)=(-1)^kP_k(x)\) and \(P_k(\pm 1)=(\pm 1)^k\) for \(k=0,1,\ldots \). As for (ii), we obtain from [15, Equation (4.7.29)] that
$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x} \phi _k^{\textrm{LGL}}(x) = (2k+1) P_k(x), \quad k=1,2,\ldots . \end{aligned}$$(2.4)
The identity (2.1) follows immediately from (2.4). As for (iii), due to (2.4) we know that \(|\phi _n^{\textrm{LGL}}(x)|\) attains its local maximum values at the zeros of \(P_n(x)\). Since \(|\phi _n^{\textrm{LGL}}(x)|\) is an even function on \(\Omega \), we only need to consider the maximum values of \(|\phi _n^{\textrm{LGL}}(x)|\) at the nonnegative zeros of \(P_n(x)\). To this end, we introduce an auxiliary function
Direct calculation of the derivative of \(\psi (x)\) by using (2.4) gives
where we have used the first equality of [15, Equation (4.7.27)] in the last step. From (2.6) it is easy to see that \(\psi {'}(x)<0\) whenever \(x\in [0,1]\) and \(x\ne x_j\) for \(j=1,\ldots ,\nu \), and thus \(\psi (x)\) is strictly decreasing on the interval [0, 1]. Combining this with (2.5) gives the desired result (2.2). As for (iv), it follows from (2.2) that
Now, we first consider the case where n is odd. In this case, we know that \(x_{\nu }=0\) and thus
A straightforward calculation shows that the sequence \(\{ n^{1/2}\chi _n\}_{n=1}^{\infty }\) is a strictly increasing sequence, and thus
This proves the case of odd n. We next consider the case where n is even. Recalling that \(\psi (x)\) is strictly decreasing on the interval [0, 1], we obtain that
A straightforward calculation shows that the sequence \(\{n^{1/2}\zeta _n\}_{n=1}^{\infty }\) is a strictly increasing sequence, and thus
This proves the case of even n. Since the constant \(4/\sqrt{2\pi }\) is the limit of the sequences \(\{ n^{1/2}\chi _n\}_{n=1}^{\infty }\) and \(\{n^{1/2}\zeta _n\}_{n=1}^{\infty }\), the bound in (2.3) is optimal in the sense that it cannot be reduced further. This completes the proof. \(\square \)
In Fig. 1 we plot the function \(|\phi _n^{\textrm{LGL}}(x)|\) and the points \(\{(x_j,|\phi _n^{\textrm{LGL}}(x_j)|)\}_{j=1}^{\nu }\) for \(n=5,10,15\). Indeed, it is easily seen that the sequence \(\{|\phi _n^{\textrm{LGL}}(x_1)|,\ldots ,|\phi _n^{\textrm{LGL}}(x_{\nu })|\}\) is strictly increasing, which coincides with the result (iii) in Theorem 2.1. To verify the sharpness of the result (iv) in Theorem 2.1, we plot in Fig. 2 the scaled function \(\phi _n^{S}(x)=|\phi _{n}^{\textrm{LGL}}(x)|\sqrt{2\pi n}/4\) for \(x\in \Omega \). Clearly, we observe that the maximum value of \(|\phi _n^{S}(x)|\) approaches one as \(n\rightarrow \infty \), which implies that the inequality (2.3) is quite sharp.
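The sharpness of (2.3) can also be checked directly. The sketch below assumes the realization \(\phi _n^{\textrm{LGL}}(x)=P_{n+1}(x)-P_{n-1}(x)\), which is consistent with the properties established above up to sign; this concrete form is our assumption for illustration, and only \(|\phi _n^{\textrm{LGL}}|\) enters the bound:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi_lgl(n, x):
    # Assumed realization phi_n = P_{n+1} - P_{n-1}; the sign convention is
    # immaterial here since only |phi_n| enters the bound (2.3).
    return leg.Legendre.basis(n + 1)(x) - leg.Legendre.basis(n - 1)(x)

x = np.linspace(-1.0, 1.0, 20001)
for n in (5, 10, 15, 200, 201):
    grid_max = np.max(np.abs(phi_lgl(n, x)))
    bound = 4.0 / np.sqrt(2.0 * np.pi * n)
    print(n, grid_max / bound)   # scaled maximum: stays below 1, approaches 1
```

The printed ratios mirror Fig. 2: they are always below one and tend to one as n grows.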
Remark 2.2
In the proof of Theorem 2.1, we have actually proved the following sharper inequality
Since the bound on the right-hand side of (2.8) involves a ratio of gamma functions, we have established the simpler upper bound for \(\max _{x\in \Omega }|\phi _{n}^{\textrm{LGL}}(x)|\) in (2.3). On the other hand, we mention that a rough estimate for \(\phi _n^{\textrm{LGL}}(x)\) was given in the classical monograph [15, Theorem 7.33.3]:
and the bound of the factor \(O(n^{-1/2})\) is independent of \(\theta \). Comparing (2.9) with (2.3), it is clear that the latter is more precise than the former.
Remark 2.3
The polynomials \(\{\phi _k^{\textrm{LGL}}\}\) can also be viewed as weighted Gegenbauer or Sobolev orthogonal polynomials. Indeed, from (4.2) in Sect. 4 we know that each \(\phi _k^{\textrm{LGL}}(x)\), where \(k\ge 1\), can be expressed by a weighted Gegenbauer polynomial up to a constant factor. On the other hand, it is easily verified that \(\{\phi _k^{\textrm{LGL}}\}\) are orthogonal with respect to the Sobolev inner product of the form
and hence they can also be viewed as Sobolev orthogonal polynomials.
3 A New Explicit Bound for Legendre Coefficients of Differentiable Functions
In this section, we establish a new explicit bound for the Legendre coefficients of differentiable functions with the help of the properties of LGL polynomials. As will be shown later, our new result is better than the existing result in [18] and is more informative than the result in [25].
Let the total variation of f(x) on \(\Omega \) be defined by
$$\begin{aligned} \mathcal {V}(f,\Omega ) = \sup \sum _{k=1}^{n} \left| f(x_k) - f(x_{k-1})\right| , \end{aligned}$$(3.1)
where the supremum is taken over all partitions \(-1=x_0<x_1<\cdots <x_n=1\) of the interval \(\Omega \).
Theorem 3.1
Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some nonnegative integer m, i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then, for each \(n\ge m+1\),
$$\begin{aligned} |a_n| \le \frac{2V_m}{\sqrt{2\pi (n-m)}} \prod _{j=1}^{m} \frac{1}{n-j+1/2}, \end{aligned}$$(3.2)
where the product is assumed to be one whenever \(m=0\).
Proof
We prove the inequality (3.2) by induction on m. Invoking (2.4) and using integration by parts, we obtain
where we have used Theorem 2.1, and the integral in the last equation is a Riemann-Stieltjes integral (see, e.g., [13, Chapter 12]). Furthermore, applying the inequality for Riemann-Stieltjes integrals (see, e.g., [13, Theorem 12.15]) and making use of Theorem 2.1, we have
This proves the case \(m=0\). In the case of \(m=1\), making use of (2.1) and employing integration by parts once again, we obtain
from which, applying once more the inequality for Riemann-Stieltjes integrals together with (2.3), we find that
This proves the case \(m=1\). In the case of \(m=2\), we can continue the above process to obtain
from which we infer that
This proves the case \(m=2\). For \(m\ge 3\), the repeated application of integration by parts brings in higher derivatives of f and corresponding higher variations up to \(V_m\). Hence we can obtain the desired result (3.2) and this ends the proof. \(\square \)
Remark 3.2
An explicit bound for the Legendre coefficients has been established in [18, Theorem 2.2]
where \(\overline{V}_m=\Vert f^{(m)}\Vert _{S}\) and \(\Vert \cdot \Vert _{S}\) is the weighted semi-norm defined by \(\Vert f\Vert _{S}=\int _{\Omega } (1-x^2)^{-1/4}|f{'}(x)|\textrm{d}x\). Comparing (3.2) and (3.3), it is easily seen that the weighted semi-norm of \(f^{(m)}(x)\) is replaced by the total variation of \(f^{(m)}(x)\), and (3.2) is better than (3.3) because \(V_m\le \overline{V}_m\). Moreover, the following bound was given in [25, Corollary 1]
Comparing (3.2) with (3.4), one can easily check that both results are about equally accurate in the sense that their ratio tends to one as \(n\rightarrow \infty \). However, our result (3.2) is more explicit and informative.
Example 3.3
We consider the following example
$$\begin{aligned} f(x) = |x - \theta |, \quad \theta \in (-1,1). \end{aligned}$$
It is clear that this function is absolutely continuous on \(\Omega \) and its derivative is of bounded variation on \(\Omega \). Moreover, direct calculation gives \(V_1=2\), and thus
The upper bound in (3.3) can be written as
Comparing \(\textrm{B}^{\textrm{New}}(n)\) with \(\textrm{B}^{\textrm{Old}}(n)\), it is easily verified that the former is always better than the latter for all \(n\ge 2\) and \(\theta \in (-1,1)\); in particular, the former remains bounded while the latter blows up as \(\theta \rightarrow \pm 1\). Figure 3 shows the exact Legendre coefficients and the bound \(\textrm{B}^{\textrm{New}}(n)\) for three values of \(\theta \). It can be seen that \(\textrm{B}^{\textrm{New}}(n)\) is quite sharp whenever \(\theta \) is not close to either endpoint and becomes a slight overestimate as \(\theta \rightarrow \pm 1\).
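Since this example has \(V_1=2\), the kink function \(f(x)=|x-\theta |\) is a representative test case (our choice for illustration). The sketch below computes its Legendre coefficients, splitting the integral at the kink so that each piece of the integrand is a polynomial; the coefficients then exhibit the \(O(n^{-3/2})\) decay predicted by Theorem 3.1 with \(m=1\) (the envelope constant 3 in the comment is a generous empirical choice, not the constant of the theorem):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def kink_coefficients(theta, kmax, quad_pts=200):
    """Legendre coefficients of f(x) = |x - theta|, integrating separately on
    [-1, theta] and [theta, 1] so the quadrature sees only polynomials."""
    xg, wg = leg.leggauss(quad_pts)
    a = np.zeros(kmax + 1)
    for lo, hi in ((-1.0, theta), (theta, 1.0)):
        xm = 0.5 * (hi - lo) * xg + 0.5 * (hi + lo)   # map nodes to [lo, hi]
        wm = 0.5 * (hi - lo) * wg
        fx = np.abs(xm - theta)
        for k in range(kmax + 1):
            a[k] += (k + 0.5) * np.dot(wm, fx * leg.Legendre.basis(k)(xm))
    return a

a = kink_coefficients(0.3, 40)
# empirically |a_n| <= 3 * n^{-3/2}, in line with Theorem 3.1 for m = 1
```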
Example 3.4
We consider the truncated power function
$$\begin{aligned} f(x) = (x-\theta )_{+}^{2} = \max \{x-\theta ,0\}^{2}, \end{aligned}$$
where \(\theta \in (-1,1)\). It is clear that f and \(f{'}\) are absolutely continuous on \(\Omega \) and \(f{''}\) is of bounded variation on \(\Omega \). Moreover, direct calculation gives \(V_2=2\), and thus
In Fig. 4 we illustrate the exact Legendre coefficients and the above bound for three values of \(\theta \). Clearly, our new bound is quite sharp whenever \(\theta \) is not close to either endpoint and becomes a slight overestimate as \(\theta \rightarrow \pm 1\).
In the following, we apply Theorem 3.1 to establish some new explicit error bounds for Legendre projections in the \(L^2\) and \(L^{\infty }\) norms.
Theorem 3.5
Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some nonnegative integer m, i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then the following hold:
-
(i)
For \(m=0,1,\ldots \) and \(n\ge m+1\),
$$\begin{aligned} \Vert f - f_n\Vert _{L^2(\Omega )} \le \frac{V_m}{\sqrt{\pi (m+1/2)}(n-m)^{m+1/2}}, \end{aligned}$$(3.7) -
(ii)
For \(m\ge 1\) and \(n\ge m+1\),
$$\begin{aligned} \Vert f - f_n\Vert _{L^{\infty }(\Omega )} \le \left\{ \begin{array}{ll} {\displaystyle \frac{4V_1}{\sqrt{2\pi (n-1)}}}, &{} m=1, \\ {\displaystyle \frac{2V_m/(m-1)}{\sqrt{2\pi (n+1-m)}}\prod _{k=1}^{m-1} \frac{1}{n-k+1/2}}, &{} {m\ge 2,} \end{array} \right. \end{aligned}$$(3.8)
and the product is assumed to be one whenever \(m=1\).
Proof
As for (3.7), recalling the orthogonal property of Legendre polynomials (1.1), we find that
Furthermore, using Theorem 3.1 we obtain
Taking the square root on both sides gives (3.7). As for (3.8), it follows by combining (3.2) in Theorem 3.1 with the fact that \(|P_k(x)|\le 1\). This ends the proof. \(\square \)
In Fig. 5 we show the actual error of \(f_n\) and the error bound (3.7) in the \(L^2\) norm as a function of n for two test functions. Clearly, we can see that the error bound (3.7) is optimal up to a constant factor.
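A concrete check of (3.7): for the test function \(f(x)=|x-0.3|\) (our choice; here \(m=1\) and \(V_1=2\)), the \(L^2\) error of \(f_n\) can be evaluated through Parseval's identity and compared with the right-hand side of (3.7):

```python
import numpy as np
from numpy.polynomial import legendre as leg

theta, V1 = 0.3, 2.0              # f(x) = |x - theta|, so V(f') = 2
N = 60                            # largest projection degree tested

xg, wg = leg.leggauss(200)
a = np.zeros(N + 1)
for lo, hi in ((-1.0, theta), (theta, 1.0)):   # split the integral at the kink
    xm = 0.5 * (hi - lo) * xg + 0.5 * (hi + lo)
    wm = 0.5 * (hi - lo) * wg
    fx = np.abs(xm - theta)
    for k in range(N + 1):
        a[k] += (k + 0.5) * np.dot(wm, fx * leg.Legendre.basis(k)(xm))

norm_f_sq = ((1 - theta) ** 3 + (1 + theta) ** 3) / 3.0   # int_{-1}^{1} f(x)^2 dx

def l2_error(n):
    # Parseval: ||f - f_n||^2 = ||f||^2 - sum_{k=0}^{n} 2 a_k^2 / (2k + 1)
    k = np.arange(n + 1)
    return np.sqrt(norm_f_sq - np.sum(2.0 * a[: n + 1] ** 2 / (2 * k + 1)))

def rhs_37(n, m=1):
    # right-hand side of (3.7) with V_m = V1
    return V1 / (np.sqrt(np.pi * (m + 0.5)) * (n - m) ** (m + 0.5))
```

Evaluating `l2_error(n)` against `rhs_37(n)` reproduces the behavior of Fig. 5: the bound holds with a modest constant gap.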
Remark 3.6
Recently, the following error bound was proved in [10, Theorem 3.4]
Comparing (3.7) with (3.9), it is easily verified that their ratio tends to one as \(n\rightarrow \infty \), and thus both bounds are almost identical for large n. When n is small, direct calculations show that (3.9) is slightly sharper than (3.7). On the other hand, our bound (3.7) is more explicit and informative.
Remark 3.7
In the case where f is piecewise analytic on \(\Omega \) and has continuous derivatives up to order \(m-1\) for some \(m\in \mathbb {N}\), the current author has proved in [19, Theorem 3] that the optimal rate of convergence of \(f_n\) is \(O(n^{-m})\). For such functions, however, the predicted rate of convergence by the error bound (3.8) is only \(O(n^{-m+1/2})\). Hence, the error bound (3.8) is suboptimal in the sense that it overestimates the actual error by a factor of \(n^{1/2}\).
We further ask: how can one derive an optimal error bound for \(f_n\) in the \(L^{\infty }\) norm? In the following, we establish a weighted inequality for the error of \(f_n\) in the \(L^{\infty }\) norm, together with an explicit and optimal error bound for \(f_n\) in the \(L^{\infty }\) norm under the condition that the maximum error of \(f_n\) is attained in the interior of \(\Omega \).
Theorem 3.8
Assume that \(f,f{'},\ldots ,f^{(m-1)}\) are absolutely continuous on \(\Omega \) and that \(f^{(m)}\) is of bounded variation on \(\Omega \) for some \(m\in \mathbb {N}\), i.e., \(V_m=\mathcal {V}(f^{(m)},\Omega )<\infty \). Then the following hold:
-
(i)
For \(n\ge m\), we have
$$\begin{aligned} \max _{x\in \Omega } (1-x^2)^{1/4}|f(x) - f_n(x)| \le \frac{2V_m}{m\pi } \prod _{j=1}^{m} \frac{1}{n-j+1/2}. \end{aligned}$$(3.10) -
(ii)
Assume that the maximum error of \(f_n\) is attained at \(\tau \in (-1,1)\) for \(n\ge n_0\). Then, for \(n\ge \max \{n_0,m\}\), we have
$$\begin{aligned} \Vert f - f_n\Vert _{L^{\infty }(\Omega )} \le \frac{2V_m}{\pi m(1-\tau ^2)^{1/4}} \prod _{j=1}^{m} \frac{1}{n-j+1/2}. \end{aligned}$$(3.11)
Proof
We first consider the proof of (3.10). Recall the Bernstein-type inequality satisfied by Legendre polynomials (see [1] or [12, Equation (18.14.7)])
$$\begin{aligned} (1-x^2)^{1/4} |P_n(x)| \le \sqrt{\frac{2}{\pi }} \left( n+\frac{1}{2}\right) ^{-1/2}, \quad x\in \Omega , \end{aligned}$$(3.12)
and the bound on the right-hand side is optimal in the sense that \((n+1/2)^{-1/2}\) cannot be improved to \((n+1/2+\epsilon )^{-1/2}\) for any \(\epsilon >0\) and the constant \((2/\pi )^{1/2}\) is best possible. Consequently, using the above inequality and Theorem 3.1, we deduce that
This proves (3.10). As for (3.11), since the maximum error of \(f_n\) is attained at \(x=\tau \), we have
Combining the last inequality with Theorem 3.1 and (3.12) and using a similar process as in (3.13) gives (3.11). This ends the proof. \(\square \)
Remark 3.9
For functions with interior singularities, from the pointwise error analysis developed in [20, 21] we know that the maximum error of \(f_n\) is actually determined by the errors at these interior singularities for moderate and large values of n. In this case, the error bound in (3.11) is optimal in the sense that it can not be improved with respect to n up to a constant factor; see Fig. 6 for an illustration.
In Fig. 6 we show the maximum error of \(f_n\) and the error bound (3.11) as a function of n for the functions \(f(x)=|x-0.2|\) and \(f(x)=(x-0.5)_{+}^2\). For these two functions, the maximum errors of \(f_n\) are attained at \(x=0.2\) and \(x=0.5\), respectively, for moderate and large values of n. Thus, the error bound in (3.11) can be applied to these two examples. We see from Fig. 6 that the error bound (3.11) is optimal with respect to n up to a constant factor.
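The setting of Fig. 6 can be reproduced along the following lines. The sketch takes \(f(x)=|x-\tau |\) with \(\tau =0.2\) (so \(m=1\), \(V_1=2\), and the maximum error sits at the interior kink for moderate n) and compares the computed maximum error of \(f_n\) with the right-hand side of (3.11):

```python
import numpy as np
from numpy.polynomial import legendre as leg

tau, V1, m = 0.2, 2.0, 1          # f(x) = |x - tau|; kink at x = tau

xg, wg = leg.leggauss(200)

def projection(n):
    """Degree-n Legendre projection of f, returned as a numpy Legendre series."""
    a = np.zeros(n + 1)
    for lo, hi in ((-1.0, tau), (tau, 1.0)):   # split the integral at the kink
        xm = 0.5 * (hi - lo) * xg + 0.5 * (hi + lo)
        wm = 0.5 * (hi - lo) * wg
        fx = np.abs(xm - tau)
        for k in range(n + 1):
            a[k] += (k + 0.5) * np.dot(wm, fx * leg.Legendre.basis(k)(xm))
    return leg.Legendre(a)

def max_error(n, grid=np.linspace(-1.0, 1.0, 4001)):
    return np.max(np.abs(np.abs(grid - tau) - projection(n)(grid)))

def rhs_311(n):
    # right-hand side of (3.11) with m = 1, V_1 = 2, tau = 0.2
    return 2 * V1 / (np.pi * m * (1 - tau ** 2) ** 0.25) / (n - 1 + 0.5)
```

For moderate n the computed maximum error stays below `rhs_311(n)` and decays like \(O(n^{-1})\), matching the optimality discussion above.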
4 Extensions
In this section we present two extensions of our results in Theorem 2.1, including Gegenbauer–Gauss–Lobatto (GGL) functions and optimal convergence rates of Legendre–Gauss–Lobatto interpolation and differentiation for analytic functions.
4.1 Gegenbauer–Gauss–Lobatto Functions
We introduce the Gegenbauer–Gauss–Lobatto (GGL) functions of the form
where \(C_{n}^{\lambda }(x)\) is the Gegenbauer polynomial of degree n defined in [12, Equation (18.5.9)] and \(\omega _{\lambda }(x)=(1-x^2)^{\lambda -1/2}\) is the Gegenbauer weight function. Moreover, by using [12, Equation (18.9.8)], they can also be written as
Notice that \(\phi _{n}^{\textrm{GGL}}(x)=\phi _{n}^{\textrm{LGL}}(x)\) whenever \(\lambda =1/2\) and thus \(\phi _{n}^{\textrm{GGL}}(x)\) can be viewed as a generalization of \(\phi _{n}^{\textrm{LGL}}(x)\). We are now ready to prove the following theorem.
Theorem 4.1
Let \(\phi _{n}^{\textrm{GGL}}(x)\) be the function defined in (4.1) or (4.2) and let \(\lambda >-1/2\) and \(\lambda \ne 0\).
-
(i)
\(|\phi _{n}^{\textrm{GGL}}(x)|\) is even and \(\phi _{n}^{\textrm{GGL}}(\pm 1)=0\) for all \(n\in \mathbb {N}\).
-
(ii)
For all \(n\in \mathbb {N}\), the derivative of \(\phi _{n}^{\textrm{GGL}}(x)\) is
$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x}\phi _{n}^{\textrm{GGL}}(x) = 2(n+\lambda )\omega _{\lambda }(x)C_{n}^{\lambda }(x). \end{aligned}$$(4.3) -
(iii)
For all \(n\in \mathbb {N}\), the differential recurrence relation of \(\phi _n^{\textrm{GGL}}(x)\) is
$$\begin{aligned} \phi _{n}^{\textrm{GGL}}(x)&= \frac{\textrm{d}}{\textrm{d}x} \left[ \frac{(n+1)}{2(n+\lambda +1)(n+2\lambda )} \phi _{n+1}^{\textrm{GGL}}(x) - \frac{(n+2\lambda -1) }{2n(n+\lambda -1)} \phi _{n-1}^{\textrm{GGL}}(x) \right] . \end{aligned}$$(4.4) -
(iv)
Let \(\nu =\lfloor (n+1)/2 \rfloor \) for all \(n\in \mathbb {N}\) and let \(x_{\nu }<\cdots <x_1\) be the zeros of \(C_n^{\lambda }(x)\) on the interval [0, 1]. Then, \(|\phi _{n}^{\textrm{GGL}}(x)|\) attains its local maximum values at these points \(\{x_k\}_{k=1}^{\nu }\) and
$$\begin{aligned} |\phi _{n}^{\textrm{GGL}}(x_1)|<|\phi _{n}^{\textrm{GGL}}(x_2)|<\cdots <|\phi _{n}^{\textrm{GGL}}(x_{\nu })|, \end{aligned}$$(4.5)
for \(\lambda >0\), and
$$\begin{aligned} |\phi _{n}^{\textrm{GGL}}(x_1)|>|\phi _{n}^{\textrm{GGL}}(x_2)|>\cdots >|\phi _{n}^{\textrm{GGL}}(x_{\nu })|, \end{aligned}$$(4.6)
for \(\lambda <0\).
-
(v)
For \(\lambda >0\), the maximum value of \(|\phi _{n}^{\textrm{GGL}}(x)|\) satisfies
$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| \le \mathcal {B}_n^{\lambda } := \left\{ \begin{array}{ll} {\displaystyle \frac{4(n+\lambda )}{n(n+2\lambda )} \frac{\Gamma (\frac{n+1}{2}+\lambda )}{\Gamma (\frac{n+1}{2})\Gamma (\lambda )}}, &{} {n=1,3,\ldots ,} \\ {\displaystyle \frac{2(n+\lambda )}{\sqrt{n(n+2\lambda )}} \frac{\Gamma (\frac{n}{2}+\lambda )}{\Gamma (\frac{n+2}{2})\Gamma (\lambda )}}, &{} {n=2,4,\ldots .} \end{array} \right. \end{aligned}$$(4.7)
For \(\lambda <0\), the maximum value of \(|\phi _{n}^{\textrm{GGL}}(x)|\) satisfies
$$\begin{aligned} \max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| = |\phi _{n}^{\textrm{GGL}}(x_1)| = O(n^{-1}), \quad n\gg 1. \end{aligned}$$(4.8)
Proof
As for (i), the assertion that \(|\phi _{n}^{\textrm{GGL}}(x)|\) is even follows from the symmetry of Gegenbauer polynomials (i.e., \(C_k^{\lambda }(-x)=(-1)^kC_k^{\lambda }(x)\) for all \(k=0,1,\ldots \)), and \(\phi _{n}^{\textrm{GGL}}(\pm 1)=0\) follows from (4.2). As for (ii), from [12, Equation (18.9.20)] we know that
The combination of (4.9) and (4.2) proves (4.3). As for (iii), it is a direct consequence of (4.1) and (4.3). Now we consider the proof of (iv). Since \(|\phi _{n}^{\textrm{GGL}}(x)|\) is even, we only need to consider the maximum values of \(|\phi _{n}^{\textrm{GGL}}(x)|\) at the nonnegative zeros of \(C_n^{\lambda }(x)\). Similar to the argument of Theorem 2.1, we introduce the following auxiliary function
Combining (4.2), (4.3) and (4.9) and after some calculations, we get
where we have used [12, Equation (18.9.7)] in the last step. Furthermore, invoking [12, Equation (18.9.8)] and [12, Equation (18.9.1)], and after some lengthy but elementary calculations, we obtain that
It is easily seen that \(\psi (x)\) is strictly decreasing on [0, 1] whenever \(\lambda >0\) and is strictly increasing on [0, 1] whenever \(\lambda <0\), and thus the inequalities (4.5) and (4.6) follow immediately. As for (v), we first consider the case \(\lambda >0\). From (iv) we infer that
In the case when n is odd, it is easily seen that \(x_{\nu }=0\) and thus
This proves the case of odd n. In the case when n is even, noticing that \(\psi (x)\) is strictly decreasing on the interval [0, 1], we obtain
This proves the case of even n. Finally, we consider the case of \(\lambda <0\). On the one hand, from (4.6) we obtain immediately that \(\max _{x\in \Omega }|\phi _{n}^{\textrm{GGL}}(x)| = |\phi _{n}^{\textrm{GGL}}(x_1)|\). On the other hand, from [15, Theorem 8.9.1] we obtain, for \(n\gg 1\), that
Combining these with (4.2) gives the desired estimate. This completes the proof. \(\square \)
As a direct consequence of Theorem 4.1, we have the following corollary.
Corollary 4.2
For \(\lambda \ge 1\), we have
Proof
It follows by combining (4.2) and (4.7). \(\square \)
Remark 4.3
Based on a Nicholson-type formula for Gegenbauer polynomials, Durand proved the following inequality for \(\lambda \ge 1\) (see [5, Equation (19)])
Comparing (4.14) with (4.13), it is easily seen that both bounds are the same whenever n is even. In the case of odd n, however, direct calculations show that
and thus we have derived an improved bound for \(\max _{x\in \Omega }|\omega _{\lambda }(x)C_{n}^{\lambda }(x)|\) whenever n is odd.
Remark 4.4
Making use of the asymptotic expansion of the ratio of gamma functions (see, e.g., [12, Equation (5.11.13)]), it follows that
Clearly, we see that \(\mathcal {B}_n^{\lambda }=O(n^{\lambda -1})\) for large n. Direct calculations show that the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k=1}^{\infty }\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k=1}^{\infty }\) are strictly decreasing whenever \(0<\lambda \le 1\) and the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k=2}^{\infty }\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k=1}^{\infty }\) are strictly increasing whenever \(\lambda \ge 1.2\). For \(\lambda \in (1,1.2)\), there exists a positive integer \(\varrho _{\lambda }\in \mathbb {N}\) such that the sequences \(\{\mathcal {B}_{2k-1}^{\lambda }\}_{k\ge \varrho _{\lambda }}\) and \(\{\mathcal {B}_{2k}^{\lambda }\}_{k\ge \varrho _{\lambda }}\) are strictly increasing.
4.2 Optimal Convergence Rates of Legendre–Gauss–Lobatto Interpolation and Differentiation for Analytic Functions
Legendre–Gauss–Lobatto interpolation is widely used in the numerical solution of differential and integral equations (see, e.g., [3, 6, 14]). Let \(p_n(x)\) be the unique polynomial of degree n that interpolates f(x) at the zeros of \(\phi _n^{\textrm{LGL}}(x)\); this polynomial is the LGL interpolant of degree n. If f is analytic inside and on the ellipse \(\mathcal {E}_{\rho }\) for some \(\rho >1\), convergence rates of LGL interpolation and differentiation in the maximum norm have been thoroughly studied in [26] with the help of Hermite's contour integral. In particular, the following error bounds were proved (setting \(\lambda =1/2\) in [26, Theorem 4.3]):
and
where \(\mathcal {K}\approx 1\) is a generic positive constant and \(\{x_j\}_{j=0}^{n}\) are the LGL points (i.e., the zeros of \(\phi _n^{\textrm{LGL}}(x)\)). It is clear that the above results imply that the rate of convergence of \(p_n(x)\) in the \(L^{\infty }\) norm is \(O(n^{3/2}\rho ^{-n})\) and that the maximum error of LGL spectral differentiation is \(O(n^{7/2}\rho ^{-n})\).
In the following, we shall improve the results (4.16) and (4.17) by using Theorem 2.1 and show that the factor \(n^{3/2}\) in (4.16) can actually be removed and the factor \(n^{7/2}\) in (4.17) can be improved to \(n^{3/2}\). We state our main results in the following theorem.
Theorem 4.5
If f is analytic inside and on the ellipse \(\mathcal {E}_{\rho }\) for some \(\rho >1\), then
and
where \(\mathcal {K}\) is a generic positive constant and \(\mathcal {K}\approx 1\) for \(n\gg 1\) and \(\textrm{d}(\Omega ,\mathcal {E}_{\rho })\) denotes the distance from \(\Omega \) to \(\mathcal {E}_{\rho }\).
Proof
From the Hermite integral formula [4, Theorem 3.6.1] we know that the remainder of the LGL interpolants can be written as
from which we can deduce immediately that
where we used (2.3) in the last step. Moreover, combining (4.20) with (2.4) we obtain
where we have used the fact that \(|P_k(x)|\le 1\) in the last step. In order to establish sharp error bounds for the LGL interpolation and differentiation, it is necessary to find the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) for \(z\in \mathcal {E}_{\rho }\). Owing to [23, Equation (5.14)] we infer that
and, after some calculations, we obtain that
and \(\mathcal {K}\approx 1\) for \(n\gg 1\). Combining this with (4.21) and (4.22) gives the desired results. This ends the proof. \(\square \)
Remark 4.6
It is easily seen that the rate of convergence of \(p_n(x)\) in the \(L^{\infty }\) norm is \(O(\rho ^{-n})\), and thus we have improved the existing result (4.16). Moreover, the rate of convergence of LGL spectral differentiation is \(O(n^{3/2}\rho ^{-n})\), and thus we have improved the existing result (4.17).
In the proof of Theorem 4.5, we have used an asymptotic estimate of the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) for \(z\in \mathcal {E}_{\rho }\). Now we provide a more detailed observation on this issue. By parameterizing the ellipse with \(z=(\rho e^{i\theta } + (\rho e^{i\theta })^{-1})/2\) with \(\rho >1\) and \(0\le \theta <2\pi \), we plot \(|\phi _n^{\textrm{LGL}}(z)|\) in Fig. 7 for several values of \(\rho \) and n. Clearly, we observe that the minimum value of \(|\phi _n^{\textrm{LGL}}(z)|\) is always attained at \(\theta =0,\pi \). This observation inspires us to raise the following conjecture:
Conjecture
For \(n\in \mathbb {N}\) and \(\rho >1\),
where \(z_0=\pm (\rho +\rho ^{-1})/2\).
To provide some insights into this conjecture, after some calculations we obtain
It is easily seen that the minimum values of \(\left| \phi _1^{\textrm{LGL}}(z) \right| \) and \(\left| \phi _2^{\textrm{LGL}}(z) \right| \) are always attained at \(\theta =0,\pi \), which confirms the above conjecture for \(n=1,2\). For \(n\ge 3\), however, \(\left| \phi _n^{\textrm{LGL}}(z)\right| \) will involve a rather lengthy expression and it would be infeasible to find the minimum value of \(\left| \phi _n^{\textrm{LGL}}(z) \right| \) from its explicit expression. We will pursue the proof of this conjecture in future work.
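The conjecture can also be probed numerically along the parameterization above. As before, we take \(\phi _n^{\textrm{LGL}}=P_{n+1}-P_{n-1}\) as a concrete realization (our assumption; the sign is immaterial for \(|\phi _n^{\textrm{LGL}}|\)):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi_lgl(n, z):
    # assumed realization phi_n = P_{n+1} - P_{n-1}; legval accepts complex z
    return (leg.legval(z, [0.0] * (n + 1) + [1.0])
            - leg.legval(z, [0.0] * (n - 1) + [1.0]))

rho = 1.5
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)  # grid contains 0 and pi
u = rho * np.exp(1j * theta)
z = 0.5 * (u + 1.0 / u)

for n in (2, 6, 12):
    vals = np.abs(phi_lgl(n, z))
    idx = int(np.argmin(vals))
    # as in Fig. 7, the minimum shows up at theta = 0 or pi (indices 0 and 200)
```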
Example 4.7
We consider the following Runge function
$$\begin{aligned} f(x) = \frac{1}{1+a^2x^2}, \quad a>0. \end{aligned}$$
It is easily verified that this function has a pair of poles at \(z=\pm i/a\) and thus the rate of convergence of \(p_n(x)\) is \(O(\rho ^{-n})\) with \(\rho =(1+\sqrt{a^2+1})/a\). In Fig. 8 we illustrate the maximum errors of LGL interpolants for two values of a. In our implementation, the LGL interpolants \(p_n(x)\) are computed by using the second barycentric formula
where \(\{w_j\}_{j=0}^{n}\) are the barycentric weights of LGL points and the algorithm for computing these barycentric weights is described in [24]. Clearly, we see that numerical results are in good agreement with our theoretical analysis.
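To make the experiment concrete, the second barycentric formula is straightforward to implement. The following Python sketch is our own illustrative stand-in, not the paper's code: the LGL points are taken as \(\pm 1\) together with the roots of \(P_n'\), and the barycentric weights are computed by the generic \(O(n^2)\) product formula \(w_j = 1/\prod _{k\ne j}(x_j-x_k)\) (up to rescaling) rather than the specialized algorithm of [24].

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_points(n):
    # LGL points: the endpoints -1, 1 together with the n-1 roots of P_n'.
    c = np.zeros(n + 1)
    c[n] = 1.0                          # coefficients of P_n in the Legendre basis
    return np.sort(np.concatenate(([-1.0, 1.0], leg.legroots(leg.legder(c)))))

def bary_weights(x):
    # Generic O(n^2) product formula; a stand-in for the algorithm of [24].
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(len(x))])
    return w / np.abs(w).max()          # rescale to guard against over/underflow

def bary_eval(x, w, fx, t):
    # Second (true) barycentric formula evaluated at the points t.
    d = t[:, None] - x[None, :]
    hit, node = np.nonzero(d == 0)      # evaluation points coinciding with nodes
    d[hit, node] = 1.0                  # placeholder to avoid division by zero
    p = (w * fx / d).sum(axis=1) / (w / d).sum(axis=1)
    p[hit] = fx[node]                   # interpolation is exact at the nodes
    return p

a = 5.0
f = lambda x: 1.0 / (1.0 + a**2 * x**2)   # Runge function with poles at +-i/a
x = lgl_points(50)
w = bary_weights(x)
t = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(bary_eval(x, w, f(x), t) - f(t)))
rho = (1.0 + np.sqrt(a**2 + 1.0)) / a     # predicted geometric rate O(rho^{-n})
```

For \(a=5\) one finds \(\rho \approx 1.22\), and the measured maximum error at \(n=50\) is already several orders of magnitude below 1, in line with the predicted rate \(O(\rho ^{-n})\).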
In Fig. 9 we illustrate the maximum errors of LGL spectral differentiation. In our implementation, we first compute the barycentric weights \(\{w_j\}_{j=0}^{n}\) by the algorithm in [24] and then compute the differentiation matrix D by
$$\begin{aligned} D_{jk} = \frac{w_k/w_j}{x_j-x_k}, \quad j\ne k, \qquad D_{jj} = -\sum _{k\ne j} D_{jk}. \end{aligned}$$
Finally, \((p_n'(x_0),\ldots ,p_n'(x_n))^T\) is evaluated by multiplying the differentiation matrix D with \((f(x_0),\ldots ,f(x_n))^T\). From (4.19) we know that the predicted rate of convergence of LGL spectral differentiation is \(O(n^{3/2}((1+\sqrt{a^2+1})/a)^{-n})\). We observe from Fig. 9 that the errors of LGL spectral differentiation are consistent with this predicted rate.
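The differentiation matrix can be assembled in a few lines. The Python sketch below is a minimal illustration, not the paper's implementation: for self-containedness it uses Chebyshev–Lobatto points, whose barycentric weights \(w_j=(-1)^j\) (halved at the endpoints) are known in closed form, as a stand-in for LGL points; the construction of D is unchanged once LGL points and the weights of [24] are substituted.

```python
import numpy as np

def cheb_lobatto(n):
    # Chebyshev-Lobatto points x_j = cos(j*pi/n) with barycentric weights
    # w_j = (-1)^j, halved at the two endpoints; a closed-form stand-in
    # for LGL points, whose weights require the algorithm of [24].
    x = np.cos(np.pi * np.arange(n + 1) / n)
    w = (-1.0) ** np.arange(n + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def diff_matrix(x, w):
    # Off-diagonal entries D_jk = (w_k/w_j)/(x_j - x_k); the identity added
    # to the denominator only prevents division by zero on the diagonal,
    # which is then overwritten by the negative row sums (each row of D
    # must annihilate the constant function, since d/dx 1 = 0).
    D = (w[None, :] / w[:, None]) / (x[:, None] - x[None, :] + np.eye(len(x)))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

x, w = cheb_lobatto(10)
D = diff_matrix(x, w)
# Sanity check: the degree-10 interpolant of x^3 is x^3 itself, so
# D @ x**3 reproduces 3x^2 up to rounding error.
err = np.max(np.abs(D @ x**3 - 3 * x**2))
```

Multiplying D with the vector of function values then yields the derivative values of the interpolant at the nodes, exactly as in the experiment of Fig. 9.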
5 Conclusion
In this paper, we have studied the error analysis of Legendre approximations of differentiable functions from a new perspective. We introduced the sequence of LGL polynomials \(\{\phi _0^{\textrm{LGL}}(x),\phi _1^{\textrm{LGL}}(x),\ldots \}\) and proved their theoretical properties. Based on these properties, we derived a new explicit bound for the Legendre coefficients of differentiable functions. We then obtained an optimal error bound for Legendre projections in the \(L^2\) norm and an optimal error bound in the \(L^{\infty }\) norm under the condition that the maximum error of Legendre projections is attained in the interior of \(\Omega \). Numerical examples were provided to demonstrate the sharpness of our new results. Finally, we presented two extensions of our analysis: Gegenbauer–Gauss–Lobatto (GGL) functions \(\{\phi _1^{\textrm{GGL}}(x),\phi _2^{\textrm{GGL}}(x),\ldots \}\) and optimal convergence rates of Legendre–Gauss–Lobatto interpolation and differentiation for analytic functions.
In future work, we will explore the extensions of the current study to a more general setting, such as finding some new upper bounds for the weighted Jacobi polynomials (see, e.g., [9]) and establishing some sharp error bounds for Gegenbauer approximations of differentiable functions.
References
Antonov, V.A., Holševnikov, K.V.: An estimate of the remainder in the expansion of the generating function for the Legendre polynomials (Generalization and improvement of Bernstein’s inequality). Vestnik Leningrad Univ. Math. 13, 163–166 (1981)
Babuška, I., Hakula, H.: Pointwise error estimate of the Legendre expansion: the known and unknown features. Comput. Methods Appl. Mech. Eng. 345(1), 748–773 (2019)
Canuto, C., Hussaini, M.Y., Quarteroni, A., Zang, T.A.: Spectral Methods: Fundamentals in Single Domains. Springer, New York (2006)
Davis, P.J.: Interpolation and Approximation. Dover Publications, New York (1975)
Durand, L.: Nicholson-Type Integrals for Products of Gegenbauer Functions and Related Topics. Theory and Application of Special Functions, pp. 353–374. Academic Press, New York (1975)
Ern, A., Guermond, J.-L.: Finite Elements I: Approximation and Interpolation. Texts in Applied Mathematics, vol. 72. Springer, Cham (2021)
Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Oxford University Press, London (2004)
Jackson, D.: The Theory of Approximation, vol. 11. American Mathematical Society Colloquium Publications, New York (1930)
Krasikov, I.: An upper bound on Jacobi polynomials. J. Approx. Theory 149, 116–130 (2007)
Liu, W.-J., Wang, L.-L., Wu, B.-Y.: Optimal error estimates for Legendre expansions of singular functions with fractional derivatives of bounded variation. Adv. Comput. Math. 47, 79 (2021)
Mason, J.C., Handscomb, D.C.: Chebyshev Polynomials. Chapman and Hall, Boca Raton (2003)
Olver, F.W.J., Lozier, D.W., Boisvert, R.F., Clark, C.W.: NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge (2010)
Protter, M.H., Morrey, C.B.: A First Course in Real Analysis, 2nd edn. Springer, New York (1991)
Shen, J., Tang, T., Wang, L.-L.: Spectral Methods: Algorithms, Analysis and Applications. Springer, Heidelberg (2011)
Szegő, G.: Orthogonal Polynomials, vol. 23. American Mathematical Society, Providence (1975)
Trefethen, L.N.: Approximation Theory and Approximation Practice, Extended Edition. SIAM, Philadelphia (2019)
Wang, H.-Y.: On the optimal estimates and comparison of Gegenbauer expansion coefficients. SIAM J. Numer. Anal. 54(3), 1557–1581 (2016)
Wang, H.-Y.: A new and sharper bound for Legendre expansion of differentiable functions. Appl. Math. Lett. 85, 95–102 (2018)
Wang, H.-Y.: How much faster does the best polynomial approximation converge than Legendre projections? Numer. Math. 147, 481–503 (2021)
Wang, H.-Y.: Optimal rates of convergence and error localization of Gegenbauer projections. IMA J. Numer. Anal. (2022). https://doi.org/10.1093/imanum/drac047
Wang, H.-Y.: Analysis of error localization of Chebyshev spectral approximations. SIAM J. Numer. Anal. 61(2), 952–972 (2023)
Wang, H.-Y., Xiang, S.-H.: On the convergence rates of Legendre approximation. Math. Comput. 81(278), 861–877 (2012)
Wang, H.-Y., Zhang, L.: Jacobi polynomials on the Bernstein ellipse. J. Sci. Comput. 75, 457–477 (2018)
Wang, H.-Y., Huybrechs, D., Vandewalle, S.: Explicit barycentric weights for polynomial interpolation in the roots or extrema of classical orthogonal polynomials. Math. Comput. 83(290), 2893–2914 (2014)
Xiang, S.-H., Liu, G.-D.: Optimal decay rates on the asymptotics of orthogonal polynomial expansions for functions of limited regularities. Numer. Math. 145, 117–148 (2020)
Xie, Z.-Q., Wang, L.-L., Zhao, X.-D.: On exponential convergence of Gegenbauer interpolation and spectral differentiation. Math. Comput. 82(282), 1017–1036 (2013)
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant 11671160. The author wishes to thank the editor and two anonymous referees for their valuable comments on the manuscript.
Communicated by Arieh Iserles.
Wang, H. New Error Bounds for Legendre Approximations of Differentiable Functions. J Fourier Anal Appl 29, 42 (2023). https://doi.org/10.1007/s00041-023-10024-4
Keywords
- Legendre approximations
- Differentiable functions
- Legendre coefficients
- Legendre–Gauss–Lobatto functions
- Optimal convergence rates