Abstract
Polynomial approximation is studied in the Sobolev space \(W_p^r(w_{\alpha ,\beta })\) that consists of functions whose r-th derivatives are in weighted \(L^p\) space with the Jacobi weight function \(w_{\alpha ,\beta }\). This requires simultaneous approximation of a function and its consecutive derivatives up to s-th order with \(s \le r\). We provide sharp error estimates given in terms of \(E_n(f^{(r)})_{L^p(w_{\alpha ,\beta })}\), the error of best approximation to \(f^{(r)}\) by polynomials in \(L^p(w_{\alpha ,\beta })\), and an explicit construction of the polynomials that approximate simultaneously with the sharp error estimates.
1 Introduction
Polynomial approximation on a finite interval is a classical problem at the center of approximation theory. The purpose of this paper is to consider simultaneous approximation of a function and its derivatives by polynomials on an interval in \(L^p\) norms defined with respect to a Jacobi weight function. Although this problem has been studied by several researchers, our results are new in several aspects.
Let \(w_{{\alpha },{\beta }}\) be the Jacobi weight function defined by \(w_{{\alpha },{\beta }}(x) : = (1-x)^{\alpha }(1+x)^{\beta }\) for \({\alpha },{\beta }> -1\) and \(x \in (-1,1)\). For \(1 \le p < \infty \), define
$$\begin{aligned} \Vert f\Vert _{L^p(w_{{\alpha },{\beta }})} : = \left( \int _{-1}^1 |f(x)|^p w_{{\alpha },{\beta }}(x)\, dx\right) ^{1/p} \end{aligned}$$
and, for \(p = \infty \), define this norm as the usual uniform norm \(\Vert f\Vert _\infty \). For \(r \in {\mathbb N}\), let \(C^r[-1,1]\) denote the space of functions that are r times continuously differentiable on \([-1,1]\). For \(1\le p < \infty \), let \(W_p^r(w_{{\alpha },{\beta }})\) be the Sobolev space
$$\begin{aligned} W_p^r(w_{{\alpha },{\beta }}) : = \left\{ f \in L^p(w_{{\alpha },{\beta }}): f^{(r)} \in L^p(w_{{\alpha },{\beta }}) \right\} \end{aligned}$$
and, for \(p = \infty \), define this space as \(C^r[-1,1]\). We define the norm of \(W_p^r(w_{{\alpha },{\beta }})\) by
$$\begin{aligned} \Vert f\Vert _{W_p^r(w_{{\alpha },{\beta }})} : = \left( \sum _{k=0}^r \Vert f^{(k)}\Vert _{L^p(w_{{\alpha },{\beta }})}^p \right) ^{1/p}. \end{aligned}$$
For \(n \in {\mathbb N}\), let \(\Pi _n\) denote the space of polynomials of degree at most n in one variable. The standard error of best approximation by polynomials in \(\Pi _n\) is defined by
$$\begin{aligned} E_n(f)_{L^p(w_{{\alpha },{\beta }})} : = \inf _{g \in \Pi _n} \Vert f - g\Vert _{L^p(w_{{\alpha },{\beta }})}. \end{aligned}$$
The characterization of this quantity via an appropriate modulus of smoothness lies at the center of Approximation Theory and is widely studied; see, for example, [5, 6]. For \(p=2\), the n-th partial sum \(S_n^{{\alpha },{\beta }} f\) of the Fourier–Jacobi series satisfies
$$\begin{aligned} E_n(f)_{L^2(w_{{\alpha },{\beta }})} = \Vert f - S_n^{{\alpha },{\beta }} f\Vert _{L^2(w_{{\alpha },{\beta }})}. \end{aligned}$$
However, using \(S_n^{{\alpha },{\beta }} f\) for approximation in \(W_2^r (w_{{\alpha },{\beta }})\) gives a much weaker result than optimal (cf. [2, 7]), which will be discussed in Sect. 2 below.
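As a quick numerical illustration (a sketch of ours, not part of the paper), one can compute \(S_n^{{\alpha },{\beta }} f\) with the classical normalization \(P_n^{({\alpha },{\beta })}\) available in scipy and observe that its \(L^2(w_{{\alpha },{\beta }})\) error is smaller than that of any other polynomial of the same degree, here a Taylor polynomial; the choices \(f = e^x\), \({\alpha }=1/2\), \({\beta }=0\) are arbitrary.

```python
import numpy as np
from math import factorial
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def h_classical(n, a, b):
    # squared L^2(w_{a,b}) norm of the classical Jacobi polynomial P_n^{(a,b)}
    return np.exp((a + b + 1) * np.log(2.0) - np.log(2*n + a + b + 1)
                  + gammaln(n + a + 1) + gammaln(n + b + 1)
                  - gammaln(n + a + b + 1) - gammaln(n + 1))

def partial_sum(f, n, a, b, m=80):
    # S_n f in the classical normalization, via an m-point Gauss-Jacobi rule
    t, w = roots_jacobi(m, a, b)
    coef = [np.sum(w * f(t) * eval_jacobi(k, a, b, t)) / h_classical(k, a, b)
            for k in range(n + 1)]
    return lambda x: sum(c * eval_jacobi(k, a, b, x) for k, c in enumerate(coef))

def err2(f, g, a, b, m=80):
    # L^2(w_{a,b}) distance between f and g, again by quadrature
    t, w = roots_jacobi(m, a, b)
    return np.sqrt(np.sum(w * (f(t) - g(t)) ** 2))

a, b = 0.5, 0.0                  # arbitrary Jacobi parameters
f = np.exp
S8 = partial_sum(f, 8, a, b)
taylor8 = lambda x: sum(x**k / factorial(k) for k in range(9))  # same degree
assert err2(f, S8, a, b) < err2(f, taylor8, a, b)   # S_n is the minimizer
```

The quadrature is exact on \(\Pi _{2m-1}\), so for smooth f the computed coefficients carry only negligible aliasing.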
Throughout this paper, we denote by c a generic constant, independent of n, whose value may vary from line to line. We prove two types of results for approximation in the Sobolev space.
Theorem 1.1
Let \({\alpha },{\beta }> -1\). Assume \(f\in W^s_p(w_{{\alpha },{\beta }})\) if \(1 \le p < \infty \) or \(f \in C^s[-1,1]\) if \( p =\infty \). Then there exists a polynomial \(p_n \in \Pi _{2n}\) such that
Theorem 1.2
Let \({\alpha },{\beta }> -1\). Assume \(f\in W^s_p(w_{{\alpha },{\beta }})\) if \(1 \le p < \infty \) or \(f \in C^s[-1,1]\) if \( p =\infty \). Then there exists a polynomial \(p_n \in \Pi _{2n}\) such that
provided either \({\alpha }=0\) or \({\beta }=0\).
Evidently, the estimate (1.3) is stronger than the estimate (1.2) but it holds under more restrictive conditions. Moreover, (1.3) is sharp; in fact, the order of the estimate is sharp for each fixed k.
The estimate (1.2) provides a sharp estimate for the error of best polynomial approximation in the Sobolev norm. It is known [13] that, for \(r \in {\mathbb N}\),
so that the right-hand sides of (1.2) and (1.3) can be stated with \(E_n(f^{(s)})_{L^p(w_{{\alpha },{\beta }})}\) replaced by \(n^{-r+s} \Vert f^{(r)}\Vert _{L^p(w_{{\alpha },{\beta }})}\). In the case \(p=2\), estimates in the form
have been established and used in the spectral method for the numerical solution of differential equations in some cases; more precisely, such an estimate was first established in [2, 3] for the Chebyshev and Legendre polynomials (\({\alpha }={\beta }= -1/2\) or 0) when \(s =1\), and later established for general \({\alpha },{\beta }\) and \(s=1\) by several researchers; see [8, 15] and the references therein. In later works, the norm in the left-hand side of (1.5) is often replaced by
and the norm in the right-hand side is replaced by \(\Vert f^{(r)}\Vert _{L^2(w_{{\alpha }+r,{\beta }+r})}\), which we call the \(*\)-version. By (1.4), our estimates can be stated in terms of the norm of \(f^{(r)}\) for \(r \ge s\), so that (1.2) is stronger than (1.5) and offers an estimate somewhat different from the \(*\)-version of (1.5).
The estimate (1.3) is what is known as simultaneous approximation in the Approximation Theory community, where it is folklore that each additional derivative reduces the order of approximation by \(n^{-1}\). However, in the literature such estimates are usually established with a modulus of smoothness of \(f^{(r)}\) in place of \(E_n(f^{(r)})_{L^p(w_{{\alpha },{\beta }})}\) in (1.3) (cf. [9, 10]).
The polynomial \(p_n\) in the theorems can be given by a simple explicit formula. For \(p=2\), it is the n-th partial sum of the Fourier series in the Sobolev orthogonal polynomials in \(W_2^s(w_{{\alpha },{\beta }})\), which are polynomials orthogonal with respect to an inner product that involves derivatives. It should be pointed out, however, that this fact does not follow from the usual Hilbert space argument, since the norm of \(W_2^s(w_{{\alpha },{\beta }})\) does not arise as the square root of the inner product that defines the Sobolev orthogonality. Sobolev orthogonal polynomials have been studied extensively in the special function community (cf. [12] and the references therein), but not their orthogonal series; in the spectral method community they are often studied with zero boundary conditions at the endpoints of the interval. These polynomials are usually given in terms of the Jacobi polynomials with negative integer parameters, which requires appropriate extensions that can be rather delicate (cf. [1, 4, 8, 11, 18]). We shall give a more direct definition of the Sobolev orthogonal polynomials in \(W_2^s(w_{{\alpha },{\beta }})\) that does not require such extensions.
The paper is organized as follows. In the following section, we discuss the approximation behavior of the \(L^2\) partial sum operator \(S_n^{{\alpha },{\beta }} f\) in the Sobolev space, which gives a suboptimal result. In Sect. 3, we consider a Sobolev inner product in \(W_2^r(w_{{\alpha },{\beta }})\) and define a family of its orthogonal polynomials, for all \({\alpha },{\beta }> -1\), by an elegant formula that is more suitable for studying orthogonal series in terms of them. Approximation by polynomials and orthogonal series is studied in the following two sections. In particular, more elaborate versions of Theorems 1.1 and 1.2 will be established in Sects. 4 and 5, respectively.
2 Jacobi Polynomials and Fourier–Jacobi Series
For \({\alpha },{\beta }>-1\), the Jacobi polynomials are defined by [16, (4.21.2)]
$$\begin{aligned} P_n^{({\alpha },{\beta })}(x) = \left( {\begin{array}{c}n+{\alpha }\\ n\end{array}}\right) {}_2F_1\left( -n, n+{\alpha }+{\beta }+1; {\alpha }+1; \frac{1-x}{2}\right) \end{aligned}$$
in terms of the hypergeometric function \({}_2F_1\). For convenience, we shall define
$$\begin{aligned} J_n^{{\alpha },{\beta }}(x) : = \frac{2^n}{(n+{\alpha }+{\beta }+1)_n}\, P_n^{({\alpha },{\beta })}(x), \end{aligned}$$
so that \(J_n^{{\alpha },{\beta }}\) has leading term \(x^n/n!\).
One advantage of this normalization is the following identity, by [16, (4.5.5)],
$$\begin{aligned} \frac{d}{dx} J_n^{{\alpha },{\beta }}(x) = J_{n-1}^{{\alpha }+1,{\beta }+1}(x). \end{aligned}$$
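The normalization and the derivative identity can be checked symbolically. The sketch below assumes, as reconstructed here, that \(J_n^{{\alpha },{\beta }}\) is the classical \(P_n^{({\alpha },{\beta })}\) rescaled so that its leading term is \(x^n/n!\); the helper J is ours, not notation from the paper.

```python
import sympy as sp

x = sp.symbols('x')

def J(n, a, b):
    # assumed normalization: classical P_n^{(a,b)} rescaled so that the
    # leading term becomes x^n / n!
    return 2**n / sp.rf(n + a + b + 1, n) * sp.jacobi(n, a, b, x)

# the leading coefficient is 1/n! ...
assert sp.Poly(J(4, 0, 0), x).LC() == sp.Rational(1, 24)

# ... and differentiation lowers the degree while shifting both parameters up
for n in range(1, 6):
    for a, b in [(0, 0), (sp.Rational(1, 2), sp.Rational(-1, 3))]:
        assert sp.expand(sp.diff(J(n, a, b), x) - J(n - 1, a + 1, b + 1)) == 0
```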
These polynomials are orthogonal with respect to the inner product
$$\begin{aligned} {\langle }f, g{\rangle }_{{\alpha },{\beta }} : = \int _{-1}^1 f(x) g(x) w_{{\alpha },{\beta }}(x)\, dx. \end{aligned}$$
The Fourier orthogonal expansion of f in \(L^2(w_{{\alpha },{\beta }})\) is defined by
$$\begin{aligned} f = \sum _{n=0}^\infty \widehat{f}_n^{{\alpha },{\beta }} J_n^{{\alpha },{\beta }}, \qquad \widehat{f}_n^{{\alpha },{\beta }} : = \frac{{\langle }f, J_n^{{\alpha },{\beta }}{\rangle }_{{\alpha },{\beta }}}{h_n^{{\alpha },{\beta }}}, \end{aligned}$$
and \(h_n^{{\alpha },{\beta }}:= {\langle }J_n^{{\alpha },{\beta }}, J_n^{{\alpha },{\beta }}{\rangle }_{{\alpha },{\beta }}\) is given by [16, (4.3.3)]
The n-th partial sum of this expansion, defined by
$$\begin{aligned} S_n^{{\alpha },{\beta }} f : = \sum _{k=0}^n \widehat{f}_k^{{\alpha },{\beta }} J_k^{{\alpha },{\beta }}, \end{aligned}$$
is the least-squares polynomial of degree n; that is, (1.1) holds. The operator \(S_n^{{\alpha },{\beta }}\) can be written as a linear integral operator.
For approximation in \(L^p(w_{{\alpha },{\beta }})\), \(p\ne 2\), we can define a near best approximation operator as follows. We call \(\eta \) an admissible function if \(\eta \) is a \(C^\infty \) function on \({\mathbb R}_+\) satisfying \(\eta (t) = 1\) for \(0\le t \le 1\) and \(\eta (t) =0\) for \(t \ge 2\). For an admissible \(\eta \), define
$$\begin{aligned} V_n^{{\alpha },{\beta }} f : = \sum _{k=0}^{2n} \eta \left( \frac{k}{n}\right) \widehat{f}_k^{{\alpha },{\beta }} J_k^{{\alpha },{\beta }}. \end{aligned}$$
It is well known that \(V_{n}^{{\alpha },{\beta }}\) defines a bounded linear operator on \(L^p(w_{{\alpha },{\beta }})\) for \(1 \le p \le \infty \) and it preserves polynomials up to degree n, that is, \(V_{n}^{{\alpha },{\beta }} f = f\) for \(f \in \Pi _n\); consequently, the following theorem holds (see, for example, [17]).
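A minimal sketch of the operator \(V_n^{{\alpha },{\beta }}\), assuming the standard de la Vallée Poussin form \(V_n^{{\alpha },{\beta }} f = \sum _{k=0}^{2n} \eta (k/n)\, \widehat{f}_k^{{\alpha },{\beta }} J_k^{{\alpha },{\beta }}\) (an assumption of ours, since the defining display is not reproduced above); the particular cutoff eta and the parameter values are illustrative choices. The check at the end verifies that polynomials of degree at most n are preserved.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def eta(t):
    # an admissible C^infinity cutoff: equal to 1 on [0,1], 0 on [2,infinity)
    def bump(s):
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return bump(2.0 - t) / (bump(2.0 - t) + bump(t - 1.0))

def h(n, a, b):
    # squared L^2(w_{a,b}) norm of the classical P_n^{(a,b)}
    return np.exp((a + b + 1) * np.log(2.0) - np.log(2*n + a + b + 1)
                  + gammaln(n + a + 1) + gammaln(n + b + 1)
                  - gammaln(n + a + b + 1) - gammaln(n + 1))

def V(f, n, a, b, x, m=120):
    # de la Vallee Poussin mean: eta(k/n) = 1 for k <= n, = 0 for k >= 2n
    t, w = roots_jacobi(m, a, b)
    out = np.zeros_like(np.asarray(x, dtype=float))
    for k in range(2 * n):
        c = float(eta(k / n)) * np.sum(w * f(t) * eval_jacobi(k, a, b, t)) / h(k, a, b)
        out = out + c * eval_jacobi(k, a, b, x)
    return out

a, b = 0.5, -0.25                # arbitrary parameters
p = lambda y: y**5 + 2*y         # a polynomial of degree n = 5
xs = np.linspace(-1, 1, 7)
assert np.max(np.abs(V(p, 5, a, b, xs) - p(xs))) < 1e-8   # V_n p = p on Pi_n
```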
Theorem 2.1
Let \({\alpha },{\beta }> -1\). For \(f\in L^p(w_{{\alpha },{\beta }})\) if \(1 \le p < \infty \), or \(f \in C[-1,1]\) if \(p = \infty \),
$$\begin{aligned} \Vert f - V_n^{{\alpha },{\beta }} f\Vert _{L^p(w_{{\alpha },{\beta }})} \le c\, E_n(f)_{L^p(w_{{\alpha },{\beta }})}. \end{aligned}$$
We will also need a Jackson-type estimate for the error of best approximation. Let \(\phi (x):= \sqrt{1-x^2}\). The following theorem was established in [13].
Theorem 2.2
Let \({\alpha },{\beta }> -1\). For \(f \in W_p^r(w_{{\alpha },{\beta }})\) if \(1 \le p < \infty \), or \(f \in C^r[-1,1]\) if \(p = \infty \),
$$\begin{aligned} E_n(f)_{L^p(w_{{\alpha },{\beta }})} \le c\, n^{-r} \Vert \phi ^r f^{(r)}\Vert _{L^p(w_{{\alpha },{\beta }})}. \end{aligned}$$
In the rest of this section, we consider the approximation behavior of \(S_n^{{\alpha },{\beta }} f\) in \(L^2(w_{{\alpha },{\beta }})\) and in \(W_2^r(w_{{\alpha },{\beta }})\). Some of the results below are no doubt known, but they provide a contrast to our later development and our proofs are simple. We start with a lemma that is suggestive for our later study. Let \(\partial \) denote the differentiation operator.
Lemma 2.3
Let \({\alpha },{\beta }>-1\). For \(n =1,2,\ldots \),
Proof
It is well-known that the Jacobi polynomials are eigenfunctions of a second order differential operator [16, (4.21.1)]
where \({\lambda }_n = n(n+{\alpha }+{\beta }+1)\). Integrating by parts and applying this identity, we obtain by (2.2) that
Setting \(f = J_{n+1}^{{\alpha },{\beta }}\), it follows readily from the definition that
Consequently, by (2.2) again, we see that
This completes the proof. \(\square \)
Theorem 2.4
Let \({\alpha },{\beta }> -1\) and \(r \in {\mathbb N}\). For \(f \in L^2(w_{{\alpha },{\beta }})\) such that \(f^{(r)} \in L^2(w_{{\alpha }+r,{\beta }+r})\),
Proof
By the Parseval identity, (2.7) and the formula for \(h_k^{{\alpha },{\beta }}\),
where we have used the Parseval identity again at the last step. Iterating this inequality proves the stated result. \(\square \)
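The Parseval identity underlying the proof has an exact discrete analogue that is easy to test: with an m-point Gauss–Jacobi rule, the quadrature norm of \(f - S_n^{{\alpha },{\beta }} f\) equals the coefficient tail \(\sum _{k>n} |\widehat{f}_k|^2 h_k\) of the aliased degree \(m-1\) expansion. The snippet is ours (with arbitrary f, \({\alpha }\), \({\beta }\)) and uses the classical normalization from scipy.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def h(n, a, b):
    # squared L^2(w_{a,b}) norm of the classical P_n^{(a,b)}
    return np.exp((a + b + 1) * np.log(2.0) - np.log(2*n + a + b + 1)
                  + gammaln(n + a + 1) + gammaln(n + b + 1)
                  - gammaln(n + a + b + 1) - gammaln(n + 1))

a, b, n, m = 0.5, -0.25, 6, 60          # arbitrary choices
t, w = roots_jacobi(m, a, b)
fv = np.exp(np.sin(3 * t))              # values of a smooth f at the nodes
c = np.array([np.sum(w * fv * eval_jacobi(k, a, b, t)) / h(k, a, b)
              for k in range(m)])       # aliased Fourier-Jacobi coefficients
Snv = sum(c[k] * eval_jacobi(k, a, b, t) for k in range(n + 1))

lhs = np.sum(w * (fv - Snv) ** 2)       # ||f - S_n f||^2 in the discrete norm
rhs = sum(c[k] ** 2 * h(k, a, b) for k in range(n + 1, m))   # coefficient tail
assert abs(lhs - rhs) < 1e-8 * max(lhs, 1e-30)               # Parseval, discretely
```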
The identity (2.6) allows us to derive error estimates for simultaneous approximation by \(S_n^{{\alpha },{\beta }} f\). For comparison with our later results, we formulate the following corollary.
Corollary 2.5
Let \({\alpha },{\beta }> -1\) and \(r, s \in {\mathbb N}\). For \(f \in L^2(w_{{\alpha }+r,{\beta }+r})\) such that \(f^{(r)} \in L^2(w_{{\alpha },{\beta }})\) and \(n \ge r \ge s\),
Proof
By (2.6), the left-hand side of (2.9) can be rewritten as \(E_{n-k} (f^{(k)})_{L^2(w_{{\alpha }+k,{\beta }+k})}\), which, by (2.8), is bounded by \(n^{-r+k} E_{n-r} (f^{(r)})_{L^2(w_{{\alpha }+ r-k, {\beta }+ r-k})}\); here we can remove the shift \(r-k\) in the parameters since \(w_{{\gamma },{\delta }}(t) \le 1\) if \({\gamma },{\delta }\ge 0\) and \(r - k \ge 0\). \(\square \)
It is possible to remove \(\phi ^k\) in the left-hand side of (2.9) at the cost of a higher power of n in the right-hand side. This was first done in [2] (see the proof in [3]) for the Chebyshev and Legendre cases with \(r=1\), and later extended to the Gegenbauer weight in [7], but with \(\Vert f^{(r)}\Vert _{L^2(w_{{\alpha },{\beta }})}\) in place of \(E_{n-r} (f^{(r)})_{L^2(w_{{\alpha },{\beta }})}\) in (2.10) below, which is weaker than (2.10) by (2.5). We give a complete proof for the Jacobi weight.
Theorem 2.6
Let \({\alpha },{\beta }> -1\), \(r =1,2,\ldots \), and \(f \in W_2^r(w_{{\alpha },{\beta }})\). Then
where \(c_{{\alpha },{\beta }}\) is proportional to \(1/ \sqrt{\min \{{\alpha },{\beta }\}+1}\) when \(k=1\). Moreover, the estimate (2.10) is sharp.
Compared with (1.3), the order of n in (2.10) is much weaker, which shows that the least-squares polynomial for \(L^2(w_{{\alpha },{\beta }})\) is not suitable for simultaneous approximation.
The proof of this theorem depends on two lemmas. The first one is an identity on the Jacobi polynomials.
Lemma 2.7
For \({\alpha }, {\beta }> -1\) and \(n \in {\mathbb N}\),
where
Proof
The following relations on the Jacobi polynomials are stated in [18],
where \(\tau _{n}^{{\alpha },{\beta }}: = (n+{\beta })/((2n+{\alpha }+{\beta })(2n+{\alpha }+{\beta }+1))\). Iterating these identities, it is easy to see that
Together, these two identities imply that
where we have interchanged the order of summations. By induction on n, we can establish that
from which the stated result follows from a quick simplification. \(\square \)
Remark 2.8
The connection coefficients that appear when writing \(J_n^{{\alpha },{\beta }}\) in terms of \(J_n^{{\gamma },{\delta }}\) are non-negative if \({\alpha }= {\gamma }\) and \({\beta }> {\delta }> -1\) or \({\beta }= {\delta }\) and \({\alpha }> {\gamma }> -1\), or \({\alpha }={\beta }> {\gamma }= {\delta }> -1\). It is interesting to observe that the coefficients in (2.11) may not be all positive when \({\alpha }\ne {\beta }\). For example, it is easy to see that the coefficient for \(j =1\) and \(n =4\) is negative if \({\alpha }> {\beta }\).
Our main effort lies in establishing the identity (2.13) in the following lemma.
Lemma 2.9
Let \({\alpha },{\beta }> -1\). If \( f\in W_2^1(w_{{\alpha },{\beta }})\), then
where \(A_j^{{\alpha },{\beta }}\) and \(B_n^{{\alpha },{\beta }}\) are defined in (2.11) and
Proof
First we assume that \(f \in W_2^r(w_{{\alpha },{\beta }})\) for r sufficiently large. Since \(f' \in L^2 (w_{{\alpha },{\beta }})\), its Fourier orthogonal expansion is
Moreover, since \(L^2 (w_{{\alpha },{\beta }}) \subset L^2(w_{{\alpha }+1,{\beta }+1})\), we can also write, by (2.11), that
Comparing the two expansions of \(f'\), we conclude, by (2.11), that
where
The last two series are absolutely convergent, since \(| \widehat{\partial f}_{k+1}^{{\alpha },{\beta }}| (h_{k+1}^{{\alpha },{\beta }})^{\frac{1}{2}} \le E_k(f)_{{\alpha },{\beta }}\), which decays fast by (2.8), while \(B_k^{{\alpha },{\beta }}/(h_{k+1}^{{\alpha },{\beta }})^{\frac{1}{2}} \le c/ k^{2{\beta }}\) and \(B_k^{{\beta },{\alpha }}/(h_{k+1}^{{\alpha },{\beta }})^{\frac{1}{2}} \le c / k^{2{\alpha }}\). Since it is easy to check that \(A_{j+1}^{{\alpha },{\beta }} B_j^{{\beta },{\alpha }} = A_{j+1}^{{\beta },{\alpha }} B_j^{{\alpha },{\beta }}\), we also have
where we have used \(\Sigma _{1,j+1} = \Sigma _{1,j} - (-1)^j \widehat{f}_{j+1}^{{\alpha },{\beta }} B_j^{{\alpha },{\beta }}\) and \(\Sigma _{2, j+1} = \Sigma _{2,j} - \widehat{f}_{j+1}^{{\alpha },{\beta }} B_j^{{\beta },{\alpha }}\). Solving (2.14) and (2.15) and simplifying, we obtain
so that, by (2.14), we conclude that
Inserting the expressions for \(\Sigma _{1,n}\) and \(\Sigma _{2,n}\) in (2.15) completes the proof for smooth f. Since both sides of (2.13) are bounded for \(f \in W_2^1(w_{{\alpha },{\beta }})\), as shown in the proof of Theorem 2.6, the identity holds in \(W_2^1(w_{{\alpha },{\beta }})\) by the usual density argument. \(\square \)
Proof of Theorem 2.6
Assuming (2.13), we proceed with the proof. First we consider the case \(k=1\). Let \(f \in W_2^r(w_{{\alpha },{\beta }})\). By the triangle inequality,
The first term in the right-hand side is bounded by \(E_{n-1}(f')_{{\alpha },{\beta }}\), which is smaller than the desired bound. We now bound the second term. By (2.13),
By the expression of \(A_j^{{\alpha },{\beta }}\) and \(B_n^{{\alpha },{\beta }}\) in (2.11), it is not difficult to verify that
and \(h_n^{{\alpha },{\beta }} |D_n^{{\alpha },{\beta }}|^2/h_{n+1}^{{\alpha },{\beta }} \sim 1\). Consequently, we deduce that
where the last step follows from the Parseval identity. This proves (2.10) for \(k=1\) and \(r =1\), which implies the case \(k=1\) and \(r \ge 1\) by (2.8).
The case \(k > 1\) follows inductively. Our main effort lies in proving the inequality
for \(m=1,2,\ldots \). Using (2.13) and \(\partial ^m J_j^{{\alpha },{\beta }} = J_{j-m}^{{\alpha }+m,{\beta }+m}\), we see that the main ingredient is to estimate the sum
where the first inequality follows from the Cauchy–Schwarz inequality and the second one follows from
which can be easily verified by (2.3) and the asymptotics of the Gamma function, and the third one follows from a straightforward estimate. Consequently, it follows readily that
and a similar estimate holds when \({\alpha }\) and \({\beta }\) are exchanged in \(A_j^{{\alpha },{\beta }}\) and \(B_n^{{\alpha },{\beta }}\). These estimates allow us to deduce, by (2.13), that
from which (2.18) follows readily.
Assume now that (2.10) has been established for a fixed k; we prove that it also holds for \(k+1\). By the triangle inequality,
The second term in the right-hand side can be bounded, by applying (2.18) with \(m=k\) and (2.8), by \(c\, n^{2k+1/2} E_{n-1}(f')_{{\alpha },{\beta }} \le c\, n^{2k+1/2} n^{-r+1} E_{n-r}(f^{(r)})_{{\alpha },{\beta }}\), in which the power of n can be written as \(-r+2(k+1)-1/2\), which agrees with that in (2.10) for \(k+1\), whereas the first term in the right-hand side can be bounded, by the induction hypothesis with r replaced by \(r-1\), by a quantity smaller than the above bound. This completes the proof of (2.10) for \(k+1\) and, hence, the proof of the theorem.
To show that the order is sharp, we consider \(g(t) = \widehat{J}_{n+1}^{{\alpha }-k,{\beta }-k}(t)\), which is well defined for n large even if \({\alpha }<0\) or \({\beta }< 0\). Then \(g^{(k)}(t) = \widehat{J}_{n+1-k}^{{\alpha },{\beta }}(t)\). Since the orthogonal expansion of \(\widehat{J}_{n+1 -k}^{{\alpha },{\beta }}\) is itself, \(E_{n-k}(g^{(k)})_{{\alpha },{\beta }} = \Vert \widehat{J}_{n+1-k}^{{\alpha },{\beta }}\Vert _{{\alpha },{\beta }}\). Furthermore, by (2.12), \(J_{n+1-k}^{{\alpha },{\beta }} - J_{n+1-k}^{{\alpha }+k,{\beta }+k}\) is a polynomial of degree \(n-k\), so that
Consequently, \(\partial ^k g - \partial ^k S_n^{{\alpha },{\beta }} g = \widehat{J}_{n+1-k}^{{\alpha }+k,{\beta }+k}\). It then follows from (2.19) that (2.10) is sharp for \(k=r\). \(\square \)
3 Sobolev Orthogonal Polynomials and Orthogonal Expansions
As mentioned in Sect. 1, for approximation in the Sobolev space \(W_p^s(w_{{\alpha },{\beta }})\), we need to work with the Jacobi polynomials whose parameters \({\alpha },{\beta }\) are negative integers. Setting \({\alpha },{\beta }\) to be negative integers in (2.1) leads to a reduction of polynomial degrees, which causes problems when one considers orthogonal expansions. There have been several ways of remedying the definition of the Jacobi polynomials in the literature; see, for example, [1, 4, 8, 11] and the references therein. Motivated by the study in [18], which will be explained at the end of this section, we give another definition that can be regarded either as avoiding delicate extensions of the Jacobi polynomials to negative integers or as an alternative definition that holds for all negative indices.
Definition 3.1
Let \({\alpha },{\beta }> -1\) and \(s \in {\mathbb N}\). For \({\theta }\in [-1,1]\) and \(n \in {\mathbb N}_0\), define
$$\begin{aligned} {\mathcal J}_n^{{\alpha }-s,{\beta }-s}(x) : = \frac{(x-{\theta })^n}{n!} \ \ \text { for } 0 \le n \le s-1, \qquad {\mathcal J}_n^{{\alpha }-s,{\beta }-s}(x) : = \int _{\theta }^x \frac{(x-t)^{s-1}}{(s-1)!}\, J_{n-s}^{{\alpha },{\beta }}(t)\, dt \ \ \text { for } n \ge s. \end{aligned}$$
It is evident that \({\mathcal J}_n^{{\alpha }-s,{\beta }-s}\) is a polynomial of degree n. Furthermore, these polynomials evidently satisfy the following properties:
$$\begin{aligned} \partial ^s {\mathcal J}_n^{{\alpha }-s,{\beta }-s} = J_{n-s}^{{\alpha },{\beta }}, \quad n \ge s, \qquad \text {and}\qquad \partial ^k {\mathcal J}_n^{{\alpha }-s,{\beta }-s}({\theta }) = 0, \quad 0 \le k \le s-1, \ n \ge s, \end{aligned}$$
where \(\partial ^k\) denotes the k-th derivative. Comparing with (2.2), the identity (3.2) suggests that these polynomials can be regarded as an extension of the Jacobi polynomials with negative parameters when \({\alpha }- s \le -1\) and/or \({\beta }-s \le -1\). If \({\alpha }-s > -1\) and \({\beta }-s > -1\), then both \(J_n^{{\alpha }-s,{\beta }-s}\) and \({\mathcal J}_n^{{\alpha }-s,{\beta }-s}\) satisfy (3.2), but \(J_n^{{\alpha }-s,{\beta }-s}\) does not satisfy (3.3). These polynomials are orthogonal with respect to the inner product
$$\begin{aligned} {\langle }f, g{\rangle }_{{\alpha },{\beta }}^{-s} : = \sum _{k=0}^{s-1} {\lambda }_k f^{(k)}({\theta }) g^{(k)}({\theta }) + {\langle }f^{(s)}, g^{(s)}{\rangle }_{{\alpha },{\beta }}, \end{aligned}$$
where \({\lambda }_k\) are positive constants.
Theorem 3.2
Let \({\alpha },{\beta }> -1\) and \(s \in {\mathbb N}\). The polynomial \({\mathcal J}_{n}^{{\alpha }-s,{\beta }-s}\) is orthogonal with respect to the inner product \({\langle }\cdot , \cdot {\rangle }_{{\alpha },{\beta }}^{-s}\) and its norm square, \(\mathfrak {h}_{n}^{{\alpha }-s,{\beta }-s} := {\langle }{\mathcal J}_n^{{\alpha }-s,{\beta }-s}, {\mathcal J}_n^{{\alpha }-s,{\beta }-s}{\rangle }_{{\alpha },{\beta }}^{-s}\), satisfies
Proof
Let \(m \le n\). We consider the orthogonality of \({\mathcal J}_n^{{\alpha }-s,{\beta }-s}\) and \({\mathcal J}_m^{{\alpha }-s,{\beta }-s}\). If \(n \le s-1\) then, by (3.2) and (3.3),
Whereas if \(n \ge s\), then, by (3.2) and (3.3),
by the orthogonality of the Jacobi polynomials. \(\square \)
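The orthogonality can be confirmed symbolically in a concrete case. The sketch below assumes the reconstruction of Definition 3.1 used here (Taylor monomials \((x-{\theta })^n/n!\) below degree s; an s-fold antiderivative of \(J_{n-s}^{{\alpha },{\beta }}\) vanishing to order s at \({\theta }\) otherwise) and takes \({\lambda }_k = 1\), \(s=2\), \({\theta }=-1\), \({\alpha }={\beta }=0\); all of these choices are ours.

```python
import sympy as sp

x, t = sp.symbols('x t')
s, theta = 2, -1                       # illustrative choices
a, b = 0, 0                            # Legendre weight w_{0,0} = 1

def J(n, al, be):
    # classical P_n^{(al,be)} rescaled so that the leading term is x^n/n!
    return 2**n / sp.rf(n + al + be + 1, n) * sp.jacobi(n, al, be, x)

def calJ(n):
    # reconstructed Definition 3.1 (an assumption): Taylor monomials below
    # degree s; an s-fold antiderivative of J_{n-s}^{a,b} vanishing at theta
    if n < s:
        return (x - theta)**n / sp.factorial(n)
    g = J(n - s, a, b).subs(x, t)
    return sp.integrate((x - t)**(s - 1) / sp.factorial(s - 1) * g, (t, theta, x))

def sob_ip(f, g):
    # <f,g>_{a,b}^{-s} with lambda_k = 1: point evaluations at theta plus
    # the weighted L^2 pairing of the s-th derivatives
    v = sum(sp.diff(f, x, k).subs(x, theta) * sp.diff(g, x, k).subs(x, theta)
            for k in range(s))
    return sp.simplify(v + sp.integrate(sp.diff(f, x, s) * sp.diff(g, x, s),
                                        (x, -1, 1)))

polys = [sp.expand(calJ(n)) for n in range(6)]
for i in range(6):
    for j in range(i):
        assert sob_ip(polys[i], polys[j]) == 0   # Sobolev orthogonality
```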
As mentioned before, the inner product \({\langle }\cdot ,\cdot {\rangle }_{{\alpha },{\beta }}^{-s}\) and its associated orthogonal polynomials have been studied in the literature (see [12] and its references). Instead of starting with an extension of the Jacobi polynomials to parameters that are negative integers and constructing orthogonal polynomials accordingly, our construction is more direct, works for all real parameters, and has a proof that is, in comparison, strikingly simple.
For \(f \in W_p^s(w_{{\alpha },{\beta }})\), we can study the Fourier orthogonal expansion of f with respect to the orthogonal system \( {\mathcal J}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}\),
The n-th partial sum of this expansion is defined by
For \(s =0\), the operator \({\mathcal S}_n^{{\alpha },{\beta }} f = S_n^{{\alpha },{\beta }} f\) is the partial sum of the usual Jacobi expansion. This operator satisfies several simple properties and can be written, in particular, in terms of the partial sum \(S_{n-s}^{{\alpha },{\beta }} f\), where we define \(S_n^{{\alpha },{\beta }} f =0\) if \(n < 0\).
Lemma 3.3
Let \({\alpha },{\beta }> -1\) and \(s \in {\mathbb N}\). For \(f \in W_p^s(w_{{\alpha },{\beta }})\),
(1) \(\widehat{\mathfrak {f}}_n^{{\alpha }-s,{\beta }-s} = f^{(n)}({\theta })\) if \(0 \le n \le s-1\), and \(\widehat{\mathfrak {f}}_n^{{\alpha }-s,{\beta }-s} = \widehat{\partial ^s f}_{n-s}^{{\alpha },{\beta }}\) if \(n \ge s\);

(2) \(\partial ^s {\mathcal S}_n^{{\alpha }-s,{\beta }-s} f = S_{n-s}^{{\alpha },{\beta }} f^{(s)}\) if \(n \ge s\);

(3) For \(n = 0,1, \ldots \),
$$\begin{aligned} {\mathcal S}_n^{{\alpha }-s,{\beta }-s} f(x) = \sum _{k=0}^{\min \{n,s-1\}}f^{(k)}({\theta }) \frac{(x-{\theta })^k}{k!} + \int _{\theta }^x \frac{(x-t)^{s-1}}{(s-1)!} S_{n-s}^{{\alpha },{\beta }} f^{(s)}(t) dt. \end{aligned}$$
Proof
As in the proof of Theorem 3.2, it is easy to see that, if \(n \le s-1\), then \({\langle }f, {\mathcal J}_n^{{\alpha }-s,{\beta }-s}{\rangle }_{{\alpha },{\beta }}^{-s} = {\lambda }_n f^{(n)}({\theta })\) and \({\mathfrak {h}}_n^{{\alpha }-s,{\beta }-s} = {\lambda }_n\), whereas if \(n \ge s\), then \({\langle }f, {\mathcal J}_n^{{\alpha }-s,{\beta }-s}{\rangle }_{{\alpha },{\beta }}^{-s} = {\langle }f^{(s)}, J_{n-s}^{{\alpha },{\beta }}{\rangle }_{{\alpha },{\beta }}\) and \({\mathfrak {h}}_n^{{\alpha }-s,{\beta }-s} = h_{n-s}^{{\alpha },{\beta }}\), from which (1) for \(n \ge s\) follows readily. The case \(n \le s-1\) follows similarly. By the definition of the partial sum operator, we then obtain, for \(n \ge s\),
which is, after changing the order of the sum and the integral, exactly the right hand side of (3). Finally, taking s-th derivative of the identity in (3) proves (2). \(\square \)
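The formula in part (3) of the lemma can be implemented verbatim. The sketch below (ours) does so symbolically in the Legendre case \({\alpha }={\beta }=0\) with \(s=2\), \({\theta }=-1\), and checks that \({\mathcal S}_n^{{\alpha }-s,{\beta }-s}\) reproduces polynomials of degree at most n and that property (2) holds.

```python
import sympy as sp

x, t = sp.symbols('x t')
a, b, s, theta = 0, 0, 2, -1           # Legendre case, illustrative choices

def J(n):
    # classical Legendre polynomial rescaled so that the leading term is x^n/n!
    return 2**n / sp.rf(n + 1, n) * sp.jacobi(n, a, b, x)

def h(n):
    return sp.integrate(J(n)**2, (x, -1, 1))

def S(f, n):
    # ordinary partial Fourier-Jacobi sum of degree n, with S_n = 0 for n < 0
    if n < 0:
        return sp.Integer(0)
    return sum(sp.integrate(f * J(k), (x, -1, 1)) / h(k) * J(k)
               for k in range(n + 1))

def calS(f, n):
    # Lemma 3.3(3): Taylor part at theta plus an s-fold integral of S_{n-s} f^{(s)}
    tay = sum(sp.diff(f, x, k).subs(x, theta) * (x - theta)**k / sp.factorial(k)
              for k in range(min(n, s - 1) + 1))
    inner = S(sp.diff(f, x, s), n - s).subs(x, t)
    return tay + sp.integrate((x - t)**(s - 1) / sp.factorial(s - 1) * inner,
                              (t, theta, x))

f = x**4 - 3*x**2 + x
assert sp.expand(calS(f, 4) - f) == 0                       # reproduces Pi_4
assert sp.expand(sp.diff(calS(f, 3), x, s)
                 - S(sp.diff(f, x, s), 1)) == 0             # property (2)
```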
In [11], we extended the Jacobi polynomials to allow the parameters to be negative integers. The extended polynomial, again denoted by \(J_n^{{\alpha },{\beta }}\), satisfies (2.2) for all \({\alpha },{\beta }\in {\mathbb R}\), \(n \in {\mathbb N}\), and its leading term is \(x^n/n!\); that is, \(J_n^{{\alpha },{\beta }}(x)= x^n/n! + \cdots \). Based on this extension of the Jacobi polynomials, we proved in [18] that the polynomials
are orthogonal with respect to the inner product \({\langle }\cdot ,\cdot {\rangle }_{0,{\beta }}^{-s}\) with \({\theta }=1\). Below we prove that \(\widehat{J}_n^{-s,{\beta }-s}(x) \equiv {\mathcal J}_n^{-s,{\beta }-s}(x)\) for \(n \ge 0\). Indeed, if \(n<s\), we use (2.2) and the leading coefficient of \(J_n^{{\alpha },{\beta }}\) to conclude, by the Taylor expansion, that
whereas if \(n \ge s\), the equivalence follows from the identity [18, (2.9)]
and the integral form of the Taylor remainder formula. In fact, it is this equivalence that motivated our definition of \({\mathcal J}_n^{{\alpha }-s,{\beta }-s}\) for all \({\alpha }, {\beta }\in {\mathbb R}\).
4 Polynomial Approximation in Sobolev Spaces
We shall show that \({\mathcal S}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}f \) approximates f with the least error, up to a constant multiple, in the \(W_2^s(w_{{\alpha },{\beta }})\) norm. We also define a near best approximation polynomial that will play the role of \({\mathcal S}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f\) when \(p\ne 2\). Fixing \({\theta }\), we define \({\mathcal V}_n^{{\alpha }-s,{\beta }-s} ={\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}\) for \(s =1,2,\ldots \) by
$$\begin{aligned} {\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f(x) : = \sum _{k=0}^{s-1} f^{(k)}({\theta }) \frac{(x-{\theta })^k}{k!} + \int _{\theta }^x \frac{(x-t)^{s-1}}{(s-1)!}\, V_{n}^{{\alpha },{\beta }} f^{(s)}(t)\, dt. \end{aligned}$$
It is easy to see that \({\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}f\) is a polynomial of degree \(2n+s\) and that it preserves polynomials of degree \(\le n\). Furthermore, it satisfies
$$\begin{aligned} \partial ^s {\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f = V_{n}^{{\alpha },{\beta }} f^{(s)} \qquad \text {and}\qquad \partial ^k {\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f({\theta }) = f^{(k)}({\theta }), \quad 0 \le k \le s-1. \end{aligned}$$
To study the approximation property of \({\mathcal S}_{n,{\theta }}^{{\alpha },{\beta }}f\) or \({\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}f\), we will need the weighted Hardy inequality with respect to the Jacobi weight:
Lemma 4.1
For \({\alpha },{\beta }> -1\) and \(f \in L^p(w_{{\alpha },{\beta }})\), \(1< p < \infty \),
$$\begin{aligned} \left\| \int _{-1}^x f(t)\, dt \right\| _{L^p(w_{{\alpha },{\beta }})} \le c\, \Vert f\Vert _{L^p(w_{{\alpha },{\beta }})} \end{aligned}$$
if and only if \({\beta }< p-1\).
Proof
With \(w_{{\alpha },{\beta }}\) replaced by a general weight function w, it is known (see, for example, [14]) that the inequality holds if and only if
$$\begin{aligned} \sup _{-1< x < 1} \left( \int _x^1 w(t)\, dt\right) ^{1/p} \left( \int _{-1}^x w(t)^{1-q}\, dt\right) ^{1/q} < \infty , \end{aligned}$$
where \(q = p/(p-1)\). If \(-1 < x \le 0\), then the first integral is finite, and the second integral is finite if and only if, since \(1-t \sim 1\), \({\beta }(1-q) > -1\), that is, \({\beta }< 1/(q-1) = p-1\). If \(0 \le x < 1\), then the first integral is bounded by \(c (1-x)^{({\alpha }+1)/p}\), whereas the second integral is bounded by \(c(1-x)^{({\alpha }(1-q)+1)/q}\), since \(1+t \sim 1\); it is easy to verify that their product is finite. \(\square \)
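The characterization used in the proof can be probed numerically. The sketch below (ours) evaluates the product of the two integrals on a grid accumulating at \(\pm 1\) for an admissible choice \({\beta }< p-1\), and exhibits the divergence of the second integral when \({\beta }\ge p-1\).

```python
import numpy as np
from scipy.integrate import quad

def muckenhoupt(al, be, p, xs):
    # [14]-type product (int_x^1 w)^{1/p} * (int_{-1}^x w^{1-q})^{1/q}
    # for w = w_{al,be}; endpoint singularities handled via quad's 'alg' weight
    q = p / (p - 1)
    vals = []
    for xx in xs:
        A = quad(lambda t: (1 + t)**be, xx, 1, weight='alg', wvar=(0, al))[0]
        B = quad(lambda t: (1 - t)**(al * (1 - q)), -1, xx,
                 weight='alg', wvar=(be * (1 - q), 0))[0]
        vals.append(A**(1 / p) * B**(1 / q))
    return np.array(vals)

xs = np.concatenate([-1 + np.logspace(-8, -0.5, 12),
                     1 - np.logspace(-8, -0.5, 12)])
ok = muckenhoupt(0.5, 0.5, 2.0, xs)        # beta = 0.5 < p - 1 = 1: bounded
assert np.all(np.isfinite(ok)) and ok.max() < 4.0

# beta = 1.5 >= p - 1: the second factor diverges as the lower limit -> -1
tails = [quad(lambda t: (1 + t)**(-1.5), -1 + e, 0)[0] for e in (1e-2, 1e-6)]
assert tails[1] > 50 * tails[0]
```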
Theorem 4.2
Let \({\alpha }, {\beta }> -1\) and \(s \in {\mathbb N}\). For \(f\in W_p^s(w_{{\alpha },{\beta }})\) if \(1 \le p < \infty \), or \(f \in C^s[-1,1]\) if \(p = \infty \), the estimate
holds under either one of the following assumptions:
(1) \({\theta }\in (-1,1)\);

(2) \({\theta }=-1\), \( {\beta }< p-1\) if \(1< p \le \infty \) and \({\beta }\le 0\) if \(p=1\);

(3) \({\theta }=1\), \({\alpha }< p-1\) if \(1< p \le \infty \) and \({\alpha }\le 0\) if \(p=1\).
Proof
By the definition of the \(\Vert \cdot \Vert _{W_p^s(w_{{\alpha },{\beta }})}\) norm, we need to show that
$$\begin{aligned} \Vert \partial ^k f - \partial ^k {\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f\Vert _{L^p(w_{{\alpha },{\beta }})} \le c\, E_n(f^{(s)})_{L^p(w_{{\alpha },{\beta }})}, \qquad 0 \le k \le s. \end{aligned}$$
Since \(\partial ^s f - \partial ^s {\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s} f = f^{(s)} - V_{n}^{{\alpha },{\beta }}f^{(s)}\), the estimate (4.4) for \(k =s\) follows immediately. By the Taylor remainder formula, we can write
so that
Taking k-th derivatives, \(1 \le k \le s-1\), only reduces the power of \((t-{\theta })^{s-1}\) by k. Since we need to consider \(k =0,1,\ldots , s-1\), we can ignore the factor \((t-{\theta })^{s-k-1}\). For \(p = \infty \), the estimate (4.4) follows trivially from the above identity.
For \(p > 1\), applying the Hölder inequality, we obtain
If \({\theta }\in (-1,1)\), it is easy to see, since \(w_{{\alpha },{\beta }}\) has no singularity at \({\theta }\), that the last integral is bounded by a constant for all \(p > 1\). If \( {\theta }= -1\), the inequality follows from Lemma 4.1 and the case \({\theta }= 1\) follows likewise from the dual Hardy inequality [14]. A direct proof can also be easily carried out by dividing the integral over \([-1,1]\) into two integrals over \([-1,0]\) and [0, 1], respectively, so that
the first term in the right-hand side is bounded if \( - {\beta }q/p > -1\), that is, if \({\beta }< p/q = p-1\), whereas the second term can be seen to be bounded after splitting the inner integral into a sum of two integrals, one over \([-1,0]\) and the other over [0, x]. The case \({\theta }=1\) works similarly.
For \( p =1\), we divide the integral into two terms and exchange the order of integration in each term,
For \( {\theta }\in (-1,1)\), \(w_{{\alpha },{\beta }}(x) \sim (1-x)^{\alpha }\) in the first term in the right-hand side, which implies that \(\int _{-1}^t w_{{\alpha },{\beta }}(x)dx \le c\, w_{{\alpha },{\beta }}(t)\), whereas \(w_{{\alpha },{\beta }}(x) \sim (1+x)^{\beta }\) in the second term, which implies that \(\int _t^1 w_{{\alpha },{\beta }}(x) dx \le c\, w_{{\alpha },{\beta }}(t)\). This proves the case \({\theta }\in (-1,1)\). For \({\theta }= -1\), only the second term is present, for which we use \((1+x)^{\beta }\le (1+t)^{\beta }\) for \({\beta }\le 0\) to conclude that \(\int _t^1 w_{{\alpha },{\beta }}(x) dx \le c\, w_{{\alpha },{\beta }}(t)\) for \({\beta }\le 0\). The case \({\theta }= 1\) is proved similarly. This completes the proof. \(\square \)
Evidently, Theorem 1.1 follows from this theorem with any \({\theta }\in (0,1)\). In the case of \(p=2\), we can replace \({\mathcal V}_{n,{\theta }}^{{\alpha }-s,{\beta }-s}f \) by the partial sum operator \({\mathcal S}_n^{{\alpha }-s,{\beta }-s} f\).
Corollary 4.3
Let \({\alpha }, {\beta }> -1\), \(s \in {\mathbb N}\). For \(f\in W_2^s(w_{{\alpha },{\beta }})\), the estimate
holds if (a) \({\theta }\in (-1,1)\), or (b) \({\theta }=-1\) and \( {\beta }< 1\), or (c) \({\theta }=1\) and \({\alpha }< 1\).
In particular, if \(f \in W_2^r(w_{{\alpha },{\beta }})\) for \(r \ge s\), then we have
and the right-hand side is also bounded by \(c\, n^{-r+s} \Vert \phi ^{r-s} f^{(r)}\Vert _{L^2(w_{{\alpha },{\beta }})}\) by Theorem 2.2.
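A numerical sketch of the \(p=2\) case (ours; Legendre weight \({\alpha }={\beta }=0\), \(s=2\), \({\theta }=-1\), \(f=e^x\)): building \({\mathcal S}_n^{{\alpha }-s,{\beta }-s} f\) by the formula of Lemma 3.3(3) and measuring the \(L^2\) errors of \(\partial ^k f - \partial ^k {\mathcal S}_n f\) for \(k=0,1,2\) shows all of them decreasing rapidly in n, as simultaneous approximation predicts.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Legendre case: alpha = beta = 0, p = 2, s = 2, theta = -1, f = exp
s = 2

def proj(g, deg, m=60):
    # Legendre coefficients of the degree-deg L^2(dx) projection of g
    t, w = L.leggauss(m)
    return np.array([(2*k + 1) / 2 * np.sum(w * g(t) * L.legval(t, [0]*k + [1]))
                     for k in range(deg + 1)])

def calS(n):
    # Lemma 3.3(3): integrate S_{n-s} f'' twice, matching f and f' at theta = -1
    c = proj(np.exp, n - s)             # f'' = exp as well
    return L.legint(c, m=2, k=[np.exp(-1.0), np.exp(-1.0)], lbnd=-1.0)

def err(c, k, m=60):
    # L^2 error of the k-th derivative of the series c against exp^{(k)} = exp
    t, w = L.leggauss(m)
    d = L.legder(c, k) if k > 0 else c
    return np.sqrt(np.sum(w * (np.exp(t) - L.legval(t, d)) ** 2))

for k in range(s + 1):
    # each derivative error drops sharply as n grows
    assert err(calS(10), k) < err(calS(5), k) / 10
```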
5 Simultaneous Approximation
We now consider simultaneous approximation by polynomials. Our goal is to establish an estimate of the type
Our result in the previous section shows that this estimate holds for \(k=s\). We apply a technique called Aubin–Nitsche duality (see [3, p. 321] and [2]) to handle the case \(0 \le k \le s-1\). For this, we need to work with either \({\theta }= -1\) or \({\theta }=1\). We mainly work with \({\theta }=-1\), since the other case is similar.
Let g be a measurable function on \([-1,1]\). For \( 0\le k \le s-1\), we define a function
It is straightforward to verify that \(u_{g,k}\) is the solution of the following boundary value problem of a \((2s-k)\)-th order differential equation:
Lemma 5.1
If \(v \in W_p^s\) satisfies \(v^{(j)}(-1) =0\) for \(0 \le j \le s-k-1\), then
Proof
By the differential equation in (5.2),
With the help of the boundary conditions of \(u_{g,k}^{(s)}\) at \(x=1\) and v at \(-1\), successive integration by parts proves the stated identity. \(\square \)
Throughout the rest of the paper, we let p and q be related by \(\frac{1}{p} + \frac{1}{q} =1\).
Lemma 5.2
Let \({\alpha }> -1\) and \(g \in L^q(w_{{\alpha },{\beta }})\), \( 1 \le q \le \infty \). Then
provided (a) \({\beta }=0\) or (b) \(s=1\), \({\beta }< p/2-1\) if \(1 \le q < \infty \) and \({\beta }\le -1/2\) if \(q=\infty \).
Proof
By (5.1), taking s-th derivative gives
Taking another \((s-k)\)-th order derivatives by the product rule, we conclude that
If \({\beta }= 0\), then \(\left| \frac{d^j}{dx^j}\left( \frac{1}{w_{{\alpha },0}(x)}\right) \right| (1-x)^{j-1} = |(-{\alpha })_j| (1-x)^{-{\alpha }-1}\). Hence, by \((y-x)^{j-1} \le (1-x)^{j-1}\) for \(x \le y\le 1\), it follows that
If \(q = \infty \), it follows immediately that \(\Vert u_{g,k}^{(2s-k)}\Vert _\infty \le c \Vert g\Vert _\infty \). For \(1< q < \infty \), by the Hölder inequality and \(\phi (x)^{s-k} \le c (1-x)^{1/2}\) as \(0\le k \le s-1\),
whereas for \(q=1\), exchanging the order of integration shows that
Putting these together, we have completed the proof of (5.3) under (a).
We now prove (5.3) under the condition in (b). Since \(s =1\), k has to be zero. Since
and \( \frac{d}{dx} (w_{{\alpha },{\beta }}(x))^{-1} = ({\alpha }(1-x)^{-1} - {\beta }(1+x)^{-1} ) (w_{{\alpha },{\beta }}(x))^{-1}\), it is easy to see that our main task is to establish the inequality
For \(q = \infty \), it is easy to see that we need \({\beta }+1-1/2 \le 0\), that is, \({\beta }\le -1/2\). For \(1< q < \infty \), we can follow the proof in (a) and conclude that the inequality holds if \(-{\beta }(q-1)> q/2-1\), that is, \({\beta }< p/2 -1\). This also holds for \(q=1\), as can be seen by exchanging the order of the integrals. \(\square \)
It is worth mentioning that if \(s > 1\) and \({\beta }\ne 0\), then applying the product rule to \(1/w_{{\alpha },{\beta }}(x) = (1-x)^{-{\alpha }}(1+x)^{-{\beta }}\) gives
so that, since the summand in the last expression is independent of j,
Following the same proof as before, it is not difficult to see that we need \({\beta }< (1- s/2)p -1\) for (5.3). Since \({\beta }> -1\), this makes sense only if \(s=1\) and \({\beta }< p/2 -1\).
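The restriction to \(s=1\) can be seen at a glance: since \(p \ge 1\),

```latex
s=2:\quad \beta < \Bigl(1-\tfrac{2}{2}\Bigr)p-1 = -1,
\qquad
s\ge 3:\quad \Bigl(1-\tfrac{s}{2}\Bigr)p-1 \le -\tfrac{p}{2}-1 < -1,
```

both of which are incompatible with \({\beta }> -1\); only \(s=1\) leaves the nonempty range \(-1< {\beta }< p/2-1\).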
Theorem 5.3
Let \({\alpha },{\beta }> -1\), and let \(f \in W_p^s(w_{{\alpha },{\beta }})\) for \(1\le p < \infty \) or \(f\in C^s[-1,1]\) if \(p = \infty \). Then
holds under either one of the following sets of assumptions:
- (a) \({\theta }= -1\), with (i) \(s \in {\mathbb N}\), \({\beta }=0\) and \(1 \le p \le \infty \), or (ii) \(s=1\), \({\beta }< p/2-1\) for \(1< p \le \infty \) and \({\beta }\le p/2-1\) if \(p=1\);
- (b) \({\theta }= 1\), with (i) \(s \in {\mathbb N}\), \({\alpha }=0\) and \(1 \le p \le \infty \), or (ii) \(s=1\), \({\alpha }< p/2-1\) for \(1< p \le \infty \) and \({\alpha }\le p/2-1\) if \(p=1\).
Furthermore, for \(p=2\), we can replace (5.4) by
Proof
Let us first consider the case \({\theta }= -1\) and write \({\mathcal V}_{n}^{{\alpha }-s,{\beta }-s} f = {\mathcal V}_{n,-1}^{{\alpha }-s,{\beta }-s} f\) for convenience. Applying Lemma 5.1 with \(v = \partial ^k f - \partial ^k {\mathcal V}_{n}^{{\alpha }-s,{\beta }-s} f\) and using (4.2), we have
where the second equality follows from the orthogonality of \(f^{(s)} - {\mathcal V}_{n}^{{\alpha },{\beta }} f^{(s)}\) to all polynomials of degree at most n. By Theorem 2.2 and (5.3),
so that, by the Hölder inequality, we conclude that
Applying the above inequality to the following expression of the \(L^p\) norm
completes the proof of (5.4) for \({\theta }= -1\).
The proof for \({\theta }=1\) follows similarly with \(u_{g,k}\) in (5.1) replaced by
for \(0 \le k \le s-1\). The argument proceeds along the same lines, and we omit the details. \(\square \)
Evidently, Theorem 1.2 follows from this theorem with either \({\alpha }= 0\) or \({\beta }=0\).
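As a numerical illustration of the \(p=2\), \({\alpha }={\beta }=0\) (Legendre) case of Theorem 5.3, the following sketch truncates the Legendre expansion of \(f'\) and integrates from \(-1\). This is not the operator \({\mathcal V}_n\) constructed in the paper, and the choices \(f=e^x\) and \(n=10\) are arbitrary; the point is only that the resulting polynomial approximates f and \(f'\) simultaneously, with the derivative error equal to \(E_n(f')_{L^2}\) by construction:

```python
import numpy as np
from numpy.polynomial import legendre as L

f, df = np.exp, np.exp        # test function f = e^x, so f' = e^x

n = 10                        # truncation degree for approximating f'
x, w = L.leggauss(60)         # Gauss-Legendre nodes/weights on [-1, 1]

# Legendre coefficients of f': c_k = (2k+1)/2 * \int_{-1}^1 f'(t) P_k(t) dt
c = np.array([(2 * k + 1) / 2 * np.sum(w * df(x) * L.legval(x, np.eye(n + 1)[k]))
              for k in range(n + 1)])

# q(x) = f(-1) + \int_{-1}^x p_n(t) dt: a polynomial of degree n+1 with q(-1) = f(-1)
ci = L.legint(c, lbnd=-1)     # antiderivative of the truncated series, vanishing at -1
q = lambda t: f(-1) + L.legval(t, ci)
dq = lambda t: L.legval(t, c)

err_f = np.sqrt(np.sum(w * (f(x) - q(x)) ** 2))     # ~ ||f - q||_{L^2}
err_df = np.sqrt(np.sum(w * (df(x) - dq(x)) ** 2))  # = E_n(f')_{L^2}
print(err_f, err_df)
```

Both errors are tiny here because the Legendre coefficients of \(e^x\) decay superexponentially; the integration step shows why control of \(E_n(f^{(s)})\) propagates to the lower derivatives.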
Acknowledgements
The author thanks Danny Leviatan for his careful readings and corrections. The author was supported in part by NSF Grant DMS-1510296.
Communicated by Alex Iosevich.
Xu, Y. Approximation by Polynomials in Sobolev Spaces with Jacobi Weight. J Fourier Anal Appl 24, 1438–1459 (2018). https://doi.org/10.1007/s00041-017-9581-3