1 Introduction

We consider the interpolatory quadrature formula relative to the Legendre weight function \(w(t)=1\) on the interval \([-1,1]\)

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\sum _{\nu =1}^{N}w_{\nu }f(\tau _{\nu })+R_{N}^{}(f), \end{aligned}$$
(1.1)

where the nodes \(\tau _{\nu }=\tau _{\nu }^{(N)}\), ordered decreasingly, are all in the interval \([-1,1]\), and the weights \(w_{\nu }=w_{\nu }^{(N)}\) are real numbers. Formula (1.1) has degree of exactness d at least \(N-1\), i.e., \(R_{N}^{}(f)=0\) for all \(f\in \mathbb {P}_{N-1}\).

For functions \(f\in C^{d+1}[-1,1]\), the error term of formula (1.1) can be estimated by

$$\begin{aligned} |R_{N}^{}(f)|\le c_{d}\max _{-1\le t\le 1}|f^{(d+1)}(t)|,\ \ c_{d}=\int _{-1}^{1}|K_{d}(t)|dt, \end{aligned}$$
(1.2)

where \(K_{d}\) is the dth Peano kernel (cf. [3, Section 4.3]). Estimate (1.2), although frequently quoted, is of limited use. For one thing, higher-order derivatives of a function are not readily available, and, even if they are, the resulting error bound cannot be applied to functions of lower-order continuity. Furthermore, estimates like (1.2) do not lend themselves to comparing quadrature formulae with different degrees of exactness.

A more practical estimate can be obtained by means of a Hilbert space method proposed by Hämmerlin in [5]. If f is a single-valued holomorphic function in the disc \(C_{r}=\{z\in \mathbb {C}:|z|<r\}\), \(r>1\), then it can be written as

$$\begin{aligned} f(z)=\sum _{k=0}^{\infty }a_{k}z^{k},\ \ z\in C_{r}. \end{aligned}$$

Define

$$\begin{aligned} |f|_{r}=\sup {\left\{ |a_{k}|r^{k}:k\in \mathbb {N}_{0}\ \mathrm{and}\ R_{N}^{}(t^{k})\ne 0\right\} }, \end{aligned}$$
(1.3)

which is a seminorm in the space

$$\begin{aligned} X_{r}=\{f:f\ \mathrm{holomorphic\ in}\ C_{r}\ \mathrm{and}\ |f|_{r}<\infty \}. \end{aligned}$$
(1.4)

Then it can be shown that the error term \(R_{N}^{}\) of formula (1.1) is a continuous linear functional in \((X_{r},|\cdot |_{r})\), and its norm is given by

$$\begin{aligned} \Vert R_{N}^{}\Vert =\sum _{k=0}^{\infty }\frac{|R_{N}^{}(t^{k})|}{r^{k}}. \end{aligned}$$
(1.5)
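For a first feel of (1.5), consider the following minimal Python sketch (ours, for illustration; the function name is not from the paper). For the one-point midpoint rule \(N=1\), \(\tau _{1}=0\), \(w_{1}=2\), we have \(R_{1}^{}(t^{k})=2/(k+1)\) for even \(k\ge 2\) and \(R_{1}^{}(t^{k})=0\) otherwise, so the series can be summed in closed form, against which we check the truncation.

```python
import math

def err_norm_series(r, nodes, weights, kmax=200):
    """Truncated series (1.5) for the rule with the given nodes/weights,
    integrating against the Legendre weight w(t) = 1 on [-1, 1]."""
    total = 0.0
    for k in range(kmax + 1):
        exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0  # int_{-1}^{1} t^k dt
        total += abs(exact - sum(w * t**k for w, t in zip(weights, nodes))) / r**k
    return total

# Midpoint rule: N = 1, node 0, weight 2.  Here the series sums to
# r*ln((r+1)/(r-1)) - 2, since 2*artanh(x) = ln((1+x)/(1-x)).
r = 2.0
series = err_norm_series(r, [0.0], [2.0])
closed = r * math.log((r + 1) / (r - 1)) - 2
```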

If, in addition, for an \(\varepsilon \in \{-1,1\}\),

$$\begin{aligned} \varepsilon R_{N}^{}(t^{k})\ge 0\ \ \mathrm{for\ all}\ k\in \mathbb {N}_{0}, \end{aligned}$$
(\(1.6_i\))

or

$$\begin{aligned} \varepsilon (-1)^{k}R_{N}^{}(t^{k})\ge 0\ \ \mathrm{for\ all}\ k\in \mathbb {N}_{0}, \end{aligned}$$
(\(1.6_{ii}\))

then, letting

$$\begin{aligned} \pi _{N}^{}(t)=\prod _{\nu =1}^{N}(t-\tau _{\nu }), \end{aligned}$$

we can derive the representations

$$\begin{aligned} \Vert R_{N}^{}\Vert =\varepsilon \frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt, \end{aligned}$$
(\(1.7_i\))

or

$$\begin{aligned} \Vert R_{N}^{}\Vert =\varepsilon \frac{r}{\pi _{N}^{}(-r)}\int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r+t}dt, \end{aligned}$$
(\(1.7_{ii}\))

respectively (cf. [11, Section 2]). The error norm can lead to estimates for the error functional itself. If \(f\in X_{R}\), then

$$\begin{aligned} |R_{N}^{}(f)|\le \Vert R_{N}^{}\Vert |f|_{r}, \ \ 1<r\le R, \end{aligned}$$

which, optimized as a function of r, gives

$$\begin{aligned} |R_{N}^{}(f)|\le \inf _{1<r\le R}(\Vert R_{N}^{}\Vert |f|_{r}). \end{aligned}$$
(1.8)

Furthermore, if \(|f|_{r}\) is estimated by \(\max _{|z|=r}|f(z)|\), which exists at least for \(r<R\) (cf. [11, Equation (2.9)]), we get

$$\begin{aligned}&\displaystyle |R_{N}^{}(f)|\le \Vert R_{N}^{}\Vert \max _{|z|=r} |f(z)|, \ \ 1<r<R,\nonumber \\&\displaystyle |R_{N}^{}(f)|\le \inf _{1<r<R}\left( \Vert R_{N}^{}\Vert \max _{|z|=r}|f(z)|\right) . \end{aligned}$$
(1.9)

The latter can also be derived by a contour integration technique on circular contours (cf. [4]).
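To see the optimization in (1.9) at work, here is a small Python sketch (ours, for illustration) using the one-point midpoint rule, whose error norm sums to \(r\ln {((r+1)/(r-1))}-2\), and the entire function \(f(z)=e^{z}\), for which \(\max _{|z|=r}|f(z)|=e^{r}\) and any \(r>1\) is admissible.

```python
import math

def norm_R1(r):
    # error norm (1.5) of the midpoint rule on [-1, 1], summed in closed form
    return r * math.log((r + 1) / (r - 1)) - 2

def bound(r):
    # right-hand side of the first inequality in (1.9) for f(z) = e^z
    return norm_R1(r) * math.exp(r)

# crude grid scan over r > 1 suffices for illustration
rs = [1 + 0.01 * j for j in range(1, 1000)]
best = min(bound(r) for r in rs)

# actual error of the midpoint rule for f(t) = e^t
true_err = (math.e - 1 / math.e) - 2.0
```

The optimized bound overestimates the true error by only a modest factor, which is the point of taking the infimum over \(r\).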

Representations (\(1.7_i\)) and (\(1.7_{ii}\)) have been successfully applied to compute the norm of the error functional for many well-known quadrature formulae, among them the Gauss, Gauss–Lobatto, Gauss–Radau and Gauss–Kronrod quadrature formulae for various weight functions, as well as the Fejér formula of the second kind, also known as the Filippi rule.

However, how do we proceed if the error term of formula (1.1) satisfies neither of conditions (\(1.6_i\)) and (\(1.6_{ii}\)), in particular, if \(R_{N}^{}(t^{k})\) does not keep a constant sign for all \(k\ge 0\)? This is the case for a number of well-known formulae, such as the Clenshaw–Curtis formula, the Basu formula and the Fejér formula of the first kind, also known as the Pólya rule. In the present paper, we show that if \(R_{N}^{}(t^{k})\) changes sign only once, at some specific \(k=k_{N}^{}\) (cf. (3.2) and (3.16) below), then \(\Vert R_{N}^{}\Vert \) can still be effectively estimated by means of (1.5). We do this in Sect. 3. Our approach is new and general, and in that sense it could likely be applied to interpolatory quadrature formulae whose error term at the monomials changes sign more than once, although this case would require a more delicate handling. In Sect. 4, we apply the estimates derived in Sect. 3 to the Clenshaw–Curtis formula, the Basu formula and the Fejér formula of the first kind. A detailed numerical example, illustrating the quality of our bounds, is given in Sect. 5. Our exposition begins in Sect. 2, with some integral formulas for the Chebyshev polynomials of the first and second kind that are useful in our development.

2 Integral formulas for Chebyshev polynomials

The formulas presented in this section, besides being important in their own right, are also useful in deriving the error norm of the Clenshaw–Curtis and Basu formulae.

Throughout this and all subsequent sections, by \([\cdot ]\) we denote the integer part of a real number, while the notation \(\mathop {{\sum }'}\) means that the last term in the sum must be halved when n is odd.

Let \(T_{n}\) and \(U_{n}\) be the nth degree Chebyshev polynomials of the first and second kind, respectively, expressed by

$$\begin{aligned}&\displaystyle T_{n}(\cos {\theta })=\cos {n\theta }, \end{aligned}$$
(2.1)
$$\begin{aligned}&\displaystyle U_{n}(\cos {\theta })=\frac{\sin {(n+1)\theta }}{\sin {\theta }}. \end{aligned}$$
(2.2)

Both satisfy the three-term recurrence relation

$$\begin{aligned} p_{k+1}(t)=2tp_{k}(t)-p_{k-1}(t),\ \ k=1,2,\ldots , \end{aligned}$$
(2.3)

where

$$\begin{aligned} \begin{array}{c} p_{0}(t)=1,\ p_{1}(t)=t \; \mathrm{if}\ p_{n}^{}=T_{n},\\ p_{0}(t)=1,\ p_{1}(t)=2t \; \mathrm{if}\ p_{n}^{}=U_{n}. \end{array} \end{aligned}$$
(2.4)
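The recurrence (2.3)–(2.4) also gives a convenient way to evaluate \(T_{n}\) and \(U_{n}\) numerically; here is a minimal Python sketch (ours, for illustration), checked against the trigonometric definitions (2.1)–(2.2).

```python
import math

def chebyshev(n, t, kind=1):
    """T_n(t) for kind=1 or U_n(t) for kind=2, via the three-term
    recurrence (2.3) with starting values (2.4)."""
    p0 = 1.0
    p1 = t if kind == 1 else 2.0 * t
    if n == 0:
        return p0
    for _ in range(n - 1):
        p0, p1 = p1, 2.0 * t * p1 - p0
    return p1

theta = 0.7
t = math.cos(theta)
T5 = chebyshev(5, t, kind=1)   # equals cos(5*theta) by (2.1)
U5 = chebyshev(5, t, kind=2)   # equals sin(6*theta)/sin(theta) by (2.2)
```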

Proposition 2.1

Let \(r\in \mathbb {R}\) with \(|r|>1\).

(i) We have

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r-t}dt= & {} (r^{2}-1)T_{n}(r)\ln {\left( \frac{r+1}{r-1}\right) } -4(r^{2}-1) \mathop {{\sum }'}_{k=1}^{[(n+1)/2]}\frac{T_{n-2k+1}(r)}{2k-1} \nonumber \\&+\,\left\{ \begin{array}{ll} \frac{2r}{n^{2}-1},&{}\quad n\; \mathrm{even,}\\ \vspace{-0.3cm}\\ \frac{2}{n^{2}-4},&{} \quad n\; \mathrm{odd.} \end{array}\right. \end{aligned}$$
(2.5)

Moreover,

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r+t}dt=(-1)^{n} \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r-t}dt. \end{aligned}$$
(2.6)

(ii) We have

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}{-}1)U_{n}(t)}{r-t}dt= & {} (r^{2}-1)U_{n}(r)\ln {\left( \frac{r+1}{r{-}1}\right) } -4(r^{2}{-}1) \, \mathop {{\sum }}_{k=1}^{[(n+1)/2]}\frac{U_{n{-}2k+1}(r)}{2k{-}1}\nonumber \\&-\, \left\{ \begin{array}{ll} \frac{2r}{n+1},&{}\quad n\; \mathrm{even,}\\ \vspace{-0.3cm}\\ \frac{2(n+1)}{n(n+2)},&{}\quad n\; \mathrm{odd.} \end{array}\right. \end{aligned}$$
(2.7)

Moreover,

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}-1)U_{n}(t)}{r+t}dt=(-1)^{n} \int _{-1}^{1}\frac{(t^{2}-1)U_{n}(t)}{r-t}dt. \end{aligned}$$
(2.8)
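Before the proof, identity (2.5) can be sanity-checked numerically; the Python sketch below (ours, for illustration) compares its right-hand side with a 200-point Gauss–Legendre discretization of the left-hand side. The check of (2.7) is entirely analogous.

```python
import numpy as np

def T(n, t):
    # T_n by the recurrence (2.3)-(2.4); works for scalars and arrays
    p0, p1 = t * 0 + 1.0, t * 1.0
    if n == 0:
        return p0
    for _ in range(n - 1):
        p0, p1 = p1, 2 * t * p1 - p0
    return p1

def lhs(n, r, deg=200):
    # left-hand side of (2.5) by deg-point Gauss-Legendre quadrature
    x, w = np.polynomial.legendre.leggauss(deg)
    return np.sum(w * (x**2 - 1) * T(n, x) / (r - x))

def rhs(n, r):
    # right-hand side of (2.5); the primed sum halves its last term for n odd
    m = (n + 1) // 2
    s = sum(T(n - 2 * k + 1, r) / (2 * k - 1) for k in range(1, m + 1))
    if n % 2 == 1:
        s -= 0.5 * T(n - 2 * m + 1, r) / (2 * m - 1)
    tail = 2 * r / (n**2 - 1) if n % 2 == 0 else 2 / (n**2 - 4)
    L = np.log((r + 1) / (r - 1))
    return (r**2 - 1) * T(n, r) * L - 4 * (r**2 - 1) * s + tail
```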

Proof

(i) Writing

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r-t}dt=\int _{-1}^{1}\frac{(t^{2}-r^{2}+r^{2}-1)T_{n}(t)}{r-t}dt, \end{aligned}$$

splitting the integral on the right-hand side in two, and using

$$\begin{aligned} tT_{n}(t)=\frac{1}{2}T_{n+1}(t)+\frac{1}{2}T_{n-1}(t) \end{aligned}$$

(cf. (2.3) and (2.4)), we get

$$\begin{aligned} \begin{array}{rcl} \displaystyle \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r-t} dt&{}=&{}\displaystyle (r^{2}-1)\int _{-1}^{1}\frac{T_{n}(t)}{r-t}dt\\ \vspace{-0.3cm}\\ &{}&{}\displaystyle -\frac{1}{2}\int _{-1}^{1}T_{n+1}(t) dt-r\int _{-1}^{1}T_{n}(t)dt-\frac{1}{2}\int _{-1}^{1}T_{n-1}(t)dt. \end{array} \end{aligned}$$
(2.9)

Now,

$$\begin{aligned} \int _{-1}^{1}\frac{T_{n}(t)}{r-t}dt=T_{n}(r) \ln {\left( \frac{r+1}{r-1}\right) } -4\mathop {{\sum }'}_{k=1}^{[(n+1)/2]}\frac{T_{n-2k+1}(r)}{2k-1} \end{aligned}$$

(cf. [10, Proposition 2.2, Equation (2.8)]), and

$$\begin{aligned} \int _{-1}^{1}T_{m}(t)dt= \left\{ \begin{array}{ll} -\frac{2}{m^{2}-1},&{}\quad m\ \mathrm{even,}\\ \vspace{-0.3cm}\\ 0,&{}\quad m\ \mathrm{odd} \end{array}\right. \end{aligned}$$

(cf. [8, Equation (2.43)]), which, inserted into (2.9), give (2.5).

To prove (2.6), all we have to do is substitute \(-t\) for t in the integral on the left-hand side and take into account that \(T_{n}(-t)=(-1)^{n}T_{n}(t)\) (cf. (2.1)).

(ii) The proof of (2.7) and (2.8) is similar to that of (2.5) and (2.6), respectively, except that here we use

$$\begin{aligned} \int _{-1}^{1}\frac{U_{n}(t)}{r-t}dt=U_{n}(r) \ln {\left( \frac{r+1}{r-1}\right) } -4\mathop {{\sum }}_{k=1}^{[(n+1)/2]}\frac{U_{n-2k+1}(r)}{2k-1} \end{aligned}$$

(cf. [10, Proposition 2.2, Equation (2.9)]),

$$\begin{aligned} \int _{-1}^{1}U_{m}(t)dt= \left\{ \begin{array}{ll} \frac{2}{m+1},&{}\quad m\ \mathrm{even,}\\ 0,&{}\quad m\ \mathrm{odd} \end{array}\right. \end{aligned}$$
(2.10)

(cf. [8, Equation (2.46)]), and \(U_{n}(-t)=(-1)^{n}U_{n}(t)\) (cf. (2.2)). \(\square \)

3 Estimates for the error norm of interpolatory quadrature formulae

In the present section, we assume that the error term of formula (1.1) satisfies neither of conditions (\(1.6_i\)) and (\(1.6_{ii}\)), but instead \(R_{N}^{}(t^{k})\) changes sign exactly once, at some specific \(k=k_{N}^{}\). Then, based on (1.5), we obtain estimates for the error norm that are both efficient and easy to apply.

Theorem 3.1

Consider the quadrature formula (1.1) satisfying

$$\begin{aligned} \sum _{\nu =1}^{N}|w_{\nu }|\le M \end{aligned}$$
(3.1)

and

$$\begin{aligned} R_{N}^{}(t^{k}) \left\{ \begin{array}{ll} \ge 0,&{}\quad 0\le k\le k_{N}^{},\\ \le 0,&{}\quad k>k_{N}^{}, \end{array}\right. \end{aligned}$$
(3.2)

where \(M>0\) and \(k_{N}^{}=k_{N}^{(N)}\) are constants. Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} \frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1} \frac{\pi _{N}^{}(t)}{r-t}dt+\frac{2M}{r^{k_{N}^{}}(r-1)}\nonumber \\&- 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2 \sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} . \end{aligned}$$
(3.3)

Proof

From (1.5), we have, in view of (3.2),

$$\begin{aligned} \Vert R_{N}^{}\Vert= & {} \sum _{k=0}^{k_{N}^{}} \frac{R_{N}^{}(t^{k})}{r^{k}}-\sum _{k=k_{N}^{}+1}^{\infty } \frac{R_{N}^{}(t^{k})}{r^{k}} \nonumber \\= & {} \sum _{k=0}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}-2\sum _{k=k_{N}^{}+1}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}. \end{aligned}$$
(3.4)

The first sum on the right-hand side of the last line in (3.4), using the continuity of \(R_{N}^{}\) on \((C[-1,1],\Vert \cdot \Vert _{\infty })\), and proceeding as in the proof of Theorem 2.1(a) in [11], computes to

$$\begin{aligned} \sum _{k=0}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}= \frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt. \end{aligned}$$
(3.5)

The second sum, on the other hand, again by the continuity of \(R_{N}^{}\) on \((C[-1,1],\Vert \cdot \Vert _{\infty })\), can be written as

$$\begin{aligned} -2\sum _{k=k_{N}^{}+1}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}} =-2R_{N}^{}\left( \sum _{k=k_{N}^{}+1}^{\infty } \left( \frac{t}{r}\right) ^{k}\right) =-\frac{2}{r^{k_{N}^{}}}R_{N}^{} \left( \frac{t^{k_{N}^{}+1}}{r-t}\right) , \end{aligned}$$
(3.6)

and, setting \(f(t)=t^{k_{N}^{}+1}/(r-t)\) in formula (1.1), (3.6) takes the form

$$\begin{aligned} -2\sum _{k=k_{N}^{}+1}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}} =\frac{2}{r^{k_{N}^{}}}\left( \sum _{\nu =1}^{N}w_{\nu } \frac{\tau _{\nu }^{k_{N}^{}+1}}{r-\tau _{\nu }} -\int _{-1}^{1}\frac{t^{k_{N}^{}+1}}{r-t}dt\right) . \end{aligned}$$
(3.7)

Given that

$$\begin{aligned} t^{k_{N}^{}+1}=(r-t)\left( -t^{k_{N}^{}}-rt^{k_{N}^{}-1} -\cdots -r^{k_{N}^{}}\right) +r^{k_{N}^{}+1}, \end{aligned}$$

the integral on the right-hand side of (3.7) computes to

$$\begin{aligned} \displaystyle \int _{-1}^{1}\frac{t^{k_{N}^{}+1}}{r-t}dt= & {} -\displaystyle \sum _{k=0}^{k_{N}^{}}r^{k_{N}^{}-k}\int _{-1}^{1}t^{k}dt +r^{k_{N}^{}+1}\int _{-1}^{1}\frac{dt}{r-t}\nonumber \\= & {} r^{k_{N}^{}+1}\displaystyle \left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2\sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} . \end{aligned}$$
(3.8)
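The closed form (3.8) can be confirmed numerically; here is a short Python sketch (ours, for illustration), comparing it against Gauss–Legendre quadrature.

```python
import math
import numpy as np

def lhs(kN, r, deg=200):
    # int_{-1}^{1} t^{kN+1}/(r - t) dt by deg-point Gauss-Legendre quadrature
    x, w = np.polynomial.legendre.leggauss(deg)
    return float(np.sum(w * x**(kN + 1) / (r - x)))

def rhs(kN, r):
    # the closed form (3.8); note that [kN/2] is kN // 2 in Python
    s = sum(1 / ((2 * k - 1) * r**(2 * k - 1)) for k in range(1, kN // 2 + 2))
    return r**(kN + 1) * (math.log((r + 1) / (r - 1)) - 2 * s)
```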

Moreover, by means of \(|\tau _{\nu }|\le 1,\ \nu =1,2,\ldots ,N\), and (3.1),

$$\begin{aligned} \sum _{\nu =1}^{N}w_{\nu }\frac{\tau _{\nu }^{k_{N}^{}+1}}{r-\tau _{\nu }}\le \sum _{\nu =1}^{N}|w_{\nu }| \frac{|\tau _{\nu }|^{k_{N}^{}+1}}{r-\tau _{\nu }} \le \frac{M}{r-1}, \end{aligned}$$
(3.9)

which inserted, together with (3.8), into (3.7), gives

$$\begin{aligned} -2\sum _{k=k_{N}^{}+1}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}\le \frac{2M}{r^{k_{N}^{}}(r-1)}-2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2\sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} . \end{aligned}$$
(3.10)

Now, (3.4), together with (3.5) and (3.10), yields (3.3).\(\square \)

An immediate consequence is the following

Corollary 3.2

(a) Consider the quadrature formula (1.1) having all weights nonnegative and satisfying condition (3.2). Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} \frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1} \frac{\pi _{N}^{}(t)}{r-t}dt+\frac{4}{r^{k_{N}^{}}(r-1)}\nonumber \\&- 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2 \sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} . \end{aligned}$$
(3.11)

(b) Consider the quadrature formula (1.1) being symmetric, having all weights nonnegative and satisfying condition (3.2). Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} \frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt+\frac{4}{r^{k_{N}^{}}(r^{2}-1)}\nonumber \\&- 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2\sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} . \end{aligned}$$
(3.12)

Proof

(a) If all weights of formula (1.1) are nonnegative, then

$$\begin{aligned} \sum _{\nu =1}^{N}|w_{\nu }|=\sum _{\nu =1}^{N}w_{\nu }=\int _{-1}^{1}dt=2, \end{aligned}$$

hence, \(M=2\) (cf. (3.1)), which, in view of (3.3), implies (3.11).

(b) Formula (1.1) is symmetric if

$$\begin{aligned} \tau _{N-\nu +1}^{}=-\tau _{\nu },\ \ w_{N-\nu +1}^{}=w_{\nu },\ \ \nu =1,2,\ldots ,N, \end{aligned}$$
(3.13)

and as, for N odd, \(\tau _{(N+1)/2}^{}=0\), we have

$$\begin{aligned} \sum _{\nu =1}^{N}w_{\nu }\frac{\tau _{\nu }^{k_{N}^{}+1}}{r-\tau _{\nu }}=\sum _{\nu =1}^{[N/2]}w_{\nu }\frac{\tau _{\nu }^{k_{N}^{}+1}}{r-\tau _{\nu }} +\sum _{\nu =1}^{[N/2]}w_{\nu }\frac{(-\tau _{\nu })^{k_{N}^{}+1}}{r+\tau _{\nu }}. \end{aligned}$$
(3.14)

Furthermore, by symmetry,

$$\begin{aligned}R_{N}^{}\big (t^{2l-1}\big )=0,\ \ l\ge 1,\end{aligned}$$

consequently, \(k_{N}^{}\) in (3.2) is even, and (3.14), by virtue of \(|\tau _{\nu }|\le 1,\ \nu =1,2,\ldots ,N\), takes the form

$$\begin{aligned} \displaystyle \sum _{\nu =1}^{N}w_{\nu }\frac{\tau _{\nu }^{k_{N}^{}+1}}{r-\tau _{\nu }}= & {} 2\displaystyle \sum _{\nu =1}^{[N/2]}w_{\nu }\frac{\tau _{\nu }^{k_{N}^{}+2}}{r^{2}-\tau _{\nu }^{2}}\nonumber \\\le & {} 2\displaystyle \sum _{\nu =1}^{[N/2]}w_{\nu }\frac{1}{r^{2}-1}\le \frac{1}{r^{2}-1}\sum _{\nu =1}^{N}w_{\nu }=\frac{2}{r^{2}-1}. \end{aligned}$$
(3.15)

Now, following the proof of Theorem 3.1, and replacing (3.9) by (3.15), we obtain (3.12). \(\square \)

A case similar to that of Theorem 3.1 is presented in the following

Theorem 3.3

Consider the quadrature formula (1.1) satisfying condition (3.1) and

$$\begin{aligned} R_{N}^{}(t^{k}) \left\{ \begin{array}{ll} \le 0,&{} \quad 0\le k\le k_{N}^{},\\ \ge 0,&{}\quad k>k_{N}^{}, \end{array}\right. \end{aligned}$$
(3.16)

where \(k_{N}^{}=k_{N}^{(N)}\) is a constant. Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2\sum _{k=1}^{[k_{N}^{}/2]+1} \frac{1}{(2k-1)r^{2k-1}}\right\} +\frac{2M}{r^{k_{N}^{}}(r-1)}\nonumber \\&-\,\frac{r}{\pi _{N}^{}(r)} \int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt. \end{aligned}$$
(3.17)

Proof

From (1.5), we have, in view of (3.16),

$$\begin{aligned} \begin{array}{rcl} \Vert R_{N}^{}\Vert &{}=&{}-\displaystyle \sum _{k=0}^{k_{N}^{}} \frac{R_{N}^{}(t^{k})}{r^{k}}+\sum _{k=k_{N}^{}+1}^{\infty } \frac{R_{N}^{}(t^{k})}{r^{k}}\\ \vspace{-0.3cm}\\ &{}=&{} - \displaystyle \sum _{k=0}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}+2\sum _{k=k_{N}^{}+1}^{\infty }\frac{R_{N}^{}(t^{k})}{r^{k}}. \end{array} \end{aligned}$$

Then, proceeding as in the proof of Theorem 3.1, we obtain (3.17). \(\square \)

Also, in a like manner as in Corollary 3.2, we derive the following consequence of Theorem 3.3.

Corollary 3.4

(a) Consider the quadrature formula (1.1) having all weights nonnegative and satisfying condition (3.16). Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2 \sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} \nonumber \\&+ \frac{4}{r^{k_{N}^{}}(r-1)}-\frac{r}{\pi _{N}^{}(r)} \int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt. \end{aligned}$$
(3.18)

(b) Consider the quadrature formula (1.1) being symmetric, having all weights nonnegative and satisfying condition (3.16). Then

$$\begin{aligned} \Vert R_{N}^{}\Vert\le & {} 2r\left\{ \ln {\left( \frac{r+1}{r-1}\right) }-2 \sum _{k=1}^{[k_{N}^{}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} \nonumber \\&+ \frac{4}{r^{k_{N}^{}}(r^{2}-1)}-\frac{r}{\pi _{N}^{}(r)}\int _{-1}^{1}\frac{\pi _{N}^{}(t)}{r-t}dt. \end{aligned}$$
(3.19)

4 Estimates for the error norm of the Clenshaw–Curtis formula, Basu formula and Fejér formula of the first kind

We treat each case separately.

4.1 The Clenshaw–Curtis formula

This is the quadrature formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=w_{0}^{*(2)}f(1)+\sum _{\nu =1}^{n} w_{\nu }^{*(2)}f(\tau _{\nu }^{(2)})+w_{n+1}^{*(2)}f(-1)+R_{n}^{*(2)}(f), \end{aligned}$$
(4.1)

with

$$\begin{aligned} \tau _{\nu }^{(2)}=\cos {\theta _{\nu }^{(2)}},\ \ \theta _{\nu }^{(2)}=\frac{\nu }{n+1}\pi , \ \ \nu =1,2,\ldots ,n, \end{aligned}$$
(4.2)

the zeros of the nth degree Chebyshev polynomial of the second kind \(U_{n}\) (cf. (2.2)), and

$$\begin{aligned} \begin{array}{c} w_{\nu }^{*(2)} =\displaystyle \frac{2}{n+1}\left( 1-2\mathop {{\sum }'}_{k=1}^{[(n+1)/2]} \frac{\cos {2k\theta _{\nu }^{(2)}}}{4k^{2}-1}\right) ,\ \ \nu =1,2,\ldots ,n,\\ w_{0}^{*(2)}=w_{n+1}^{*(2)}= \left\{ \begin{array}{ll} \displaystyle \frac{1}{(n+1)^{2}},&{} \quad n\ \mathrm{even,}\\ \displaystyle \frac{1}{n(n+2)},&{}\quad n\ \mathrm{odd.} \end{array}\right. \end{array} \end{aligned}$$
(4.3)

The weights \(w_{\nu }^{*(2)},\ \nu =1,2,\ldots ,n\), are all positive, and formula (4.1) has precise degree of exactness \(d=n+1\) if n is even and \(d=n+2\) if n is odd, i.e., \(d=2[(n+1)/2]+1\) (cf. [9]).

In order to obtain an estimate for \(\Vert R_{n}^{*(2)}\Vert \), we need to determine the sign of \(R_{n}^{*(2)}(t^{k})\), \(k\ge 0\). Our findings are summarized in the following

Lemma 4.1

The error term of the Clenshaw–Curtis quadrature formula (4.1), when \(n\ge 2\), satisfies

$$\begin{aligned} R_{n}^{*(2)}(t^{k}) \left\{ \begin{array}{ll} \ge 0,&{}\quad 0\le k\le k_{n}^{*(2)},\\ \le 0,&{}\quad k>k_{n}^{*(2)}, \end{array}\right. \end{aligned}$$
(4.4)

where \(k_{n}^{*(2)}>2[(n+1)/2]+2\) is a constant.

For \(n=1\), there holds

$$\begin{aligned} R_{1}^{*(2)}(t^{k})\le 0,\ \ k\ge 0. \end{aligned}$$
(4.5)

Proof

First of all,

$$\begin{aligned} R_{n}^{*(2)}(t^{k})=0,\ \ k=0,1,\ldots , \left\{ \begin{array}{ll} n+1,&{} \quad n\ \mathrm{even,}\\ n+2,&{}\quad n\ \mathrm{odd.} \end{array}\right. \end{aligned}$$
(4.6)

Moreover, formula (4.1) is symmetric (cf. (3.13)), hence,

$$\begin{aligned} R_{n}^{*(2)}(t^{2l-1})=0,\ \ l\ge 1. \end{aligned}$$
(4.7)

Thus, we have to look at the sign of \(R_{n}^{*(2)}(t^{2l})\). Let \(n\ \mathrm{(even)}\ge 2\). Then, by (4.6) and (4.1),

$$\begin{aligned} R_{n}^{*(2)}(t^{n+2})=R_{n}^{*(2)}\left( (t^{2}-1) \frac{1}{2^{n}}U_{n}(t)\right) =\frac{1}{2^{n}} \int _{-1}^{1}(t^{2}-1)U_{n}(t)dt, \end{aligned}$$

from which, in view of

$$\begin{aligned} (t^{2}-1)U_{n}(t)=\frac{1}{4}U_{n+2}(t) -\frac{1}{2}U_{n}(t)+\frac{1}{4}U_{n-2}(t) \end{aligned}$$

(cf. (2.3) and (2.4) or [8, Equation (2.42)]), we get, by means of (2.10),

$$\begin{aligned}\begin{array}{rcl} R_{n}^{*(2)}(t^{n+2})&{}=&{}\displaystyle \frac{1}{2^{n}} \left( \frac{1}{4}\int _{-1}^{1}U_{n+2}(t)dt-\frac{1}{2} \int _{-1}^{1}U_{n}(t)dt +\frac{1}{4}\int _{-1}^{1}U_{n-2}(t)dt\right) \\ &{}=&{}\displaystyle \frac{1}{2^{n-2}(n-1)(n+1)(n+3)}. \end{array} \end{aligned}$$

In a like manner, we obtain, for \(n\mathrm{(odd)}\ge 3\),

$$\begin{aligned} R_{n}^{*(2)}(t^{n+3})=\frac{n+1}{2^{n-2}(n-2)n(n+2)(n+4)}. \end{aligned}$$

Hence, all together

$$\begin{aligned} R_{n}^{*(2)}(t^{2[(n+1)/2]+2})>0,\ \ n\ge 2. \end{aligned}$$
(4.8)

Furthermore, setting \(f(t)=t^{2l}\) in (4.1), we have

$$\begin{aligned} R_{n}^{*(2)}(t^{2l})=\frac{2}{2l+1}-w_{0}^{*(2)} -\sum _{\nu =1}^{n}w_{\nu }^{*(2)}(\tau _{\nu }^{(2)})^{2l} -w_{n+1}^{*(2)}, \end{aligned}$$
(4.9)

and, as \(\frac{2}{2l+1}\) is decreasing with l while \(w_{0}^{*(2)}=w_{n+1}^{*(2)}>0\) are independent of l,

$$\begin{aligned} R_{n}^{*(2)}(t^{2l})<0,\ \ l>k_{n}^{*(2)}/2, \end{aligned}$$
(4.10)

for some constant \(k_{n}^{*(2)}>2[(n+1)/2]+2\). Combining (4.6)–(4.8) and (4.10), we obtain (4.4).

For \(n=1\), the Clenshaw–Curtis formula (4.1) is Simpson’s rule on the interval \([-1,1]\),

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\frac{1}{3}f(1)+ \frac{4}{3}f(0)+\frac{1}{3}f(-1)+R_{1}^{*(2)}(f), \end{aligned}$$

with

$$\begin{aligned} R_{1}^{*(2)}(f)=-\frac{1}{90}f^{(4)}(\xi ),\ \ -1<\xi <1. \end{aligned}$$
(4.11)

From (4.11), it follows that

$$\begin{aligned} R_{1}^{*(2)}(t^{2l})\le 0,\ \ l\ge 2, \end{aligned}$$

which, combined with (4.6) and (4.7), yields (4.5). \(\square \)

From (4.9), in view of (4.2) and (4.3), we can compute the precise values of \(k_{n}^{*(2)}\). It suffices to find the highest value of \(l=l^{*(2)}\) such that \(R_{n}^{*(2)}(t^{2l^{*(2)}})>0\); then \(k_{n}^{*(2)}=2l^{*(2)}\). The values of \(k_{n}^{*(2)},\ 2\le n\le 40\), are given in Table 1.

Table 1 Values of \(k_{n}^{*(2)},\ 2\le n\le 40\)
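The scan described above is easy to mechanize. The following Python sketch (ours, for illustration; the helper names are not from the paper) builds the rule from (4.2)–(4.3), locates the sign change of \(R_{n}^{*(2)}(t^{2l})\) for \(n=4\) via (4.9), checks the value \(R_{4}^{*(2)}(t^{6})=1/420\) from the proof of Lemma 4.1, and verifies the bound (3.12) against the truncated series (1.5).

```python
import math
import numpy as np

def cc_rule(n):
    """Nodes/weights of the (n+2)-point Clenshaw-Curtis formula (4.1)-(4.3)."""
    w_end = 1 / (n + 1)**2 if n % 2 == 0 else 1 / (n * (n + 2))
    nodes, weights = [1.0], [w_end]
    m = (n + 1) // 2
    for v in range(1, n + 1):
        th = math.pi * v / (n + 1)
        s = sum(math.cos(2 * k * th) / (4 * k**2 - 1) for k in range(1, m + 1))
        if n % 2 == 1:              # primed sum: halve the last term for n odd
            s -= 0.5 * math.cos(2 * m * th) / (4 * m**2 - 1)
        nodes.append(math.cos(th))
        weights.append(2 / (n + 1) * (1 - 2 * s))
    nodes.append(-1.0)
    weights.append(w_end)
    return nodes, weights

def R(nodes, weights, k):
    """Error of the rule at the monomial t^k, as in (4.9)."""
    exact = 2 / (k + 1) if k % 2 == 0 else 0.0
    return exact - sum(w * t**k for w, t in zip(weights, nodes))

n, r = 4, 1.5
nodes, weights = cc_rule(n)

# k_n^{*(2)}: the largest (even) k with R_n^{*(2)}(t^k) > 0, cf. (4.4)
kN = max(2 * l for l in range(1, 200) if R(nodes, weights, 2 * l) > 0)

# value R_4^{*(2)}(t^6) = 1/(2^{n-2}(n-1)(n+1)(n+3)) from Lemma 4.1's proof
lemma_val = R(nodes, weights, 6)

# bound (3.12) versus the truncated series (1.5)
norm = sum(abs(R(nodes, weights, k)) / r**k for k in range(400))
x, gw = np.polynomial.legendre.leggauss(200)
piN = lambda t: np.prod([t - tau for tau in nodes], axis=0)
term1 = r * np.sum(gw * piN(x) / (r - x)) / piN(r)
bound = (term1 + 4 / (r**kN * (r**2 - 1))
         - 2 * r * (math.log((r + 1) / (r - 1))
                    - 2 * sum(1 / ((2 * k - 1) * r**(2 * k - 1))
                              for k in range(1, kN // 2 + 2))))
```

Scanning \(l\) in this way is how the entries of Table 1 can be reproduced.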

Based on the previous lemma, we can derive estimates for \(\Vert R_{n}^{*(2)}\Vert \).

Theorem 4.2

Consider the Clenshaw–Curtis quadrature formula (4.1). For \(n\ge 2\), we have

$$\begin{aligned} \Vert R_{n}^{*(2)}\Vert\le & {} 4r\displaystyle \sum _{k=1}^{[k_{n}^{*(2)}/2]+1} \frac{1}{(2k-1)r^{2k-1}}+\frac{4}{r^{k_{n}^{*(2)}}(r^{2}-1)} - r\ln {\left( \frac{r+1}{r-1}\right) }\nonumber \\&-\frac{4r}{U_{n}(r)} \mathop {{\sum }}_{k=1}^{[(n+1)/2]} \frac{U_{n-2k+1}(r)}{2k-1} - \left\{ \begin{array}{ll} \frac{2r^{2}}{(n+1)(r^{2}-1)U_{n}(r)},&{} \quad n\ \mathrm{even,}\\ \frac{2(n+1)r}{n(n+2)(r^{2}-1)U_{n}(r)},&{} \quad n\ \mathrm{odd.} \end{array}\right. \end{aligned}$$
(4.12)

On the other hand, for \(n=1\), we have

$$\begin{aligned} \Vert R_{1}^{*(2)}\Vert =\frac{2(3r^{2}-2)}{3(r^{2}-1)} -r\ln {\left( \frac{r+1}{r-1}\right) }. \end{aligned}$$
(4.13)

Proof

The Clenshaw–Curtis formula (4.1) is the case of the quadrature formula (1.1) with \(N=n+2,\ \tau _{1}=1,\ \tau _{\nu +1}=\tau _{\nu }^{(2)},\ \nu =1,2,\ldots ,n\), and \(\tau _{n+2}=-1\).

For \(n\ge 2\), in view of (4.4), we get, from Eq. (3.12) in Corollary 3.2(b),

$$\begin{aligned} \begin{array}{rcl} \Vert R_{n}^{*(2)}\Vert &{}\le &{}\displaystyle \frac{r}{(r^{2}-1)U_{n}(r)} \int _{-1}^{1}\frac{(t^{2}-1)U_{n}(t)}{r-t}dt+\frac{4}{r^{k_{n}^{*(2)}}(r^{2}-1)}\\ &{}&{}-2r\displaystyle \left\{ \ln {\left( \frac{r+1}{r-1}\right) } -2\sum _{k=1}^{[k_{n}^{*(2)}/2]+1}\frac{1}{(2k-1)r^{2k-1}}\right\} , \end{array} \end{aligned}$$

and, inserting (2.7), we obtain (4.12).

On the other hand, for \(n=1\), in view of (4.5) (cf. (\(1.6_i\)) with \(\varepsilon =-1\)), we have, from (\(1.7_i\)),

$$\begin{aligned} \Vert R_{1}^{*(2)}\Vert =-\frac{r}{(r^{2}-1)U_{1}(r)} \int _{-1}^{1}\frac{(t^{2}-1)U_{1}(t)}{r-t}dt, \end{aligned}$$

and, again by (2.7), we get (4.13). \(\square \)

Remark 4.1

The value of \(U_{m}(r)\) in (4.12) can be computed by either the three-term recurrence relation (2.3) and (2.4) or directly as

$$\begin{aligned} U_{m}(r)=\frac{1-\tau ^{2m+2}}{2\tau ^{m+1}\sqrt{r^{2}-1}},\ \ m\ge 0, \end{aligned}$$

where \(\tau =r-\sqrt{r^{2}-1}\) (cf. [8, Equation (1.52)]).
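A quick Python check (ours, for illustration) that the recurrence and the closed form of this remark agree; at \(r=1.25\) we have \(\tau =1/2\).

```python
import math

def U_rec(m, r):
    """U_m(r) by the three-term recurrence (2.3)-(2.4)."""
    p0, p1 = 1.0, 2.0 * r
    if m == 0:
        return p0
    for _ in range(m - 1):
        p0, p1 = p1, 2.0 * r * p1 - p0
    return p1

def U_closed(m, r):
    """Closed form of Remark 4.1, with tau = r - sqrt(r^2 - 1)."""
    tau = r - math.sqrt(r * r - 1)
    return (1 - tau**(2 * m + 2)) / (2 * tau**(m + 1) * math.sqrt(r * r - 1))
```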

4.2 The Basu formula

This is the quadrature formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=w_{0}^{*(1)}f(1)+ \sum _{\nu =1}^{n}w_{\nu }^{*(1)}f(\tau _{\nu }^{(1)}) +w_{n+1}^{*(1)}f(-1)+R_{n}^{*(1)}(f), \end{aligned}$$
(4.14)

with

$$\begin{aligned} \tau _{\nu }^{(1)}=\cos {\theta _{\nu }^{(1)}},\ \ \theta _{\nu }^{(1)}=\frac{2\nu -1}{2n}\pi , \ \ \nu =1,2,\ldots ,n, \end{aligned}$$
(4.15)

the zeros of the nth degree Chebyshev polynomial of the first kind \(T_{n}\) (cf. (2.1)), and

$$\begin{aligned} \begin{array}{c} \displaystyle w_{\nu }^{*(1)}= \left\{ \begin{array}{ll} \displaystyle \frac{2}{n}\left( 1-2\mathop {{\sum }}_{k=1}^{n/2} \frac{\cos {2k\theta _{\nu }^{(1)}}}{4k^{2}-1} +\frac{(-1)^{\nu -1}\cot {\theta _{\nu }^{(1)}}}{n^{2}-1}\right) , &{} n\ \mathrm{even,}\\ \displaystyle \frac{2}{n}\left( 1-2\mathop {{\sum }}_{k=1}^{(n-1)/2} \frac{\cos {2k\theta _{\nu }^{(1)}}}{4k^{2}-1}+\frac{(-1)^{\nu -1} \csc {\theta _{\nu }^{(1)}}}{n^{2}-4}\right) ,&{}n\ \mathrm{odd,} \end{array}\right. \ \ \nu =1,2,\ldots ,n,\\ \displaystyle w_{0}^{*(1)}=w_{n+1}^{*(1)}= \left\{ \begin{array}{ll} -\displaystyle \frac{1}{n^{2}-1},&{}\quad n\ \mathrm{even,}\\ -\displaystyle \frac{1}{n^{2}-4},&{}\quad n\ \mathrm{odd.} \end{array}\right. \end{array} \end{aligned}$$
(4.16)

The weights \(w_{\nu }^{*(1)},\ \nu =1,2,\ldots ,n\), are all positive, and formula (4.14) has precise degree of exactness \(d=n+1\) if n is even and \(d=n+2\) if n is odd, i.e., \(d=2[(n+1)/2]+1\) (cf. [9]).
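For concreteness, here is a minimal Python sketch of (4.14)–(4.16) (ours, for illustration; the name basu_rule is not from the paper). It reproduces, for instance, the \(n=3\) rule displayed in the proof of Lemma 4.3 below, and exhibits the negative endpoint weights, which make the constant \(M\) of (3.1) exceed 2.

```python
import math

def basu_rule(n):
    """Nodes and weights of the (n+2)-point Basu formula (4.14)-(4.16)."""
    w_end = -1 / (n**2 - 1) if n % 2 == 0 else -1 / (n**2 - 4)
    nodes, weights = [1.0], [w_end]
    for v in range(1, n + 1):
        th = (2 * v - 1) * math.pi / (2 * n)
        if n % 2 == 0:
            s = sum(math.cos(2 * k * th) / (4 * k**2 - 1)
                    for k in range(1, n // 2 + 1))
            c = (-1)**(v - 1) / (math.tan(th) * (n**2 - 1))   # cot term
        else:
            s = sum(math.cos(2 * k * th) / (4 * k**2 - 1)
                    for k in range(1, (n - 1) // 2 + 1))
            c = (-1)**(v - 1) / (math.sin(th) * (n**2 - 4))   # csc term
        nodes.append(math.cos(th))
        weights.append(2 / n * (1 - 2 * s + c))
    nodes.append(-1.0)
    weights.append(w_end)
    return nodes, weights

n = 3
nodes, weights = basu_rule(n)
M = sum(abs(w) for w in weights)       # the constant M of condition (3.1)
d = n + 1 if n % 2 == 0 else n + 2     # precise degree of exactness
errs = [abs((2 / (k + 1) if k % 2 == 0 else 0.0)
            - sum(w * t**k for w, t in zip(weights, nodes)))
        for k in range(d + 1)]
```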

As in the case of the Clenshaw–Curtis formula, we need to determine the sign of \(R_{n}^{*(1)}(t^{k})\), \(k\ge 0\).

Lemma 4.3

The error term of the Basu quadrature formula (4.14), when \(n\ge 4\), satisfies

$$\begin{aligned} R_{n}^{*(1)}(t^{k}) \left\{ \begin{array}{ll} \le 0,&{} \quad 0\le k\le k_{n}^{*(1)},\\ \ge 0,&{} \quad k>k_{n}^{*(1)}, \end{array}\right. \end{aligned}$$
(4.17)

where \(k_{n}^{*(1)}>2[(n+1)/2]+2\) is a constant.

For \(n=1,2\) or 3, there holds

$$\begin{aligned}&\displaystyle R_{1}^{*(1)}(t^{k})\le 0,\quad k\ge 0, \end{aligned}$$
(4.18)
$$\begin{aligned}&\displaystyle R_{2}^{*(1)}(t^{k})\ge 0,\quad k\ge 0, \end{aligned}$$
(4.19)
$$\begin{aligned}&\displaystyle R_{3}^{*(1)}(t^{k})\ge 0,\quad k\ge 0. \end{aligned}$$
(4.20)

Proof

First of all,

$$\begin{aligned} R_{n}^{*(1)}(t^{k})=0,\ \ k=0,1,\ldots , \left\{ \begin{array}{ll} n+1,&{} \quad n\ \mathrm{even,}\\ n+2,&{} \quad n\ \mathrm{odd,} \end{array}\right. \end{aligned}$$
(4.21)

and, as formula (4.14) is symmetric (cf. (3.13)),

$$\begin{aligned} R_{n}^{*(1)}(t^{2l-1})=0,\ \ l\ge 1. \end{aligned}$$
(4.22)

Moreover, proceeding exactly as in the case of the Clenshaw–Curtis formula, we find, for \(n\mathrm{(even)}\ge 4\),

$$\begin{aligned} R_{n}^{*(1)}(t^{n+2})=-\frac{3}{2^{n-3}(n^{2}-1)(n^{2}-9)}, \end{aligned}$$

and, for \(n\mathrm{(odd)}\ge 5\),

$$\begin{aligned} R_{n}^{*(1)}(t^{n+3})=-\frac{3}{2^{n-3}(n^{2}-4)(n^{2}-16)}, \end{aligned}$$

i.e.,

$$\begin{aligned} R_{n}^{*(1)}(t^{2[(n+1)/2]+2})<0,\ \ n\ge 4. \end{aligned}$$
(4.23)

Furthermore, from (4.14),

$$\begin{aligned} R_{n}^{*(1)}(t^{2l})=\frac{2}{2l+1}-w_{0}^{*(1)} -\sum _{\nu =1}^{n}w_{\nu }^{*(1)}(\tau _{\nu }^{(1)})^{2l} -w_{n+1}^{*(1)}, \end{aligned}$$
(4.24)

and, as \(w_{0}^{*(1)}=w_{n+1}^{*(1)}<0\) while \(|\tau _{\nu }^{(1)}|<1,\ \nu =1,2,\ldots ,n\), and therefore \(\sum _{\nu =1}^{n}w_{\nu }^{*(1)}(\tau _{\nu }^{(1)})^{2l}\) is decreasing with l,

$$\begin{aligned} R_{n}^{*(1)}(t^{2l})>0,\ \ l>k_{n}^{*(1)}/2, \end{aligned}$$
(4.25)

for some constant \(k_{n}^{*(1)}>2[(n+1)/2]+2\). Now, combining (4.21)–(4.23) and (4.25), we obtain (4.17).

For \(n=1\), the Basu formula (4.14) is Simpson’s rule, hence, (4.18) follows as in Lemma 4.1.

For \(n=2\), formula (4.14) has the form

$$\begin{aligned} \int _{-1}^{1}f(t)dt=-\frac{1}{3}f(1)+\frac{4}{3}f \left( \frac{\sqrt{2}}{2}\right) +\frac{4}{3}f \left( -\frac{\sqrt{2}}{2}\right) -\frac{1}{3}f(-1) +R_{2}^{*(1)}(f), \end{aligned}$$

and, after a simple computation,

$$\begin{aligned} R_{2}^{*(1)}(t^{2l})=2\left\{ \frac{1}{2l+1}+\frac{1}{3} \left( 1-\frac{1}{2^{l-2}}\right) \right\} >0,\ \ l\ge 2, \end{aligned}$$

which, combined with (4.21) and (4.22), yields (4.19).

Similarly, for \(n=3\), formula (4.14) is

$$\begin{aligned} \int _{-1}^{1}f(t)dt= & {} -\frac{1}{5}f(1)+\frac{32}{45}f \left( \frac{\sqrt{3}}{2}\right) +\frac{44}{45}f(0)\\&+\frac{32}{45}f\left( -\frac{\sqrt{3}}{2}\right) -\frac{1}{5}f(-1)+R_{3}^{*(1)}(f), \end{aligned}$$

hence,

$$\begin{aligned} R_{3}^{*(1)}(t^{2l})=2\left\{ \frac{1}{2l+1}+\frac{1}{5} \left( 1-2\left( \frac{3}{4}\right) ^{l-2}\right) \right\} >0,\ \ l\ge 3, \end{aligned}$$

and, as in the previous case, gives (4.20). \(\square \)

From (4.24), in view of (4.15) and (4.16), we can compute, in a like manner as in the Clenshaw–Curtis formula, the constants \(k_{n}^{*(1)}\); their values, for \(4\le n\le 40\), are given in Table 2.

Table 2 Values of \(k_{n}^{*(1)},\ 4\le n\le 40\)

We are now in a position to derive estimates for \(\Vert R_{n}^{*(1)}\Vert \).

Theorem 4.4

Consider the Basu quadrature formula (4.14). For \(n\ge 4\), we have

$$\begin{aligned} \Vert R_{n}^{*(1)}\Vert\le & {} r\displaystyle \ln { \left( \frac{r+1}{r-1}\right) }-4r\sum _{k=1}^{[k_{n}^{*(1)}/2]+1} \frac{1}{(2k-1)r^{2k-1}} + \frac{4}{r^{k_{n}^{*(1)}}(r^{2}-1)}\nonumber \\&+\frac{4r}{T_{n}(r)} \mathop {{\sum }'}_{k=1}^{[(n+1)/2]}\frac{T_{n-2k+1}(r)}{2k-1} - \left\{ \begin{array}{ll} \frac{2r^{2}}{(n^{2}-1)(r^{2}-1)T_{n}(r)},&{} \quad n\ \mathrm{even,}\\ \frac{2r}{(n^{2}-4)(r^{2}-1)T_{n}(r)},&{} \quad n\ \mathrm{odd.} \end{array}\right. \end{aligned}$$
(4.26)

On the other hand, we have, for \(n=1\),

$$\begin{aligned} \Vert R_{1}^{*(1)}\Vert =\frac{2(3r^{2}-2)}{3(r^{2}-1)} -r\ln {\left( \frac{r+1}{r-1}\right) }, \end{aligned}$$
(4.27)

for \(n=2\),

$$\begin{aligned} \Vert R_{2}^{*(1)}\Vert =r\ln {\left( \frac{r+1}{r-1}\right) } -\frac{2r^{2}(6r^{2}-7)}{3(r^{2}-1)(2r^{2}-1)}, \end{aligned}$$
(4.28)

and, for \(n=3\),

$$\begin{aligned} \Vert R_{3}^{*(1)}\Vert =r\ln {\left( \frac{r+1}{r-1}\right) } -\frac{2r(60r^{4}-85r^{2}+22)}{15(r^{2}-1)(4r^{3}-3r)}. \end{aligned}$$
(4.29)

Proof

The Basu formula (4.14) is the case of the quadrature formula (1.1) with \(N=n+2,\ \tau _{1}=1,\ \tau _{\nu +1}=\tau _{\nu }^{(1)},\ \nu =1,2,\ldots ,n\), and \(\tau _{n+2}=-1\).

For \(n\ge 4\), in view of (4.17), and in spite of \(w_{0}^{*(1)}\) and \(w_{n+1}^{*(1)}\) being negative, a minor modification of Corollary 3.4(b), together with (2.5), yields (4.26).

On the other hand, the case \(n=1\) was treated in Theorem 4.2; while, for \(n=2\) or 3, in view of (4.19) and (4.20) (cf. (\(1.6_i\)) with \(\varepsilon =1\)), we have, from (\(1.7_i\)),

$$\begin{aligned} \Vert R_{n}^{*(1)}\Vert =\frac{r}{(r^{2}-1)T_{n}(r)} \int _{-1}^{1}\frac{(t^{2}-1)T_{n}(t)}{r-t}dt, \end{aligned}$$

which, by (2.5), gives (4.28) and (4.29). \(\square \)

Remark 4.2

As for \(U_{m}(r)\) (cf. Remark 4.1), the value of \(T_{m}(r)\) in (4.26) can be computed by either the three-term recurrence relation (2.3) and (2.4) or directly as

$$\begin{aligned} T_{m}(r)=\frac{1+\tau ^{2m}}{2\tau ^{m}},\ \ m\ge 0 \end{aligned}$$

(cf. [8, Equation (1.49)]).

4.3 The Fejér formula of the first kind

This is the quadrature formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\sum _{\nu =1}^{n}w_{\nu }^{(1)} f(\tau _{\nu }^{(1)})+R_{n}^{(1)}(f), \end{aligned}$$
(4.30)

with the \(\tau _{\nu }^{(1)}\) given by (4.15), and

$$\begin{aligned} w_{\nu }^{(1)}=\frac{2}{n}\left( 1-2\mathop {{\sum }}_{k=1}^{[n/2]} \frac{\cos {2k\theta _{\nu }^{(1)}}}{4k^{2}-1}\right) ,\ \ \nu =1,2,\ldots ,n. \end{aligned}$$
(4.31)

The weights \(w_{\nu }^{(1)},\ \nu =1,2,\ldots ,n\), are all positive, and formula (4.30) has precise degree of exactness \(d=n-1\) if n is even and \(d=n\) if n is odd, i.e., \(d=2[(n+1)/2]-1\) (cf. [9]).
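The weights (4.31) are simple to implement; the following Python sketch (ours, for illustration) builds the n-point rule and lets one check the positivity of the weights and the stated degree of exactness.

```python
import math

def fejer1(n):
    """Nodes and weights of the n-point Fejer formula of the first kind,
    Eqs. (4.30)-(4.31), with the Chebyshev nodes (4.15)."""
    nodes, weights = [], []
    for v in range(1, n + 1):
        th = (2 * v - 1) * math.pi / (2 * n)
        s = sum(math.cos(2 * k * th) / (4 * k**2 - 1)
                for k in range(1, n // 2 + 1))
        nodes.append(math.cos(th))
        weights.append(2 / n * (1 - 2 * s))
    return nodes, weights

n = 6
nodes, weights = fejer1(n)
d = 2 * ((n + 1) // 2) - 1    # precise degree of exactness (here 5)
errs = [abs((2 / (k + 1) if k % 2 == 0 else 0.0)
            - sum(w * t**k for w, t in zip(weights, nodes)))
        for k in range(d + 1)]
```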

As in the previous two cases, we begin our analysis by examining the sign of \(R_{n}^{(1)}(t^{k}),\ k\ge 0\).

Lemma 4.5

The error term of the Fejér quadrature formula of the first kind (4.30), when \(n\ge 2\), satisfies

$$\begin{aligned} R_{n}^{(1)}(t^{k}) \left\{ \begin{array}{ll} \le 0,&{} \quad 0\le k\le k_{n}^{(1)},\\ \ge 0,&{} \quad k>k_{n}^{(1)}, \end{array}\right. \end{aligned}$$
(4.32)

where \(k_{n}^{(1)}>2[(n+1)/2]\) is a constant.

For \(n=1\), there holds

$$\begin{aligned} R_{1}^{(1)}(t^{k})\ge 0,\ \ k\ge 0. \end{aligned}$$
(4.33)

Proof

First of all,

$$\begin{aligned} R_{n}^{(1)}(t^{k})=0,\ \ k=0,1,\ldots , \left\{ \begin{array}{ll} n-1,&{} \quad n\ \mathrm{even,}\\ n,&{} \quad n\ \mathrm{odd,} \end{array}\right. \end{aligned}$$
(4.34)

and, as formula (4.30) is symmetric (cf. (3.13)),

$$\begin{aligned} R_{n}^{(1)}(t^{2l-1})=0,\ \ l\ge 1. \end{aligned}$$
(4.35)

Also, proceeding as in the case of the Clenshaw–Curtis formula, we find, for \(n\ \mathrm{(even)}\ge 2\),

$$\begin{aligned} R_{n}^{(1)}(t^{n})=-\frac{1}{2^{n-2}(n^{2}-1)}, \end{aligned}$$

and, for \(n\mathrm{(odd)}\ge 3\),

$$\begin{aligned} R_{n}^{(1)}(t^{n+1})=-\frac{1}{2^{n-2}(n^{2}-4)}, \end{aligned}$$

i.e.,

$$\begin{aligned} R_{n}^{(1)}(t^{2[(n+1)/2]})<0,\ \ n\ge 2. \end{aligned}$$
(4.36)

Furthermore, from (4.30),

$$\begin{aligned} R_{n}^{(1)}(t^{2l})=\frac{2}{2l+1}-\sum _{\nu =1}^{n} w_{\nu }^{(1)}(\tau _{\nu }^{(1)})^{2l} =\frac{1}{2l+1}\left\{ 2-\sum _{\nu =1}^{n}w_{\nu }^{(1)} (2l+1)(\tau _{\nu }^{(1)})^{2l}\right\} , \end{aligned}$$
(4.37)

and, since \(|\tau _{\nu }^{(1)}|<1,\ \nu =1,2,\ldots ,n\), so that

$$\begin{aligned} \lim _{l\rightarrow \infty }(2l+1)(\tau _{\nu }^{(1)})^{2l}=0,\ \ \nu =1,2,\ldots ,n, \end{aligned}$$

we get

$$\begin{aligned} R_{n}^{(1)}(t^{2l})>0,\ \ l>k_{n}^{(1)}/2, \end{aligned}$$
(4.38)

for some constant \(k_{n}^{(1)}>2[(n+1)/2]\). Now, combining (4.34)–(4.36) and (4.38), we obtain (4.32).

For \(n=1\), the Fejér formula of the first kind (4.30) is the 1-point Gauss formula for the Legendre weight function \(w(t)=1\) on \([-1,1]\),

$$\begin{aligned} \int _{-1}^{1}f(t)dt=2f(0)+R_{1}^{(1)}(f), \end{aligned}$$

with

$$\begin{aligned} R_{1}^{(1)}(f)=\frac{1}{3}f''(\xi ),\ \ -1<\xi <1, \end{aligned}$$

consequently,

$$\begin{aligned} R_{1}^{(1)}(t^{2l})\ge 0,\ \ l\ge 1, \end{aligned}$$

which, combined with (4.34) and (4.35), yields (4.33).\(\square \)

The constants \(k_{n}^{(1)}\) are computed from (4.37), in view of (4.15) and (4.31); their values, for \(2\le n\le 40\), are given in Table 3.

Table 3 Values of \(k_{n}^{(1)},\ 2\le n\le 40\)
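A small sketch of this computation (again assuming for (4.15) the Chebyshev nodes \(\cos ((2\nu -1)\pi /(2n))\)): it evaluates \(R_{n}^{(1)}(t^{k})\) directly from (4.30) and (4.31), confirms the closed-form values found in the proof of Lemma 4.5, and scans the even powers for the sign change in (4.32). The scan range is ad hoc, and we make no claim that the located index reproduces Table 3 exactly:

```python
import math

def fejer1_error(n, k):
    """R_n^{(1)}(t^k) for the Fejer formula of the first kind (4.30)."""
    quad = 0.0
    for nu in range(1, n + 1):
        theta = (2 * nu - 1) * math.pi / (2 * n)
        w = (2.0 / n) * (1.0 - 2.0 * sum(
            math.cos(2 * j * theta) / (4 * j * j - 1)
            for j in range(1, n // 2 + 1)))
        quad += w * math.cos(theta) ** k
    exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0
    return exact - quad

def first_positive_even_power(n, k_max=1000):
    """Smallest even k with R_n^{(1)}(t^k) > 0; k_n^{(1)} lies just below it."""
    k = 2 * ((n + 1) // 2)   # first even power with nonzero (negative) error, cf. (4.36)
    while fejer1_error(n, k) < 0.0:
        k += 2
        assert k <= k_max, "ad-hoc scan range exceeded"
    return k

# closed-form errors from the proof of Lemma 4.5
assert abs(fejer1_error(6, 6) + 1.0 / (2 ** 4 * (6 ** 2 - 1))) < 1e-12
assert abs(fejer1_error(3, 4) + 1.0 / (2 ** 1 * (3 ** 2 - 4))) < 1e-12
# the sign change occurs strictly beyond 2[(n+1)/2], as Lemma 4.5 asserts
for n in range(2, 9):
    assert first_positive_even_power(n) > 2 * ((n + 1) // 2)
```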

We can now derive the estimates for \(\Vert R_{n}^{(1)}\Vert \).

Theorem 4.6

Consider the Fejér quadrature formula of the first kind (4.30). For \(n\ge 2\), we have

$$\begin{aligned} \Vert R_{n}^{(1)}\Vert\le & {} r\displaystyle \ln {\left( \frac{r+1}{r-1}\right) }-4r \sum _{k=1}^{[k_{n}^{(1)}/2]+1}\frac{1}{(2k-1)r^{2k-1}} +\frac{4}{r^{k_{n}^{(1)}}(r^{2}-1)} \nonumber \\&+ \frac{4r}{T_{n}(r)} \mathop {{\sum }'}_{k=1}^{[(n+1)/2]}\frac{T_{n-2k+1}(r)}{2k-1}. \end{aligned}$$
(4.39)

On the other hand, for \(n=1\), we have

$$\begin{aligned} \Vert R_{1}^{(1)}\Vert =r\ln {\left( \frac{r+1}{r-1}\right) }-2. \end{aligned}$$
(4.40)

Proof

The Fejér formula of the first kind (4.30) is the case of the quadrature formula (1.1) with \(N=n\) and \(\tau _{\nu }=\tau _{\nu }^{(1)},\ \nu =1,2,\ldots ,n\).

For \(n\ge 2\), in view of (4.32), Corollary 3.4(b), together with [10, Proposition 2.2, Equation (2.8)], yields (4.39).

On the other hand, for \(n=1\), in view of (4.33) (cf. (\(1.6_i\)) with \(\varepsilon =1\)), we have, from (\(1.7_i\)),

$$\begin{aligned} \Vert R_{1}^{(1)}\Vert =\frac{r}{T_{1}(r)}\int _{-1}^{1}\frac{T_{1}(t)}{r-t}dt, \end{aligned}$$

which, again by [10, Proposition 2.2, Equation (2.8)], gives (4.40). \(\square \)
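Equation (4.40) is also easy to confirm numerically: since \(T_{1}(t)=t\) and \(T_{1}(r)=r\), the integral in the proof reduces to \(\int _{-1}^{1}t/(r-t)\,dt\), which a composite Simpson rule reproduces to high accuracy. This is only a sanity check of the closed form, not part of the argument:

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

r = 1.5
lhs = simpson(lambda t: t / (r - t), -1.0, 1.0)     # (r/T_1(r)) * integral
rhs = r * math.log((r + 1.0) / (r - 1.0)) - 2.0     # closed form (4.40)
assert abs(lhs - rhs) < 1e-8
```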

5 A numerical example

We choose the same example as in [10, Section 4], for the reader to be able to make comparisons.

We want to approximate the integral

$$\begin{aligned} \int _{-1}^{1}e^{\omega t}dt=\frac{e^{\omega }-e^{-\omega }}{\omega },\ \ \omega >0, \end{aligned}$$
(5.1)

by using either the Clenshaw–Curtis formula (4.1) or the Fejér formula of the first kind (4.30).

The function \(f(z)=e^{\omega z}=\sum _{k=0}^{\infty }\frac{\omega ^{k}z^{k}}{k!}\) is entire and, in view of (1.3) and (4.7), we have

$$\begin{aligned} |f|_{r}^{*(2)}=\left\{ \begin{array}{ll} \displaystyle \frac{\omega ^{2([(n+1)/2]+1)}r^{2([(n+1)/2]+1)}}{(2([(n+1)/2]+1))!}, &{}1<r\le \displaystyle \frac{\sqrt{(2[(n+1)/2]+3)(2[(n+1)/2]+4)}}{\omega },\\ \displaystyle \frac{\omega ^{2([(n+1)/2]+k+1)}r^{2([(n+1)/2]+k+1)}}{(2([(n+1)/2]+k+1))!}, &{}\begin{array}{l}\displaystyle \frac{\sqrt{(2[(n+1)/2]+2k+1)(2[(n+1)/2]+2k+2)}}{\omega }<r\\ \le \displaystyle \frac{\sqrt{(2[(n+1)/2]+2k+3)(2[(n+1)/2]+2k+4)}}{\omega },\\ \ \ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad k=1,2,\ldots .\end{array} \end{array}\right. \nonumber \\ \end{aligned}$$
(5.2)

The above formula holds as it stands if \(\sqrt{(2[(n+1)/2]+3)(2[(n+1)/2]+4)}>\omega \); otherwise, the formula for \(|f|_{r}^{*(2)}\) starts at the branch of (5.2) for which \(\sqrt{(2[(n+1)/2]+2k+3)(2[(n+1)/2]+2k+4)}>\omega \). Similarly,

$$\begin{aligned} |f|_{r}^{(1)}=\left\{ \begin{array}{ll} \displaystyle \frac{\omega ^{2[(n+1)/2]}r^{2[(n+1)/2]}}{(2[(n+1)/2])!}, &{}1<r\le \displaystyle \frac{\sqrt{(2[(n+1)/2]+1)(2[(n+1)/2]+2)}}{\omega },\\ \displaystyle \frac{\omega ^{2([(n+1)/2]+k)}r^{2([(n+1)/2]+k)}}{(2([(n+1)/2]+k))!}, &{}\begin{array}{l}\displaystyle \frac{\sqrt{(2[(n+1)/2]+2k-1)(2[(n+1)/2]+2k)}}{\omega }<r\\ \le \displaystyle \frac{\sqrt{(2[(n+1)/2]+2k+1)(2[(n+1)/2]+2k+2)}}{\omega },\\ \ \ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \! k=1,2,\ldots ,\end{array} \end{array}\right. \end{aligned}$$

under restrictions similar to those in (5.2). Hence, in both cases, \(f\in X_{\infty }\) (cf. (1.4)). Moreover,

$$\begin{aligned} \max _{|z|=r}|f(z)|=e^{\omega r}. \end{aligned}$$

Therefore, by (1.8) and (1.9),

$$\begin{aligned}&\displaystyle |R_{n}^{*(2)}(f)|\le \inf _{1<r<\infty }\left( \Vert R_{n}^{*(2)} \Vert |f|_{r}^{*(2)}\right) , \end{aligned}$$
(5.3)
$$\begin{aligned}&\displaystyle |R_{n}^{(1)}(f)|\le \inf _{1<r<\infty } \left( \Vert R_{n}^{(1)}\Vert |f|_{r}^{(1)}\right) , \end{aligned}$$
(5.4)

and

$$\begin{aligned}&\displaystyle |R_{n}^{*(2)}(f)|\le \inf _{1<r<\infty }\left( \Vert R_{n}^{*(2)}\Vert e^{\omega r}\right) , \end{aligned}$$
(5.5)
$$\begin{aligned}&\displaystyle |R_{n}^{(1)}(f)|\le \inf _{1<r<\infty }\left( \Vert R_{n}^{(1)}\Vert e^{\omega r}\right) , \end{aligned}$$
(5.6)

with \(\Vert R_{n}^{*(2)}\Vert \) and \(\Vert R_{n}^{(1)}\Vert \) estimated or given by (4.12) and (4.13) and (4.39) and (4.40), respectively.
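As an illustration of how such bounds are evaluated in practice, the following sketch computes (5.4) and (5.6) for the Fejér formula with \(n=1\) (the 1-point Gauss formula) and \(\omega =0.5\), using \(\Vert R_{1}^{(1)}\Vert \) from (4.40). Here \(|f|_{r}^{(1)}\) is computed directly from the definition (1.3), the supremum running over the even powers \(k\ge 2\) (cf. (4.33)–(4.35)), and the infima are approximated by a crude grid search:

```python
import math

def norm_R1(r):
    """||R_1^{(1)}||, the closed form (4.40)."""
    return r * math.log((r + 1.0) / (r - 1.0)) - 2.0

def seminorm_f(r, omega, k_max=200):
    """|f|_r^{(1)} for f(z) = e^{omega z}, n = 1: sup of omega^k r^k / k!
    over the even powers k >= 2 not annihilated by R_1^{(1)}."""
    x = omega * r
    term, best = 1.0, 0.0
    for k in range(1, k_max + 1):
        term *= x / k            # term = x^k / k!, built incrementally
        if k % 2 == 0:
            best = max(best, term)
    return best

omega = 0.5
actual = abs((math.exp(omega) - math.exp(-omega)) / omega - 2.0)  # |R_1(e^{wt})|
grid = [1.0 + 0.01 * i for i in range(1, 3000)]                   # crude search
bound_54 = min(norm_R1(r) * seminorm_f(r, omega) for r in grid)   # bound (5.4)
bound_56 = min(norm_R1(r) * math.exp(omega * r) for r in grid)    # bound (5.6)
assert actual <= bound_54 <= bound_56
```

The run illustrates the discussion below: (5.4) tracks the actual error closely, while (5.6), which replaces \(|f|_{r}^{(1)}\) by \(\max _{|z|=r}|f(z)|\), is necessarily weaker.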

It is now worthwhile to see how (5.3) and (5.5) compare with already existing error bounds.

First of all, estimates for \(\Vert R_{n}^{*(2)}\Vert \) using Hämmerlin’s method (cf. Sect. 1) have been obtained by Akrivis in his Ph.D. thesis, either by approximation-theoretic techniques or by expansions in Chebyshev polynomials (cf. [1, Sections 1.4–1.5 and 1.7, Equations (1.7.9), (1.4.35) and (1.5.58)–(1.5.59)]). From the three estimates he derived, we choose the one that gives the best results in our case; setting \(m=[(n+1)/2]+2\) in Equation (1.7.9) in [1] and using Equation (1.5.58), we have

$$\begin{aligned} \Vert R_{n}^{*(2)}\Vert\le & {} 2 \left\{ d_{m} \left( 1\!-\!\frac{2m(2m-5)}{(2m-3)(2m+1)}\tau ^{2}\right) \!+\! \frac{4(m+1)^{2}}{4(m+1)^{2}\!-\!1}\frac{\tau ^{3}}{\sqrt{r^{2}-1}}\right\} \frac{r\tau ^{2m-2}}{\sqrt{r^{2}-1}}\nonumber \\&+ \frac{|R_{m}^{*(2)}(t^{2m-2})|-|R_{m}^{L} (t^{2m-2})|}{r^{2m-2}}, \end{aligned}$$
(5.7)

where

$$\begin{aligned} d_{m}=\frac{2^{4m-4}m(m-1)^{3}((m-2)!)^{4}}{(2m-1)((2m-2)!)^{2}}, \end{aligned}$$
(5.8)

\(\tau =r-\sqrt{r^{2}-1}\) and \(R_{m}^{L}\) is the error term of the m-point Gauss-Lobatto quadrature formula for the Legendre weight function \(w(t)=1\) on the interval \([-1,1]\) (with the end nodes \(\pm 1\) included in the m points). Then, we apply (5.3) and (5.5) with \(\Vert R_{n}^{*(2)}\Vert \) estimated by (5.7) and (5.8).

Also, a few years before Akrivis, a number of error bounds had been derived for functions analytic on circular or elliptic contours.

If f is analytic on the disc \(C_{r}\) (cf. Sect. 1), Jayarajan obtained bounds for the Chebyshev–Fourier coefficients of f, leading, in our case, to the estimate

$$\begin{aligned} |R_{n}^{*(2)}(f)|\le \inf _{1<r<\infty }\left\{ \frac{64(n+1)}{(n-2)n(n+2)(n+4)}\frac{r\tau ^{n+3}}{\sqrt{r^{2}-1}}e^{\omega r}\right\} ,\ \ n\ \mathrm{odd} \end{aligned}$$
(5.9)

(cf. [6, Equation (26)]).

If, on the other hand, f is analytic on the ellipse \(\mathcal{E}_{\rho }=\{z\in \mathbb {C}:z=\frac{1}{2}(u+u^{-1}),\ u=\rho e^{i\theta },\ 0\le \theta \le 2\pi \}\) with foci at \(z=\pm 1\) and sum of semiaxes \(\rho ,\ \rho >1\), error bounds for \(R_{n}^{*(2)}\) have been given by Chawla (cf. [2]), Kambo (cf. [7]) and Riess and Johnson (cf. [12]). We choose the latter, which is based on expansions in Chebyshev polynomials; together with the estimate of Kambo, it gives the best results, and it is also very easy to apply. It has the form

$$\begin{aligned} |R_{n}^{*(2)}(f)|\le & {} \inf _{1<\rho <\infty } \left( 2\left\{ \frac{16(n+1)}{(n-2)n(n+2)(n+4)}+ \frac{1}{\rho ^{2}}\right. \right. \nonumber \\&+\left. \left. O\left( \frac{1}{\rho ^{n+1}}\right) \right\} \frac{1}{\rho ^{n+3}}\max _{z\in \mathcal{E}_{\rho }}|f(z)|\right) ,\ \ n\ \mathrm{odd}, \end{aligned}$$
(5.10)

and, as in our case,

$$\begin{aligned} \max _{z\in \mathcal{E}_{\rho }}|f(z)|=e^{\frac{1}{2}\omega (\rho +\rho ^{-1})}, \end{aligned}$$
(5.11)

we get

$$\begin{aligned} |R_{n}^{*(2)}(f)|\le & {} \inf _{1<\rho <\infty }\left( 2\left\{ \frac{16(n+1)}{(n-2)n(n+2)(n+4)}+\frac{1}{\rho ^{2}}\right. \right. \nonumber \\&+ \left. \left. O\left( \frac{1}{\rho ^{n+1}}\right) \right\} \frac{1}{\rho ^{n+3}}e^{\frac{1}{2}\omega (\rho +\rho ^{-1})}\right) ,\ \ n\ \mathrm{odd} \end{aligned}$$
(5.12)

(cf. [12, Equation (11)]).

Table 4 Error bounds and actual error in approximating the integral (5.1) using formula (4.1)

Our results are summarized in Tables 4, 5 and 6 for formula (4.1) and in Table 7 for formula (4.30). In particular, in Table 4, we present the results based on estimates (5.3) and (5.5) with \(\Vert R_{n}^{*(2)}\Vert \) estimated by (4.12); in Table 5, we give the corresponding estimates but with \(\Vert R_{n}^{*(2)}\Vert \) estimated by (5.7) and (5.8); finally, in Table 6, we present the results based on estimates (5.9) and (5.12). (Numbers in parentheses indicate decimal exponents.) All computations were performed on a SUN Ultra 5 computer in quad precision (machine precision \(1.93\cdot 10^{-34}\)). The values of r and \(\rho \) at which the infimum in each of the bounds (5.3)–(5.6), (5.9) and (5.12) is attained are given in the columns headed \(r_{opt}\) and \(\rho _{opt}\), respectively, each placed immediately before the column of the corresponding bound. As n and r increase, \(\Vert R_{n}^{*(2)}\Vert \) and \(\Vert R_{n}^{(1)}\Vert \) decrease and, close to machine precision, their computed values can even become negative. This actually happens for \(\Vert R_{n}^{*(2)}\Vert \) when \(\omega =0.5\) and \(n\ge 13\), \(\omega =1.0\) and \(n\ge 15\), and \(\omega =2.0\) and \(n\ge 19\); and for \(\Vert R_{n}^{(1)}\Vert \) when \(\omega =0.5\) and \(n\ge 15\), and \(\omega =1.0\) and \(n\ge 17\). The reason is that, in all these cases, the infima in the bounds (5.3)–(5.6) are attained at rather high values of \(r>15\).

Table 5 Error bounds and actual error in approximating the integral (5.1) using formula (4.1)
Table 6 Error bounds and actual error in approximating the integral (5.1) using formula (4.1)
Table 7 Error bounds and actual error in approximating the integral (5.1) using formula (4.30)

Bounds (5.3) and (5.4) provide an excellent estimate of the actual error, and they are always better than bounds (5.5) and (5.6), respectively. This is to be expected, as (5.5) and (5.6) can be derived from (5.3) and (5.4) if \(|f|_{r}^{*(2)}\) and \(|f|_{r}^{(1)}\) are estimated by \(\max _{|z|=r}|f(z)|\) (cf. Sect. 1).

Furthermore, bounds (5.3) and (5.5) with \(\Vert R_{n}^{*(2)}\Vert \) estimated by (4.12) are better than the corresponding bounds with \(\Vert R_{n}^{*(2)}\Vert \) estimated by (5.7) and (5.8), particularly as \(\omega \) and n increase. Estimate (5.7) was derived by approximating \(\Vert R_{n}^{*(2)}\Vert \) by \(\Vert R_{m}^{L}\Vert \) and then adding a correction factor, while (4.12) was obtained by a tailor-made process, which apparently explains the difference between the two estimates.

Similarly, bounds (5.3) and (5.5) are better than (5.9) and (5.12), particularly as \(\omega \) gets large, and this in spite of the fact that (5.12) is based on elliptical contours, which have the advantage of shrinking around the interval \([-1,1]\) as \(\rho \rightarrow 1\). However, (5.10) is reasonably accurate only for large \(\rho \) (when the ellipse looks more and more like a circle), and this cannot happen when \(\omega \) is large, as then \(\max _{z\in \mathcal{E}_{\rho }}|f(z)|\) becomes exceedingly high (cf. (5.11)); this is apparently the reason for (5.3) and (5.5) outperforming (5.12).