1 Introduction

One of the most widely used quadrature formulae is the Clenshaw-Curtis formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=w_{0}^{*}f(1)+\sum _{\nu =1}^{n}w_{\nu }^{*}f(\tau _{\nu })+w_{n+1}^{*}f(-1)+R_{n}^{*}(f), \end{aligned}$$
(1.1)

where

$$\begin{aligned} \tau _{\nu }=\cos {\theta _{\nu }}, \quad \theta _{\nu }=\frac{\nu }{n+1}\pi , \quad \nu =1,2,\ldots ,n, \end{aligned}$$
(1.2)

are the zeros of the nth degree Chebyshev polynomial \(U_{n}\) of the second kind. Formula (1.1) has all weights positive and expressed by explicit formulae, while its precise degree of exactness is \(d^{*}=n+1\) if n is even and \(d^{*}=n+2\) if n is odd, i.e., \(R_{n}^{*}(f)=0\) for all \(f\in \mathbb {P}_{d^{*}}\) (the space of polynomials with real coefficients and degree at most \(d^{*}\)). Moreover, in view of its performance in practice, the Clenshaw-Curtis formula is sometimes compared favorably to the well-known Gauss formula (see [17]).
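As a quick numerical sanity check (our sketch, not part of the paper; `U` implements the standard three-term recurrence for the second-kind Chebyshev polynomials, cf. (2.6) below), the nodes (1.2) are indeed the zeros of \(U_{n}\):

```python
import numpy as np

def U(n, t):
    """Chebyshev polynomial of the second kind, by the standard recurrence."""
    u_prev, u = np.ones_like(t), 2.0 * t
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * t * u - u_prev
    return u

n = 7
theta = np.arange(1, n + 1) * np.pi / (n + 1)   # angles of eq. (1.2)
tau = np.cos(theta)
assert np.allclose(U(n, tau), 0.0, atol=1e-10)  # the tau_nu are the zeros of U_n
```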

All of the above led Hasegawa and Sugiura to try to improve the error behavior of the Clenshaw-Curtis formula, by adding to (1.1) the nodes


$$\begin{aligned} \pm \tau _{c}=\pm \cos {\theta _{c}}, \quad \theta _{c}=\frac{\pi }{2(n+1)}, \end{aligned}$$
(1.3)

which lie in the intervals \((\tau _{1},1)\) and \((-1,\tau _{n})\), respectively. That way, they obtained the so-called corrected Clenshaw-Curtis formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\bar{w}_{0}^{*}f(1)+\bar{w}_{c}^{*}f(\tau _{c})+\sum _{\nu =1}^{n}\bar{w}_{\nu }^{*}f(\tau _{\nu })+\bar{w}_{c}^{*}f(-\tau _{c})+\bar{w}_{n+1}^{*}f(-1)+\bar{R}_{n}^{*}(f). \end{aligned}$$
(1.4)

The new formula has all weights, except for \(\bar{w}_{c}^{*}\) when \(n\ge 2\), positive and explicitly expressed, while its degree of exactness is \(n+3\) if n is even and \(n+4\) if n is odd. In addition, the convergence rate of formula (1.4) is better than that of any formula in the Clenshaw-Curtis family (see [9], in particular, Theorem 2 and Remark 1; a detailed description of all interpolatory quadrature formulae with Chebyshev abscissae of any of the four kinds is given in [13]).

The corresponding open-type Clenshaw-Curtis formula is the so-called Fejér formula of the second kind or Filippi formula

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\sum _{\nu =1}^{n}w_{\nu }f(\tau _{\nu })+R_{n}(f), \end{aligned}$$
(1.5)

having all weights positive and explicitly expressed and degree of exactness \(n-1\) if n is even and n if n is odd. This formula is known to share common properties with the Clenshaw-Curtis formula. An important such property is that formula (1.5) forms a nested set of quadrature formulae, i.e., the nodes of the n-point formula are among those of the \((2n+1)\)-point formula, and the same property is enjoyed by formulae (1.1) and (1.4). This makes all these formulae appropriate for adaptive or cubature integration schemes.
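The nesting is immediate from (1.2): the angles \(\nu \pi /(n+1)\) of the n-point rule coincide with the even-indexed angles \(2\nu \pi /(2(n+1))\) of the \((2n+1)\)-point rule. A minimal numerical sketch (ours, not the paper's):

```python
import numpy as np

def fejer2_nodes(n):
    # Nodes (1.2) of the n-point Fejer formula of the second kind
    return np.cos(np.arange(1, n + 1) * np.pi / (n + 1))

n = 5
coarse = fejer2_nodes(n)
fine = fejer2_nodes(2 * n + 1)
# the n coarse nodes reappear as every second node of the (2n+1)-point rule
assert np.allclose(coarse, fine[1::2])
```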

Motivated by the work of Hasegawa and Sugiura in [9], we introduce a corrected Fejér formula of the second kind, by adding to formula (1.5) the nodes \(\pm \tau _{c}\) in (1.3), thus obtaining

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\bar{w}_{c}^{(+)}f(\tau _{c})+\sum _{\nu =1}^{n}\bar{w}_{\nu }f(\tau _{\nu })+\bar{w}_{c}^{(-)}f(-\tau _{c})+\bar{R}_{n}(f). \end{aligned}$$
(1.6)

The new formula is shown to have all weights positive and given by explicit formulae, and precise degree of exactness \(n+1\) if n is even and \(n+2\) if n is odd. This, together with the convergence of the formula for Riemann integrable functions on \([-1,1]\), is the subject of Sect. 2. Section 3 is devoted to the error term of the formula. First, we obtain optimal error bounds by Peano kernel methods, thus concluding that formula (1.6) has essentially the same rate of convergence as the Clenshaw-Curtis formula. Then, using Hilbert space techniques, we compute the norm of the error functional, which leads to error bounds for analytic functions when \(1\le n\le 40\). In Sect. 4, we prove the convergence of formula (1.6) for functions having a monotonic singularity at one or both endpoints of \([-1,1]\). This property, also satisfied by the Fejér formula of the second kind (cf. [5]), is an advantage of formula (1.6) over the Clenshaw-Curtis formula and its corrected version (1.4), neither of which can even be applied to functions with singularities at \(\pm 1\). In addition, as expected, formula (1.6) retains the nested quadrature formulae property satisfied by formulae (1.1), (1.4) and (1.5). All this, together with its rate of convergence, makes formula (1.6) an alternative to the Clenshaw-Curtis formula. The paper concludes in Sect. 5, with some numerical examples.

2 The quadrature formula

We begin by recalling explicit formulae for the weights of formula (1.5),

$$\begin{aligned} \begin{array}{rcl} w_{\nu }&{}=&{}\displaystyle \frac{2}{n+1}\left\{ 1-2\sum \limits _{k=1}^{[(n-1)/2]}\frac{\cos {2k\theta _{\nu }}}{4k^{2}-1}-\frac{\cos 2[(n+1)/2]\theta _{\nu }}{2[(n+1)/2]-1}\right\} \\ &{}=&{}\displaystyle \frac{2}{n\!+\!1}\left\{ 1-2\sum \limits _{k=1}^{[(n+1)/2]}\frac{\cos {2k\theta _{\nu }}}{4k^{2}-1}-\frac{\cos 2[(n+1)/2]\theta _{\nu }}{2[(n\!+\!1)/2]+1}\right\} , \ \ \nu =1,2,\ldots , n, \end{array} \end{aligned}$$
(2.1)

or

$$\begin{aligned} w_{\nu }=\frac{4\sin \theta _{\nu }}{n+1}\sum _{k=1}^{[(n+1)/2]}\frac{\sin (2k-1)\theta _{\nu }}{2k-1},\ \ \nu =1,2,\ldots , n, \end{aligned}$$

where \([\,\cdot \,]\) denotes the integer part of a real number (cf. [13, Eqs. (2.8)–(2.10) with \(i=2\)]).
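The sine form of the weights given above is convenient for computation. The following sketch (an illustration of ours, not part of the paper) builds the rule and verifies the stated positivity, the weight sum \(\sum _{\nu }w_{\nu }=2\), and the precise degree of exactness \(n-1\) (n even) or n (n odd):

```python
import numpy as np

def fejer2(n):
    """Nodes and weights of the n-point Fejer (second kind) rule,
    via the sine form of the weights."""
    theta = np.arange(1, n + 1) * np.pi / (n + 1)
    k = np.arange(1, (n + 1) // 2 + 1)
    w = (4 * np.sin(theta) / (n + 1)) * np.sum(
        np.sin((2 * k[:, None] - 1) * theta) / (2 * k[:, None] - 1), axis=0)
    return np.cos(theta), w

for n in (4, 5):
    tau, w = fejer2(n)
    assert np.all(w > 0) and abs(w.sum() - 2.0) < 1e-12
    d = n - 1 if n % 2 == 0 else n            # stated degree of exactness
    for p in range(d + 1):
        exact = 2.0 / (p + 1) if p % 2 == 0 else 0.0
        assert abs(w @ tau**p - exact) < 1e-12
    # exactness fails at the next even power, so the degree is precise
    assert abs(w @ tau**(d + 1) - 2.0 / (d + 2)) > 1e-6
```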

We now turn to the study of formula (1.6). Let \(I(f)=\int _{-1}^{1}f(t)dt\) and \(\bar{Q}_{n}(f)=\bar{w}_{c}^{(+)}f(\tau _{c})+\sum _{\nu =1}^{n}\bar{w}_{\nu }f(\tau _{\nu })+\bar{w}_{c}^{(-)}f(-\tau _{c})\).

Theorem 2.1

Consider the quadrature formula (1.6).

  (a)

    The weights \(\bar{w}_{\nu }\) and \(\bar{w}_{c}^{(+)},\ \bar{w}_{c}^{(-)}\) are given by

    $$\begin{aligned} \bar{w}_{\nu }=w_{\nu }+\frac{{2\sin ^{2}\theta _{\nu }\cos 2[(n\!+\!1)/2]\theta _{\nu }}}{{(2[n/2]+1)(2[(n+1)/2]+1)\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c}})},\ \ \nu =1,2,\ldots , n, \end{aligned}$$
    (2.2)
    $$\begin{aligned} \bar{w}_{c}^{(+)}=\bar{w}_{c}^{(-)}=\left\{ \begin{array}{ll} \displaystyle \frac{\sin \theta _{c}}{n+1},&{}n\ \mathrm{even},\\ \displaystyle \frac{(n+1)\tan \theta _{c}}{n(n+2)}, &{}n\ \mathrm{odd}. \end{array}\right. \end{aligned}$$
    (2.3)

    In addition, the \(\bar{w}_{\nu },\ \nu =1,2,\ldots ,n\), are all positive.

  (b)

    The quadrature formula has precise degree of exactness \(\bar{d}=n+1\) if n is even and \(\bar{d}=n+2\) if n is odd.

  (c)

    There holds \(\lim _{n\rightarrow \infty }\bar{Q}_{n}(f)=I(f)\) for all functions f that are Riemann integrable on \([-1,1]\).

Proof

(a) As formula (1.6) is exact for polynomials of degree \(n+1\), setting \(f(t)=(t^{2}-\tau _{c}^{2})U_{n}(t)/(t-\tau _{\nu })\), we have

$$\begin{aligned} \bar{w}_{\nu }= & {} \displaystyle \frac{1}{(\tau _{\nu }^{2}-\tau _{c}^{2})\displaystyle U'_{n}(\tau _{\nu })} \int _{-1}^{1}\frac{(t^{2}-\tau _{c}^{2})U_{n}(t)}{t-\tau _{\nu }}dt\\= & {} \displaystyle \frac{1}{(\tau _{\nu }^{2}-\tau _{c}^{2})\displaystyle U'_{n}(\tau _{\nu })}\int _{-1}^{1}\frac{(t^{2} -\tau _{\nu }^{2}+\tau _{\nu }^{2}-\tau _{c}^{2})U_{n}(t)}{t-\tau _{\nu }}dt\\= & {} \displaystyle \frac{1}{\displaystyle U'_{n}(\tau _{\nu })}\int _{-1}^{1}\frac{U_{n}(t)}{t-\tau _{\nu }}dt +\frac{\int _{-1}^{1}(t+\tau _{\nu })U_{n}(t)dt}{(\tau _{\nu }^{2}-\tau _{c}^{2})\displaystyle U'_{n}(\tau _{\nu })}, \end{aligned}$$

that is,

$$\begin{aligned} \bar{w}_{\nu }= w_{\nu }+\frac{\int _{-1}^{1}(t+\tau _{\nu })U_{n}(t)dt}{(\tau _{\nu }^{2}-\tau _{c}^{2})\displaystyle U'_{n}(\tau _{\nu })}, \quad \nu =1,2,\ldots ,n. \end{aligned}$$
(2.4)

The nth degree Chebyshev polynomial of the second kind \(U_{n}\) can be represented by

$$\begin{aligned} U_{n}(\cos \theta )=\frac{\sin (n+1)\theta }{\sin \theta }, \end{aligned}$$
(2.5)

and satisfies the three-term recurrence relation

$$\begin{aligned} \begin{array}{c} U_{k+1}(t)=2tU_{k}(t)-U_{k-1}(t), \quad k=1,2,\ldots ,\\ U_{0}(t)=1,\ U_{1}(t)=2t. \end{array} \end{aligned}$$
(2.6)

Then, by means of (2.6) and

$$\begin{aligned} \int _{-1}^{1}U_{m}(t)dt=\left\{ \begin{array}{ll} \displaystyle \frac{2}{m+1}, &{}m\ \mathrm{even},\\ 0,&{} m\ \mathrm{odd} \end{array}\right. \end{aligned}$$
(2.7)

(cf. [11, Eq. (2.46)]), we compute

$$\begin{aligned} \int _{-1}^{1}(t+\tau _{\nu })U_{n}(t)dt=\left\{ \begin{array}{ll} \displaystyle \frac{2\tau _{\nu }}{n+1},&{}n\ \mathrm{even},\\ \displaystyle \frac{2(n+1)}{n(n+2)},&{}n\ \mathrm{odd}. \end{array}\right. \end{aligned}$$
(2.8)
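Identity (2.8) is easy to confirm numerically. In the sketch below (ours, not the paper's), a high-order Gauss-Legendre rule serves only as a reference integrator, and `tau` plays the role of an arbitrary \(\tau _{\nu }\):

```python
import numpy as np

def U(n, t):
    """Chebyshev polynomial of the second kind, recurrence (2.6)."""
    u_prev, u = np.ones_like(t), 2.0 * t
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * t * u - u_prev
    return u

x, gw = np.polynomial.legendre.leggauss(40)   # exact up to degree 79
for n in (4, 5):
    tau = np.cos(np.pi / (n + 1))             # tau_1; any tau_nu works
    lhs = gw @ ((x + tau) * U(n, x))
    # right-hand side of (2.8)
    rhs = 2 * tau / (n + 1) if n % 2 == 0 else 2 * (n + 1) / (n * (n + 2))
    assert abs(lhs - rhs) < 1e-12
```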

Also, using (2.5), we calculate \(U_{n}'(\cos \theta )\), and setting \(\theta =\theta _{\nu }\), we get

$$\begin{aligned} U'_{n}(\tau _{\nu })= \frac{(-1)^{\nu +1}(n+1)}{1-\tau _{\nu }^{2}}. \end{aligned}$$
(2.9)

Finally, from (1.2)–(1.3), by the double-angle formula and the formula for the difference of cosines, we get

$$\begin{aligned} \tau _{\nu }^{2}-\tau _{c}^{2}=-\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c}). \end{aligned}$$
(2.10)

Now, inserting (2.8)–(2.10) into (2.4), we obtain, after an elementary computation, (2.2).

Furthermore, by symmetry, \(\bar{w}_{c}^{(+)}=\bar{w}_{c}^{(-)}\), hence, setting \(f(t)=(t+\tau _{c})U_{n}(t)\) in formula (1.6), we have

$$\begin{aligned} \bar{w}_{c}^{(+)}=\bar{w}_{c}^{(-)}= \frac{\int _{-1}^{1}(t+\tau _{c})U_{n}(t)dt}{2\tau _{c}U_{n}(\tau _{c})}, \end{aligned}$$
(2.11)

where, as in (2.8),

$$\begin{aligned} \int _{-1}^{1}(t+\tau _{c})U_{n}(t)dt=\left\{ \begin{array}{ll} \displaystyle \frac{2\tau _{c}}{n+1},&{}n\ \mathrm{even},\\ \displaystyle \frac{2(n+1)}{n(n+2)},&{}n\ \mathrm{odd}, \end{array}\right. \end{aligned}$$
(2.12)

and, by (1.3) and (2.5),

$$\begin{aligned} U_{n}(\tau _{c})=\frac{1}{\sin \theta _{c}}, \end{aligned}$$

which inserted, together with (2.12), into (2.11), yields (2.3).

We now turn to proving the positivity of \(\bar{w}_{\nu },\ \nu =1,2,\ldots ,n\). First of all, by symmetry,

$$\begin{aligned} \begin{array}{c} \tau _{n-\nu +1}=-\tau _{\nu },\quad \nu =1,2,\ldots ,n, \\ \bar{w}_{n-\nu +1}=\bar{w}_{\nu },\quad \nu =1,2,\ldots ,n, \end{array} \end{aligned}$$
(2.13)

hence, we only need to prove the positivity of \(\bar{w}_{\nu }\) for \(\nu =1,2,\ldots ,[(n+1)/2]\). Furthermore, as \(w_{\nu }>0\) (cf. [13, Sect. 2.1]) and

$$\begin{aligned} \cos 2[(n+1)/2]\theta _{\nu }=\displaystyle \left\{ \begin{array}{ll} \displaystyle (-1)^{\nu }\cos \theta _{\nu },&{}n\ \mathrm{even}, \\ (-1)^{\nu },&{}n\ \mathrm{odd}, \end{array}\right. \end{aligned}$$
(2.14)

by (2.2), \(\bar{w}_{\nu }>0\) for \(\nu \) even. It therefore remains to prove the positivity of \(\bar{w}_{\nu }\) for \(\nu \) odd. Let first n be even. Then, by the second equation in (2.1), in view of (2.14),

$$\begin{aligned} \bar{w}_{\nu }> & {} \frac{2}{n+1}\left\{ 1-2\sum \limits _{k=1}^{n/2}\frac{1}{4k^{2}-1}+\frac{\cos \theta _{\nu }}{n+1} \right\} -\frac{2\sin ^{2}\theta _{\nu }\cos \theta _{\nu }}{(n+1)^{2}\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c})}, \end{aligned}$$

and, by virtue of

$$\begin{aligned} \sum _{k=1}^{n/2}\frac{1}{4k^{2}-1}=\frac{n}{2(n+1)} \end{aligned}$$

(proved by a partial fraction decomposition of the left-hand side), the formula for the product of sines and the fact that \(2\theta _{c}=\theta _{1}\) (cf. (1.2)–(1.3)), we get, after a simple computation,

$$\begin{aligned} \bar{w}_{\nu }> & {} \displaystyle \frac{2}{(n+1)^{2}} + \frac{2\cos \theta _{\nu }}{(n+1)^{2}} -\frac{2\sin ^{2}\theta _{\nu }\cos \theta _{\nu }}{(n+1)^{2}\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c})}\\= & {} \displaystyle \frac{2\{(1+\cos \theta _{\nu })\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c}) -(1-\cos ^{2}\theta _{\nu })\cos \theta _{\nu }\}}{(n+1)^{2}\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c})}\\= & {} \displaystyle \frac{(1+\cos \theta _{\nu })(1+\cos \theta _{1}-2\cos \theta _{\nu })}{(n+1)^{2}\sin (\theta _{\nu } +\theta _{c})\sin (\theta _{\nu }-\theta _{c})}>0. \end{aligned}$$

If, on the other hand, n is odd, starting from the first equation in (2.1) and proceeding in a like manner, we obtain

$$\begin{aligned} \bar{w}_{\nu }>\frac{2\{1-\cos ^{2}\theta _{\nu }+(n+2)(\cos \theta _{1}-\cos ^{2}\theta _{\nu })\}}{n(n+1)(n+2)\sin (\theta _{\nu }+\theta _{c})\sin (\theta _{\nu }-\theta _{c})}>0, \end{aligned}$$

thus concluding the proof.

(b) Let n be even. First of all, formula (1.6) has degree of exactness at least \(n+1\). Furthermore, by a repeated application of (2.6), and in view of (2.7), we compute

proving that formula (1.6) has precise degree of exactness \(n+1\).

Similarly, for n odd, the degree of exactness is at least \(n+2\), and as

it is precisely \(n+2\).

(c) This, by a well-known result of Fejér (cf. [4, Satz 1]), is an immediate consequence of the positivity of the weights \(\bar{w}_{c}^{(+)},\ \bar{w}_{\nu }, \ \nu =1,2,\ldots ,n\), and \(\bar{w}_{c}^{(-)}\). \(\square \)
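Theorem 2.1(a)-(b) can be verified numerically. The sketch below (an illustration of ours, not part of the paper) assembles the rule (1.6) from the weight formulae (2.1)-(2.3) and checks the positivity of the weights and the precise degree of exactness:

```python
import numpy as np

def corrected_fejer2(n):
    """Nodes and weights of rule (1.6), assembled from (2.1)-(2.3)."""
    theta = np.arange(1, n + 1) * np.pi / (n + 1)
    theta_c = np.pi / (2 * (n + 1))
    k = np.arange(1, (n + 1) // 2 + 1)
    # Fejer-2 weights, sine form of (2.1)
    w = (4 * np.sin(theta) / (n + 1)) * np.sum(
        np.sin((2 * k[:, None] - 1) * theta) / (2 * k[:, None] - 1), axis=0)
    # correction term of (2.2)
    w += (2 * np.sin(theta) ** 2 * np.cos(2 * ((n + 1) // 2) * theta)
          / ((2 * (n // 2) + 1) * (2 * ((n + 1) // 2) + 1)
             * np.sin(theta + theta_c) * np.sin(theta - theta_c)))
    # weights at +-tau_c, eq. (2.3)
    wc = (np.sin(theta_c) / (n + 1) if n % 2 == 0
          else (n + 1) * np.tan(theta_c) / (n * (n + 2)))
    tau_c = np.cos(theta_c)
    return (np.concatenate(([tau_c], np.cos(theta), [-tau_c])),
            np.concatenate(([wc], w, [wc])))

for n in (3, 4, 5):
    t, w = corrected_fejer2(n)
    assert np.all(w > 0)                      # Theorem 2.1(a)
    d = n + 1 if n % 2 == 0 else n + 2        # Theorem 2.1(b)
    for p in range(d + 1):
        exact = 2.0 / (p + 1) if p % 2 == 0 else 0.0
        assert abs(w @ t**p - exact) < 1e-12
    # exactness fails at the next even power, so d is precise
    assert abs(w @ t**(d + 1) - 2.0 / (d + 2)) > 1e-6
```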

3 The error term of the quadrature formula

Our error estimates for formula (1.6) are of two different types: optimal error bounds, by Peano kernel methods, for functions that are sufficiently smooth; and error bounds, by Hilbert space techniques, for analytic functions.

3.1 Peano kernel error bounds

Given that formula (1.6) has degree of exactness \(\bar{d}\), for \(f\in C^{\bar{d}+1}[-1,1]\), we have

$$\begin{aligned} \bar{R}_{n}(f)=\int _{-1}^{1}\bar{K}_{\bar{d}}(t)f^{(\bar{d}+1)}(t)dt, \end{aligned}$$
(3.1)

where \(\bar{K}_{\bar{d}}\) is the \(\bar{d}\)th Peano kernel. From (3.1), we immediately derive

$$\begin{aligned} |\bar{R}_{n}(f)|\le c_{\bar{d}+1}\max _{-1\le t\le 1}|f^{(\bar{d}+1)}(t)|,\ c_{\bar{d}+1}=\int _{-1}^{1}|\bar{K}_{\bar{d}}(t)|dt. \end{aligned}$$

If, in addition, \(\bar{K}_{\bar{d}}\) does not change sign on \([-1,1]\), formula (1.6) is called definite; in particular, positive definite if \(\bar{K}_{\bar{d}}\ge 0\), and negative definite if \(\bar{K}_{\bar{d}}\le 0\). In this case, (3.1), by the Mean Value Theorem for integrals, gives

$$\begin{aligned} \bar{R}_{n}(f)=\bar{c}_{\bar{d}+1}f^{(\bar{d}+1)}(\xi ),\ \bar{c}_{\bar{d}+1}=\int _{-1}^{1}\bar{K}_{\bar{d}}(t)dt, \ \ -1<\xi <1 \end{aligned}$$
(3.2)

(cf. [3, Sect. 4.3]).

The derivation of the error bounds is based on the following lemma.

Lemma 3.1

([10, Lemma B]) Let \(g\in C^{m+s}[a,b],\ m\ge 1,\ s\ge 0\), have the zeros \(t_{\nu },\ 1\le \nu \le m+s\). For some \(k,\ 1\le k\le m\), assume that the polynomial \(q_{k}(t)=\Pi _{i=1}^{k}(t-t_{i})\) has only simple zeros. Then there exist functions \(r_{i}\in C^{k+s-1}[a,b],\ 1\le i\le k\), such that

$$\begin{aligned} g(t)=\sum _{i=1}^{k}\frac{r_{i}(t)}{q'_{k}(t_{i})}\prod _{\nu =1}^{m}(t-t_{\nu }). \end{aligned}$$
(3.3)

Each \(r_{i}\) has \(k+s-1\) zeros, especially, the \(t_{\nu },\ m+1\le \nu \le m+s\), are zeros of \(r_{i}\). In addition, there exist \(\xi _{i}=\xi _{i}(t)\in [a,b], \ 1\le i\le k\), such that

$$\begin{aligned} \frac{r_{i}^{(k+s-1)}(t)}{(k+s-1)!}=\frac{g^{(m+s)}(\xi _{i})}{(m+s)!}. \end{aligned}$$
(3.4)

Theorem 3.2

Consider the quadrature formula (1.6). There holds, for n even and \(f\in C^{n+2}[-1,1]\),

and, for \(n(\mathrm{odd})\ge 3\) and \(f\in C^{n+3}[-1,1]\),

On the other hand, if \(n=1\), the quadrature formula is positive definite, and

Proof

Let first n be even. As formula (1.6) is interpolatory, having degree of exactness \(n+1\), there holds

$$\begin{aligned} \bar{R}_{n}(f)=\int _{-1}^{1}\bar{r}_{n}(f;t)dt, \end{aligned}$$
(3.6)

where \(\bar{r}_{n}(f;\cdot )\) is the error of the interpolation based on the \(n+2\) points \(\tau _{\nu },\ \nu =1,2,\ldots ,n\), and \(\pm \tau _{c}\). Assuming that \(f\in C^{n+2}[-1,1]\), the same is true for \(\bar{r}_{n}(f;\cdot )\). Since, in addition, \(\bar{r}_{n}(f;\tau _{\nu })=0, \ \nu =1,2,\ldots ,n\), and \(\bar{r}_{n}(f;\pm \tau _{c})=0\), we can apply Lemma 3.1 with \(g(\cdot )=\bar{r}_{n}(f;\cdot ),\ m=n,\ s=2\) and \([a,b]=[-1,1]\). Setting \(k=2\) and \(q_{2}(t)=(t-\tau _{1})(t-\tau _{n})=(t-\tau _{1})(t+\tau _{1})=t^{2}-\tau _{1}^{2}\) (cf. (2.13)), (3.3) gives

$$\begin{aligned} \bar{r}_{n}(f;t)=\frac{1}{2^{n+1}\tau _{1}}\{r_{1}(t)U_{n}(t)-r_{2}(t)U_{n}(t)\}, \end{aligned}$$
(3.7)

where \(r_{i}\in C^{3}[-1,1],\ i=1,2\), each \(r_{i}\) has three zeros, in particular, \(r_{i}(\pm \tau _{c})=0\), and there exist \(\xi _{i}=\xi _{i}(t)\in [-1,1],\ i=1,2\), such that

$$\begin{aligned} r'''_{i}(t)=\frac{6}{(n+2)!} f^{(n+2)}(\xi _{i}),\quad i=1,2 \end{aligned}$$
(3.8)

(cf. (3.4)). Now, let the functions \(h_{i},\ i=1,2,3\), be defined by

$$\begin{aligned} h'_{1}=U_{n}, \end{aligned}$$
(3.9)

and

$$\begin{aligned} h_{i}(t)=\int _{-1}^{t}h_{i-1}(x)dx,\ \ i=2,3, \end{aligned}$$
(3.10)

whence

$$\begin{aligned} h'_{i}=h_{i-1},\ \ i=2,3, \end{aligned}$$
(3.11)
$$\begin{aligned} h_{i}(-1)=0,\ \ i=2,3. \end{aligned}$$
(3.12)

Then, inserting (3.7) into (3.6), and applying, in view of (3.9) and (3.11)–(3.12), integration by parts, we get

from which, there follows

(3.13)

Now, from (3.9) and (3.10), using \(T'_{n+1}=(n+1)U_{n}\), where \(T_{n+1}\) is the \((n+1)\)th degree Chebyshev polynomial of the first kind (cf. [11, Eq. (2.48)]) and

$$\begin{aligned} \int _{-1}^{t}T_{m}(x)dx=\frac{1}{2}\left\{ \frac{T_{m+1}(t)}{m+1}-\frac{T_{m-1}(t)}{m-1}\right\} +\frac{(-1)^{m-1}}{(m-1)(m+1)},\ \ m\ge 2 \end{aligned}$$
(3.14)

(cf. [11, Eq. (2.43)]), we compute

$$\begin{aligned} h_{1}(t)=\frac{T_{n+1}(t)}{n+1}, \end{aligned}$$
$$\begin{aligned} h_{2}(t)=\frac{T_{n+2}(t)}{2(n+1)(n+2)}-\frac{T_{n}(t)}{2n(n+1)}+\frac{(-1)^{n}}{n(n+1)(n+2)}, \end{aligned}$$
$$\begin{aligned} \begin{array}{rl} h_{3}(t)=&{}\displaystyle \frac{T_{n+3}(t)}{4(n+1)(n+2)(n+3)}-\frac{T_{n+1}(t)}{2n(n+1)(n+2)}+\frac{T_{n-1}(t)}{4(n-1)n(n+1)}\\ &{}+\displaystyle \frac{(-1)^{n}(t+1)}{n(n+1)(n+2)}+\frac{(-1)^{n}3}{(n-1)n(n+1)(n+2)(n+3)},\ \ n \ge 2, \end{array} \end{aligned}$$

hence, we find

$$\begin{aligned} h_{1}(-1)=\frac{(-1)^{n+1}}{n+1}, \end{aligned}$$
(3.15)
$$\begin{aligned} h_{1}(1)=\frac{1}{n+1}, \end{aligned}$$
(3.16)
$$\begin{aligned} h_{2}(1)=\left\{ \begin{array}{ll} 0,&{}n\ \mathrm{even},\\ \displaystyle -\frac{2}{n(n+1)(n+2)},&{}\,\,n\ \mathrm{odd}, \end{array}\right. \end{aligned}$$
(3.17)
$$\begin{aligned} h_{3}(1)=\left\{ \begin{array}{ll} \displaystyle \frac{2}{(n-1)(n+1)(n+3)},&{}\,\,n\ \mathrm{even}, \\ \displaystyle -\frac{2}{n(n+1)(n+2)},&{}n\ \mathrm{odd}, \end{array}\right. \end{aligned}$$
(3.18)
$$\begin{aligned} |h_{3}(t)|\le \frac{3(n^{2}+2n-1)}{(n-1)n(n+1)(n+2)(n+3)},\ \ n \ge 2. \end{aligned}$$
(3.19)

Furthermore, as each \(r_{i},\ i=1,2\), has three zeros, among them \(\pm \tau _{c}\), by Rolle’s Theorem, \(r'_{i}\) and \(r''_{i},\ i=1,2\), have two zeros and one zero, respectively. Let \(t'_{i},\ i=1,2\), be one of the zeros of \(r'_{i}\), and \(t''_{i},\ i=1,2\), the zero of \(r''_{i}\). Then, by the Mean Value Theorem, applied first to \(r_{i},\ i=1,2\), on \([\tau _{c},1]\),

$$\begin{aligned} |r_{i}(1)|=|r'_{i}(\zeta _{i})|(1-\tau _{c}),\ \ i=1,2, \end{aligned}$$

then to \(r'_{i},\ i=1,2\), between \(t'_{i}\) and \(\zeta _{i}\),

$$\begin{aligned} |r'_{i}(\zeta _{i})|=|r''_{i}(\zeta '_{i})||\zeta _{i}-t'_{i}|\le 2|r''_{i}(\zeta '_{i})|,\ \ i=1,2, \end{aligned}$$

and finally to \(r''_{i},\ i=1,2\), between \(t''_{i}\) and \(\zeta '_{i}\),

$$\begin{aligned} |r''_{i}(\zeta '_{i})|=|r'''_{i}(\zeta ''_{i})||\zeta '_{i}-t''_{i}|\le 2|r'''_{i}(\zeta ''_{i})|,\ \ i=1,2, \end{aligned}$$

which, combined with (1.3) and (3.8), give, in view of the double-angle formula for cosines,

$$\begin{aligned} |r_{i}(\pm 1)|\le & {} \frac{24(1-\tau _{c})}{(n+2)!}\max _{-1\le t\le 1}|f^{(n+2)}(t)|\nonumber \\= & {} \frac{48\sin ^{2}\frac{\pi }{4(n+1)}}{(n+2)!}\max _{-1\le t\le 1}|f^{(n+2)}(t)|,\ \ i=1,2, \end{aligned}$$
(3.20)

where the estimate for \(r_{i}(-1)\) is derived by the same steps. In a like manner,

$$\begin{aligned} |r''_{i}(1)| \le \frac{12}{(n+2)!}\max _{-1\le t\le 1}|f^{(n+2)}(t)|,\ \ i=1,2. \end{aligned}$$
(3.21)

Finally, from (1.2), by virtue of \(\cos \theta \ge 1-2\theta /\pi ,\ 0\le \theta \le \pi /2\), we get

$$\begin{aligned} \frac{1}{\tau _{1}} \le \frac{n+1}{n-1}. \end{aligned}$$
(3.22)

Now, inserting (3.8) and (3.15)–(3.22) into (3.13), taking into account that n is even, we obtain, after an elementary computation, \((3.5_{e})\).

We next turn to the case of \(n(\mathrm{odd})\ge 3\). As in this case formula (1.6) has degree of exactness \(n+2\), we consider the even part of the interpolation error \(\bar{r}_{n}(f;\cdot )\) defined by

$$\begin{aligned} \bar{r}_{n,e}(f;t)=\frac{1}{2}\{\bar{r}_{n}(f;t)+\bar{r}_{n}(f;-t)\}. \end{aligned}$$

First of all, a simple change of variables shows that

$$\begin{aligned} \int _{-1}^{1}\bar{r}_{n,e}(f;t)dt=\int _{-1}^{1}\bar{r}_{n}(f;t)dt, \end{aligned}$$

hence,

$$\begin{aligned} \bar{R}_{n}(f)=\int _{-1}^{1}\bar{r}_{n,e}(f;t)dt \end{aligned}$$

(cf. (3.6)). As \(\bar{r}_{n}(f;0)=\bar{r}_{n}(f;\tau _{(n+1)/2})=0\), we have \(\bar{r}_{n,e}(f;0)=0\); and as \(\bar{r}_{n,e}(f;\cdot )\) is an even function, there holds \(\bar{r}_{n,e}^{\prime }(f;0)=0\). Consequently, \(\bar{r}_{n,e}(f;\cdot )\) has \(n+3\) zeros, the \(\tau _{\nu },\ \nu =1,2,\ldots ,n\), and the \(\pm \tau _{c}\), where \(\tau _{(n+1)/2}=0\) is a double zero. Therefore, assuming that \(f\in C^{n+3}[-1,1]\), the same is true for \(\bar{r}_{n,e}(f;\cdot )\), and we can apply Lemma 3.1 with \(g(\cdot )=\bar{r}_{n,e}(f;\cdot ),\ m=n,\ s=3\) and \([a,b]=[-1,1]\). Choosing the same k and \(q_{k}\) as in the case of n even, \(\bar{r}_{n,e}(f;\cdot )\) has the representation (3.7), except that here each \(r_{i}\in C^{4}[-1,1]\), it has four zeros, among which \(\pm \tau _{c}\), and there exist \(\xi _{i}=\xi _{i}(t)\in [-1,1]\) such that

$$\begin{aligned} r_{i}^{(4)}(t)=\frac{24}{(n+3)!}f^{(n+3)}(\xi _{i}),\ \ i=1,2. \end{aligned}$$
(3.23)

Then, we proceed as in the case of n even, by defining the \(h_{i},\ i=1,2,3\), in (3.9)–(3.12) and

$$\begin{aligned} h_{4}(t)=\int _{-1}^{t}h_{3}(x)dx. \end{aligned}$$
(3.24)

We get

(3.25)

Now, from (3.24), in view of (3.14), an elaborate computation gives

$$\begin{aligned} h_{4}(t)= & {} \frac{T_{n+4}(t)}{8(n+1)(n+2)(n+3)(n+4)}-\frac{3T_{n+2}(t)}{8n(n+1)(n+2)(n+3)}\\&+\frac{3T_{n}(t)}{8(n-1)n(n+1)(n+2)}-\frac{T_{n-2}(t)}{8(n-2)(n-1)n(n+1)}\\&-\frac{(t+1)^{2}}{2n(n+1)(n+2)}-\frac{3(t+1)}{(n-1)n(n+1)(n+2)(n+3)}\\&-\frac{15}{(n-2)(n-1)n(n+1)(n+2)(n+3)(n+4)}, \end{aligned}$$

hence,

$$\begin{aligned} h_{4}(1)=-\frac{2(n^{2}+2n-5)}{(n-2)n(n+1)(n+2)(n+4)}, \end{aligned}$$
(3.26)
$$\begin{aligned} |h_{4}(t)|\le \frac{2n^{3}+n^{2}-9n+3}{(n-2)(n-1)n(n+1)(n+2)(n+3)}. \end{aligned}$$
(3.27)

Furthermore, as in the case of n even,

$$\begin{aligned} |r_{i}(\pm 1)|\le \frac{384\sin ^{2}\frac{\pi }{4(n+1)}}{(n+3)!}\max _{-1\le t\le 1}|f^{(n+3)}(t)|,\ \ i=1,2, \end{aligned}$$
(3.28)
$$\begin{aligned} |r_{i}^{(j-1)}(1)|\le \frac{2^{5-j}24}{(n+3)!} \max _{-1\le t\le 1}|f^{(n+3)}(t)|,\ \ i=1,2,\ \ j=2,3,4. \end{aligned}$$
(3.29)

Now, inserting (3.15)–(3.18), (3.22), (3.23) and (3.26)–(3.29) into (3.25), taking into account that n is odd, we obtain, after an elementary computation, \((3.5_{o})\).

For \(n=1\), formula (1.6) has the form

$$\begin{aligned} \int _{-1}^{1}f(t)dt=\frac{2}{3}f\left( \sqrt{2}/2\right) +\frac{2}{3}f(0)+\frac{2}{3}f\left( -\sqrt{2}/2\right) +\bar{R}_{1}(f), \end{aligned}$$
(3.30)

with degree of exactness 3 and 3rd Peano kernel

$$\begin{aligned} \bar{K}_{3}(t)=\left\{ \begin{array}{ll} \displaystyle \frac{(1-t)^{4}}{24},&{}\sqrt{2}/2<t\le 1, \\ \displaystyle \frac{3t^{4}-4t^{3}+6(3-2\sqrt{2})t^{2}+3-2\sqrt{2}}{72},&{}0<t\le \sqrt{2}/2, \\ \bar{K}_{3}(-t),&{}-1\le t\le 0. \end{array}\right. \end{aligned}$$

As \(\bar{K}_{3}(t)\ge 0,\ -1\le t\le 1\), the formula is positive definite, hence,

$$\begin{aligned} \bar{R}_{1}(f)=\bar{c}_{4}f^{(4)}(\xi ),\ \ -1<\xi <1 \end{aligned}$$
(3.31)

(cf. (3.2)), where, from (3.31), in view of (3.30), we get

$$\begin{aligned} \bar{c}_{4}=\frac{\bar{R}_{1}(t^{4})}{4!}=\frac{1}{360}, \end{aligned}$$

thus obtaining (3.5\(_1\)). \(\square \)
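The \(n=1\) case is simple enough to be checked directly. The sketch below (ours, not the paper's) verifies the degree of exactness of rule (3.30) and the error constant \(\bar{c}_{4}=\bar{R}_{1}(t^{4})/4!=1/360\):

```python
import numpy as np

# Rule (3.30): weight 2/3 at each of sqrt(2)/2, 0, -sqrt(2)/2
t = np.array([np.sqrt(2) / 2, 0.0, -np.sqrt(2) / 2])
w = np.full(3, 2.0 / 3.0)

# degree of exactness 3 ...
for p in range(4):
    exact = 2.0 / (p + 1) if p % 2 == 0 else 0.0
    assert abs(w @ t**p - exact) < 1e-14

# ... and R_1(t^4) = 2/5 - 1/3 = 1/15, so c4 = R_1(t^4)/4! = 1/360
R = 2.0 / 5.0 - w @ t**4
assert abs(R / 24 - 1.0 / 360.0) < 1e-15
```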

Remark 3.1

We have, in view of (2.15\(_{e}\))–(2.15\(_{o}\)), for n even,

and for n odd,

which, compared to (\(3.5_{ e}\)) and (\(3.5_{ o}\)), respectively, show that our bounds are optimal.

Furthermore, as in both (\(3.5_{ e}\)) and (\(3.5_{ o}\)), the quantity in the braces is of order \(O(n^{-3})\), the rate of convergence of formula (1.6) is the same as that of the Clenshaw-Curtis formula (cf. [2, Theorem 2]), which is also confirmed numerically in Example 5.1.

Remark 3.2

Given that formula (1.5) is definite (cf. [1]), one could ask the same question for formula (1.6), in which case we could obtain results analogous to (3.5\(_{1}\)). A few calculations, for small values of n, indicate that the answer to this question could be affirmative, although further investigation would be needed. Note, however, that, even though proving the definiteness of formula (1.6) would be quite cumbersome, requiring a substantial effort (cf. [1]), it would not essentially improve the results of Theorem 3.2.

3.2 Hilbert space error bounds

Another estimate for the error term of formula (1.6) can be obtained by a Hilbert space technique proposed by Hämmerlin (cf. [8]). Assuming that f is a single-valued holomorphic function in the disk \(C_{r}=\{z\in \mathbb {C}:|z|<r\},\ r>1\), it can be written as

$$\begin{aligned} f(z)=\sum _{k=0}^{\infty }a_{k}z^{k},\ \ z\in C_{r}. \end{aligned}$$

Define

$$\begin{aligned} |f|_{r}=\sup \{|a_{k}|r^{k}:k\in \mathbb {N}_{0}\ \mathrm{and}\ \bar{R}_{n}(t^{k}) \ne 0\}, \end{aligned}$$

which is a seminorm in the space

$$\begin{aligned} X_{r}=\{f:f\ \mathrm{holomorphic\ in}\ C_{r}\ \mathrm{and}\ |f|_{r}<\infty \}. \end{aligned}$$

Then, it can easily be shown that \(\bar{R}_{n}(\cdot )\) is a continuous linear functional on \((X_{r},|\cdot |_{r})\), and its norm is given by

$$\begin{aligned} \Vert \bar{R}_{n}\Vert =\sum _{k=0}^{\infty }\frac{|\bar{R}_{n}(t^{k})|}{r^{k}}, \end{aligned}$$

while, in case that

$$\begin{aligned} \bar{R}_{n}(t^{k})\ge 0,\ \ k\ge 0, \end{aligned}$$
(3.33)

one can derive the representation

$$\begin{aligned} \Vert \bar{R}_{n}\Vert =\frac{r}{(r^{2}-\tau _{c}^{2})U_{n}(r)}\int _{-1}^{1}\frac{(t^{2}-\tau _{c}^{2})U_{n}(t)}{r-t}dt \end{aligned}$$
(3.34)

(cf. [15, Sect. 2]). Consequently, for \(f\in X_{R}\),

$$\begin{aligned} |\bar{R}_{n}(f)|\le \Vert \bar{R}_{n}\Vert |f|_{r},\ \ 1<r\le R, \end{aligned}$$
(3.35)

and optimizing the right-hand side of (3.35) as a function of r, we get

$$\begin{aligned} |\bar{R}_{n}(f)|\le \inf _{1<r\le R}(\Vert \bar{R}_{n}\Vert |f|_{r}). \end{aligned}$$
(3.36)

Another estimate can be obtained if \(|f|_{r}\) is estimated by \(\max _{|z|=r}|f(z)|\), which exists at least for \(r<R\) (cf. [15, Eq. (2.9)]), giving

$$\begin{aligned} |\bar{R}_{n}(f)|\le \inf _{1<r<R}(\Vert \bar{R}_{n}\Vert \max _{|z|=r} |f(z)|). \end{aligned}$$
(3.37)

The latter can also be derived by a contour integration technique on circular contours (cf. [6]).

Therefore, in order to compute the norm of \(\bar{R}_{n}\) by (3.34), we first need to examine the validity of (3.33). First of all, by Theorem 2.1(b),

$$\begin{aligned} \bar{R}_{n}(t^{k})=0,\quad k=0,1,\ldots ,2[(n+1)/2]+1. \end{aligned}$$
(3.38)

Then, we can prove

Lemma 3.3

The error term of the quadrature formula (1.6) satisfies

$$\begin{aligned} \bar{R}_{n}(t^{2l})>0, \quad l\ge \bar{k}_{n}, \end{aligned}$$

where \(\bar{k}_{n}\ge [(n+1)/2]+1\) is a constant.

Proof

Setting \(f(t)=t^{2l}\) in formula (1.6), we have

$$\begin{aligned} \bar{R}_{n}(t^{2l})= & {} \displaystyle \int _{-1}^{1}t^{2l}dt-\bar{w}_{c}^{(+)}\tau _{c}^{2l}-\sum _{\nu =1}^{n}\bar{w}_{\nu }\tau _{\nu }^{2l}-\bar{w}_{c}^{(-)}(-\tau _{c})^{2l}\nonumber \\> & {} \displaystyle \frac{2}{2l+1}-\tau _{c}^{2l}\left( \bar{w}_{c}^{(+)}+\sum _{\nu =1}^{n}\bar{w}_{\nu }+\bar{w}_{c}^{(-)}\right) \nonumber \\= & {} \displaystyle \frac{2}{2l+1}-2\tau _{c}^{2l}=\frac{2}{2l+1}\{1-(2l+1)\tau _{c}^{2l}\}, \end{aligned}$$
(3.39)

and, as \(\lim _{l\rightarrow \infty }(2l+1)\tau _{c}^{2l}=0\), our assertion follows. \(\square \)

From the last part of (3.39), we can determine the constant \(\bar{k}_{n}\). This was done for \(1\le n\le 40\), and the values are given in Table 1. We have also verified numerically that \(\bar{R}_{n}(t^{2l})>0\) for all \([(n+1)/2]+1\le l\le \bar{k}_{n}-1\) and \(1\le n\le 40\). Putting everything together, we conclude

$$\begin{aligned} \bar{R}_{n}(t^{2l})>0,\ \ l\ge [(n+1)/2]+1,\ \ 1\le n\le 40. \end{aligned}$$
(3.40)
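The constant \(\bar{k}_{n}\) obtained from the last member of (3.39) is simply the smallest admissible l with \((2l+1)\tau _{c}^{2l}<1\). A short sketch (function names are ours, not the paper's):

```python
import numpy as np

def k_bar(n):
    """Smallest l making the lower bound in (3.39) positive,
    i.e. (2l+1) * tau_c**(2l) < 1."""
    tau_c = np.cos(np.pi / (2 * (n + 1)))
    l = (n + 1) // 2 + 1                 # smallest admissible l
    while (2 * l + 1) * tau_c ** (2 * l) >= 1:
        l += 1
    return l

# tau_c -> 1 as n grows, so k_bar(n) increases rapidly with n
assert k_bar(1) < k_bar(10) < k_bar(40)
```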

Furthermore, from (1.6), there follows, by symmetry, that

$$\begin{aligned} \bar{R}_{n}(t^{2l+1})=0,\ \ l\ge [(n+1)/2]+1, \end{aligned}$$

which, combined with (3.38) and (3.40), gives

$$\begin{aligned} \bar{R}_{n}(t^{k})\ge 0, \quad k\ge 0, \quad 1\le n \le 40. \end{aligned}$$
(3.41)

Interestingly enough, by (3.32\(_{e}\))–(3.32\(_{o}\)),

$$\begin{aligned} \bar{R}_{n}\left( t^{2[(n+1)/2]+2}\right) >0, \end{aligned}$$

i.e., \(\bar{R}_{n}(t^{2l})>0\) is theoretically confirmed for \(l=[(n+1)/2]+1\) and all \(n\ge 1\). This, together with our numerical findings, suggests the following

Conjecture 3.4

The error term of the quadrature formula (1.6) satisfies

$$\begin{aligned} \bar{R}_{n}(t^{k})\ge 0,\ \ k\ge 0. \end{aligned}$$
Table 1 Values of \(\bar{k}_{n}, 1\le n\le 40\)

We are now in a position to compute \(\Vert \bar{R}_{n}\Vert \).

Theorem 3.5

Consider the quadrature formula (1.6). For \(1\le n \le 40\), we have

$$\begin{aligned} \Vert \bar{R}_{n}\Vert =r\ln {\left( \frac{r+1}{r-1}\right) }-\frac{4r}{U_{n}(r)}\sum _{k=1}^{[(n+1)/2]} \frac{U_{n-2k+1}(r)}{2k-1}-\left\{ \begin{array}{ll} \frac{2r^{2}}{(n+1)(r^{2}-\tau _{c}^{2})U_{n}(r)},&{}n\ \mathrm{even,}\\ \frac{2(n+1)r}{n(n+2)(r^{2}-\tau _{c}^{2})U_{n}(r)},&{}n\ \mathrm{odd.} \end{array}\right. \end{aligned}$$
(3.42)

Proof

Let \(1\le n \le 40\). Then, in view of (3.41) (cf. (3.33)), \(\Vert \bar{R}_{n}\Vert \) is given by (3.34). Writing

$$\begin{aligned} \int _{-1}^{1}\frac{(t^{2}-\tau _{c}^{2})U_{n}(t)}{r-t}dt=\int _{-1}^{1}\frac{(t^{2}-r^{2}+r^{2}-\tau _{c}^{2})U_{n}(t)}{r-t}dt, \end{aligned}$$

splitting the integral on the right-hand side into two, and using (2.6), we get

$$\begin{aligned} \displaystyle \int _{-1}^{1}\frac{(t^{2}-\tau _{c}^{2})U_{n}(t)}{r-t}dt= & {} \displaystyle (r^{2}-\tau _{c}^{2})\int _{-1}^{1}\frac{U_{n}(t)}{r-t}dt\\&\displaystyle -\frac{1}{2}\int _{-1}^{1}U_{n+1}(t)dt-r\int _{-1}^{1}U_{n}(t)dt-\frac{1}{2}\int _{-1}^{1}U_{n-1}(t)dt. \end{aligned}$$

By

$$\begin{aligned} \int _{-1}^{1}\frac{U_{n}(t)}{r-t}dt=U_{n}(r)\ln {\left( \frac{r+1}{r-1}\right) }-4\sum _{k=1}^{[(n+1)/2]}\frac{U_{n-2k+1}(r)}{2k-1} \end{aligned}$$

(cf. [14, Proposition 2.2(i), Eq. (2.9)]), and (2.7), we find

$$\begin{aligned} \begin{array}{rl} \displaystyle \int _{-1}^{1}\frac{(t^{2}-\tau _{c}^{2})U_{n}(t)}{r-t}dt=&{}\displaystyle (r^{2}-\tau _{c}^{2})U_{n}(r)\ln {\left( \frac{r+1}{r-1}\right) } -4(r^{2}-\tau _{c}^{2})\sum \limits _{k=1}^{[(n+1)/2]}\frac{U_{n-2k+1}(r)}{2k-1}\\ &{}-\left\{ \begin{array}{ll} \displaystyle \frac{2r}{(n+1)},&{}n\ \mathrm{even,}\\ \displaystyle \frac{2(n+1)}{n(n+2)},&{}n\ \mathrm{odd,} \end{array}\right. \end{array} \end{aligned}$$

which, inserted into (3.34), yields (3.42). \(\square \)
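As an independent check of (3.42) (a numerical sketch of ours, not part of the paper), one can compare the closed form against the defining series \(\Vert \bar{R}_{n}\Vert =\sum _{k}|\bar{R}_{n}(t^{k})|/r^{k}\), with \(\bar{R}_{n}(t^{k})\) evaluated directly from the rule (1.6):

```python
import numpy as np

def U(n, t):
    """Chebyshev polynomial of the second kind at a scalar t, via (2.6)."""
    u_prev, u = 1.0, 2.0 * t
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * t * u - u_prev
    return u

def corrected_fejer2(n):
    # Nodes and weights of rule (1.6), from (2.1)-(2.3)
    theta = np.arange(1, n + 1) * np.pi / (n + 1)
    theta_c = np.pi / (2 * (n + 1))
    k = np.arange(1, (n + 1) // 2 + 1)
    w = (4 * np.sin(theta) / (n + 1)) * np.sum(
        np.sin((2 * k[:, None] - 1) * theta) / (2 * k[:, None] - 1), axis=0)
    w += (2 * np.sin(theta) ** 2 * np.cos(2 * ((n + 1) // 2) * theta)
          / ((2 * (n // 2) + 1) * (2 * ((n + 1) // 2) + 1)
             * np.sin(theta + theta_c) * np.sin(theta - theta_c)))
    wc = (np.sin(theta_c) / (n + 1) if n % 2 == 0
          else (n + 1) * np.tan(theta_c) / (n * (n + 2)))
    tau_c = np.cos(theta_c)
    return (np.concatenate(([tau_c], np.cos(theta), [-tau_c])),
            np.concatenate(([wc], w, [wc])))

def norm_closed(n, r):
    """||R_n|| from the closed form (3.42)."""
    tau_c = np.cos(np.pi / (2 * (n + 1)))
    s = sum(U(n - 2 * k + 1, r) / (2 * k - 1)
            for k in range(1, (n + 1) // 2 + 1))
    num = 2 * r**2 / (n + 1) if n % 2 == 0 else 2 * (n + 1) * r / (n * (n + 2))
    return (r * np.log((r + 1) / (r - 1)) - 4 * r * s / U(n, r)
            - num / ((r**2 - tau_c**2) * U(n, r)))

def norm_series(n, r, kmax=200):
    """||R_n|| as the series sum_k |R_n(t^k)|/r^k, from the rule itself."""
    t, w = corrected_fejer2(n)
    return sum(abs(2.0 / (k + 1) - w @ t**k) / r**k
               for k in range(0, kmax, 2))     # odd k contribute nothing

for n in (1, 2, 5):
    assert abs(norm_closed(n, 2.0) - norm_series(n, 2.0)) < 1e-10
```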

4 Convergence of the quadrature formula for functions with singularities

We show that formula (1.6) converges, not only for Riemann integrable functions on \([-1,1]\), but also for functions having monotonic singularities at \(\pm 1\).

Following the notation in [5], we denote by \(M[-1,1)\) the class of functions f that are continuous on the half-open interval \([-1,1)\), monotonic in some neighborhood of 1, and such that \(\lim _{x\rightarrow 1^{-}}\int _{-1}^{x}f(t)dt\) exists. The classes \(M(-1,1]\) and \(M(-1,1)\) are defined analogously, while M stands for the union of all three classes.

Let, in the quadrature formula (1.6), \(\overline{\tau }_{1}=\tau _{c},\ \overline{\tau }_{\nu }={\tau }_{\nu -1},\ \nu =2,3,\ldots ,n+1,\ \overline{\tau }_{n+2}=-\tau _{c}\), and, accordingly, \(\overline{w}_{1}=\bar{w}_{c}^{(+)},\ \overline{w}_{\nu }=\bar{w}_{\nu -1},\ \nu =2,3,\ldots ,n+1,\ \overline{w}_{n+2}=\bar{w}_{c}^{(-)}\). Furthermore, as in Sect. 2, \(I(f)=\int _{-1}^{1}f(t)dt\) and \(\bar{Q}_{n}(f)=\bar{w}_{c}^{(+)}f(\tau _{c})+\sum _{\nu =1}^{n}\bar{w}_{\nu }f(\tau _{\nu })+\bar{w}_{c}^{(-)}f(-\tau _{c})=\sum _{\nu =1}^{n+2}\overline{w}_{\nu }f(\overline{\tau }_{\nu })\), while \(\overline{\tau }_{0}=1\). For \(f\in M[-1,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\bar{Q}_{n}(f)=I(f) \end{aligned}$$
(4.1)

if the following two conditions are satisfied:

(i) \(\lim _{n\rightarrow \infty }\bar{Q}_{n}(g)=I(g)\) for all \(g\in C[-1,1]\).

(ii) There exist constants \(c>0,\ \delta >0\) such that \(|\overline{w}_{\nu }|\le c\,(\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu })\) for all sufficiently large n and for all \(\nu \ge 1\) such that \(1-\delta \le \overline{\tau }_{\nu }\le 1\).

As formula (1.6) is symmetric (cf. (2.3) and (2.13)), conditions (i) and (ii) also imply (4.1) for all \(f\in M(-1,1]\) or \(f\in M(-1,1)\), hence for all \(f\in M\) (see [16, Sect. 4, Lemma 4.1]).

Our results are summarized in the following

Theorem 4.1

Consider the quadrature formula (1.6). Then (4.1) holds for all \(f\in M\).

Proof

By what was said previously, it suffices to verify conditions (i) and (ii). The first has already been proved in Theorem 2.1(c).

Regarding condition (ii), we shall show that

$$\begin{aligned} \overline{w}_{\nu }<\left( \frac{5}{6}\pi ^{2}+6\right) (\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }) \end{aligned}$$
(4.2)

for all \(n\ge 1\) and \(\nu =1,2,\ldots ,[(n+2)/2]\).

Let first n be even and \(\nu =2,3,\ldots ,n/2+1\). Then \(\overline{w}_{\nu }=\bar{w}_{\nu -1}\), with \(\bar{w}_{\nu -1}\) given by (2.1) and (2.2). Setting \(\nu -1\) in place of \(\nu \) in (2.1), we have, in view of the cosine series (cf. [7, Eq. 1.444.7]), the partial fraction identity \(2\sum _{k=n/2}^{\infty }1/(4k^{2}-1)=1/(n-1)\), and \(\sin \theta \le \theta ,\ 0\le \theta \le \pi /2\),

$$\begin{aligned} w_{\nu -1}&=\frac{2}{n+1}\left\{ 1-2\sum _{k=1}^{\infty }\frac{\cos {2k\theta _{\nu -1}}}{4k^{2}-1}+2\sum _{k=n/2}^{\infty }\frac{\cos {2k\theta _{\nu -1}}}{4k^{2}-1}-\frac{\cos n\theta _{\nu -1}}{n-1}\right\} \\ &<\frac{2}{n+1}\left\{ \frac{\pi }{2}\sin \theta _{\nu -1}+2\sum _{k=n/2}^{\infty }\frac{1}{4k^{2}-1}+\frac{1}{n-1}\right\} \\ &<\frac{2}{n+1}\left\{ \frac{\pi }{2}\theta _{\nu -1}+\frac{2}{n-1}\right\} , \end{aligned}$$

hence,

$$\begin{aligned} w_{\nu -1}<\frac{\nu -1}{(n+1)^{2}}\pi ^{2}+\frac{4}{(n-1)(n+1)},\ \ \nu =2,3,\ldots ,n/2+1. \end{aligned}$$
(4.3)
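Bound (4.3) admits a quick numerical spot-check. The sketch below evaluates \(w_{\nu }\) from the finite cosine sum obtained by undoing the tail split in the first line of the display above; this is a reconstruction of (2.1) (for n even), which itself lies outside this section, so the form used here is an assumption:

```python
import numpy as np

def w_fejer(n, nu):
    """Weight w_nu for n even, from the reconstructed finite cosine sum:
    w_nu = (2/(n+1)) * (1 - 2*sum_{k=1}^{(n-2)/2} cos(2k theta)/(4k^2-1)
                          - cos(n theta)/(n-1))."""
    theta = nu * np.pi / (n + 1)
    k = np.arange(1, (n - 2) // 2 + 1)
    s = np.sum(np.cos(2 * k * theta) / (4.0 * k**2 - 1.0))
    return (2.0 / (n + 1)) * (1.0 - 2.0 * s - np.cos(n * theta) / (n - 1))

def bound_43(n, nu):
    """Right-hand side of (4.3); nu indexed as in the text (weight w_{nu-1})."""
    return (nu - 1) * np.pi**2 / (n + 1)**2 + 4.0 / ((n - 1) * (n + 1))

# Check (4.3) for all even n up to 40 and nu = 2, ..., n/2+1.
ok = all(w_fejer(n, nu - 1) < bound_43(n, nu)
         for n in range(2, 41, 2) for nu in range(2, n // 2 + 2))
```

The reconstructed weights also sum to 2 for each n, as exactness for constants requires.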

Also, using \(2\theta /\pi \le \sin \theta \le \theta ,\ 0\le \theta \le \pi /2\), we get

$$\begin{aligned} \frac{2\sin ^{2}\theta _{\nu -1}\cos n\theta _{\nu -1}}{(n+1)^{2}\sin (\theta _{\nu -1}+\theta _{c})\sin (\theta _{\nu -1}-\theta _{c})}&<\frac{2\theta _{\nu -1}^{2}}{(n+1)^{2}\sin \frac{(2\nu -1)\pi }{2(n+1)}\sin \frac{(2\nu -3)\pi }{2(n+1)}}\\ &<\frac{2}{(n+1)^{2}}\frac{(\nu -1)^{2}}{(2\nu -3)(2\nu -1)}\pi ^{2},\quad \nu =2,3,\ldots ,n/2+1, \end{aligned}$$

which inserted, together with (4.3), into (2.2) with \(\nu -1\) in place of \(\nu \), gives

$$\begin{aligned} \begin{array}{r} \overline{w}_{\nu }=\bar{w}_{\nu -1}<\displaystyle \frac{\nu -1}{(n+1)^{2}}\pi ^{2}+\frac{4}{(n-1)(n+1)} +\frac{2}{(n+1)^{2}}\frac{(\nu -1)^{2}}{(2\nu -3)(2\nu -1)}\pi ^{2},\\ \nu =2,3,\ldots ,n/2+1. \end{array} \end{aligned}$$
(4.4)

Moreover,

$$\begin{aligned} \begin{array}{rcl} \overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }&{}=&{}{\tau }_{\nu -2}-{\tau }_{\nu -1}=\displaystyle \cos {\frac{(\nu -2)\pi }{n+1}}-\cos {\frac{(\nu -1)\pi }{n+1}}\\ &{}=&{}\displaystyle 2\sin {\frac{(2\nu -3)\pi }{2(n+1)}}\sin {\frac{\pi }{2(n+1)}}>\frac{2(2\nu -3)}{(n+1)^{2}},\ \ \nu =2,3,\ldots ,n/2+1. \end{array} \end{aligned}$$
(4.5)

Now, combining (4.4) and (4.5), we get

$$\begin{aligned} \begin{array}{r} \displaystyle \frac{\overline{w}_{\nu }}{\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }}<\frac{\nu -1}{2(2\nu -3)}\pi ^{2}+\frac{2(n+1)}{n-1}\frac{1}{2\nu -3} +\frac{(\nu -1)^{2}}{(2\nu -3)^{2}(2\nu -1)}\pi ^{2},\\ \nu =2,3,\ldots ,n/2+1, \end{array} \end{aligned}$$

hence,

$$\begin{aligned} \frac{\overline{w}_{\nu }}{\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }}<\frac{5}{6}\pi ^{2}+6,\ \ \nu =2,3,\ldots ,n/2+1. \end{aligned}$$
(4.6)

On the other hand, for n even and \(\nu =1\), we have, in view of \(2\theta /\pi \le \sin \theta \le \theta ,\ 0\le \theta \le \pi /2\), from (2.3),

$$\begin{aligned} \overline{w}_{1}=\bar{w}_{c}^{(+)}=\frac{\sin \theta _{c}}{n+1}<\frac{\theta _{c}}{n+1}=\frac{\pi }{2(n+1)^{2}}, \end{aligned}$$

and

$$\begin{aligned} \overline{\tau }_{0}-\overline{\tau }_{1}=1-\tau _{c}=1-\cos {\frac{\pi }{2(n+1)}}=2\sin ^{2}{\frac{\pi }{4(n+1)}}>\frac{1}{2(n+1)^{2}}, \end{aligned}$$

which, combined together, yield

$$\begin{aligned} \frac{\overline{w}_{1}}{\overline{\tau }_{0}-\overline{\tau }_{1}}<\pi . \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.6_1) \end{aligned}$$

Now, from (4.6) and (4.6\(_1\)), we finally obtain

$$\begin{aligned} \overline{w}_{\nu }<\left( \frac{5}{6}\pi ^{2}+6\right) (\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }) \qquad \qquad \qquad \qquad \qquad \qquad (4.7_{e}) \end{aligned}$$

for all n even and \(\nu =1,2,\ldots ,n/2+1\).

In a like manner, we show

$$\begin{aligned} \overline{w}_{\nu }<\left( \frac{5}{6}\pi ^{2}+6\right) (\overline{\tau }_{\nu -1}-\overline{\tau }_{\nu }) \qquad \qquad \qquad \qquad \qquad \qquad (4.7_{o}) \end{aligned}$$

for all n odd and \(\nu =1,2,\ldots ,(n+1)/2\).

Putting (\(4.7_{e}\)) and (\(4.7_{o}\)) together, we conclude (4.2). \(\square \)
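The key inequality (4.2) can also be spot-checked numerically. Since the explicit weight formulas (2.1)–(2.3) are not reproduced in this section, the sketch below recovers the weights of (1.6) as the unique interpolatory weights on its \(n+2\) nodes (justified by the formula's degree of exactness \(\ge n+1\)), by imposing exactness on \(U_{0},\ldots ,U_{n+1}\):

```python
import numpy as np
from scipy.special import eval_chebyu

def corrected_rule(n):
    """Nodes (in decreasing order) and interpolatory weights of formula (1.6)."""
    tau = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    tau_c = np.cos(np.pi / (2 * (n + 1)))
    x = np.concatenate(([tau_c], tau, [-tau_c]))          # n + 2 nodes
    m = n + 2
    # Exactness conditions: sum_nu w_nu U_j(x_nu) = int_{-1}^{1} U_j(t) dt,
    # where the moments are 2/(j+1) for j even and 0 for j odd.
    A = np.array([eval_chebyu(j, x) for j in range(m)])
    b = np.array([0.0 if j % 2 else 2.0 / (j + 1) for j in range(m)])
    return x, np.linalg.solve(A, b)

C = 5.0 * np.pi**2 / 6.0 + 6.0                             # constant in (4.2)
ratios = []
for n in range(1, 31):
    x, w = corrected_rule(n)
    xbar = np.concatenate(([1.0], x))                      # tau_bar_0 = 1
    for nu in range(1, (n + 2) // 2 + 1):                  # nu = 1, ..., [(n+2)/2]
        ratios.append(w[nu - 1] / (xbar[nu - 1] - xbar[nu]))
```

All ratios stay below \(\frac{5}{6}\pi ^{2}+6\approx 14.22\); as a consistency check, for n even the first weight reproduces \(\bar{w}_{c}^{(+)}=\sin \theta _{c}/(n+1)\) from (2.3).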

5 Numerical examples

Our examples focus on comparing formula (1.6) with the Clenshaw-Curtis formula, on showing the efficiency of bounds (3.36)–(3.37), and on demonstrating the ability of formula (1.6) to integrate functions with monotonic singularities at one or both endpoints of \([-1,1]\).

Example 5.1

We approximate the integral \(\int _{-1}^{1}f(t)dt\) by means of formula (1.6) or the Clenshaw-Curtis formula (1.1), when f(t) is any one of the four functions \(e^{-t^{2}},\ 1/(1+16t^{2}),\ e^{-1/t^{2}}\) or \(|t|^{3}\), borrowed from [17], where they were used for comparing the Clenshaw-Curtis formula with the Gauss formula. The first function is entire, the second analytic, the third \(C^{\infty }\) and the fourth \(C^{2}\). The modulus of the actual error is given in Table 2. (Numbers in parentheses indicate decimal exponents.) All computations were performed on a SUN Ultra 5 computer in quad precision (machine precision \(1.93\times 10^{-34}\)). Whenever the actual error is close to machine precision, we enter instead “m.p.” (for machine precision).

Table 2 Actual error in computing \(\int _{-1}^{1}f(t)dt\) by either formula (1.6) or formula (1.1)

Our numerical results confirm what was theoretically proved in Theorem 3.2 (cf. Remark 3.1), namely, that formulae (1.6) and (1.1) have the same rate of convergence. Indeed, the actual errors of both formulae for each of our test functions are very close or almost identical to each other.
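The comparison of Example 5.1 can be reproduced along the following lines (a double-precision sketch, not the authors' quad-precision computation; both formulae are reconstructed as interpolatory rules on their node sets, and \(f(t)=e^{-t^{2}}\) with n = 20 is taken as an illustrative case):

```python
import numpy as np
from math import erf, pi, sqrt
from scipy.special import eval_chebyu

def interp_weights(x):
    """Weights of the interpolatory quadrature formula on the nodes x,
    obtained by imposing exactness on U_0, ..., U_{len(x)-1}."""
    m = len(x)
    A = np.array([eval_chebyu(j, x) for j in range(m)])
    b = np.array([0.0 if j % 2 else 2.0 / (j + 1) for j in range(m)])
    return np.linalg.solve(A, b)

n = 20
tau = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
tau_c = np.cos(np.pi / (2 * (n + 1)))
x_cc = np.concatenate(([1.0], tau, [-1.0]))        # Clenshaw-Curtis nodes (1.1)
x_co = np.concatenate(([tau_c], tau, [-tau_c]))    # nodes of formula (1.6)

f = lambda t: np.exp(-t * t)
exact = sqrt(pi) * erf(1.0)                        # int_{-1}^{1} exp(-t^2) dt
err_cc = abs(interp_weights(x_cc) @ f(x_cc) - exact)
err_co = abs(interp_weights(x_co) @ f(x_co) - exact)
```

Both errors are already tiny at this n, and of comparable size, in line with the behavior reported in Table 2.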

Example 5.2

We want to approximate the integral

$$\begin{aligned} \int _{-1}^{1}\frac{t^{2}}{4+t^{2}}dt=2-4\arctan {(1/2)}, \end{aligned}$$
(5.1)

by means of formula (1.6).

The function \(f(z)=\frac{z^{2}}{4+z^{2}}=\sum _{k=0}^{\infty }(-1)^{k}\frac{z^{2k+2}}{2^{2k+2}}\) is holomorphic in \(C_{2}=\{z\in \mathbb {C}:|z|<2 \}\), hence, taking into account that formula (1.6) has degree of exactness \(2[(n+1)/2]+1\), we find

$$\begin{aligned} |f|_{r}=\frac{r^{2[(n+1)/2]+2}}{2^{2[(n+1)/2]+2}}, \end{aligned}$$

thus, \(f\in X_{2}\). Then, from (3.36),

$$\begin{aligned} |\bar{R}_{n}(f)|\le \inf _{1<r\le 2}(\Vert \bar{R}_{n}\Vert |f|_{r}), \end{aligned}$$
(5.2)

with \(\Vert \bar{R}_{n}\Vert \) given by (3.42). As, in addition,

$$\begin{aligned} \max _{|z|=r}|f(z)|=\frac{r^{2}}{4-r^{2}}, \end{aligned}$$
(5.3)

we have, from (3.37),

$$\begin{aligned} |\bar{R}_{n}(f)|\le \inf _{1<r<2}\left( \Vert \bar{R}_{n}\Vert \frac{r^{2}}{4-r^{2}}\right) . \end{aligned}$$
(5.4)
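Equality (5.3) holds because \(\min _{|z|=r}|4+z^{2}|=4-r^{2}\), the minimum being attained at \(z=\pm ir\); a brief numerical confirmation, sampling the circle \(|z|=r\) (the sample size is an arbitrary choice):

```python
import numpy as np

def max_on_circle(r, m=200000):
    """Sampled maximum of |z^2 / (4 + z^2)| over the circle |z| = r, r < 2.
    m divisible by 4 ensures z = +/- i r is among the sample points."""
    z = r * np.exp(2j * np.pi * np.arange(m) / m)
    return np.max(np.abs(z**2 / (4.0 + z**2)))

r = 1.6
closed_form = r**2 / (4.0 - r**2)                  # right-hand side of (5.3)
```

Here `max_on_circle(1.6)` agrees with `closed_form` to near machine precision.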

Our results are summarized in Table 3. The value of r at which the infimum in each of the bounds (5.2) and (5.4) is attained is given in the column headed \(r_{opt}\), placed immediately before the column of the corresponding bound. The last column gives the modulus of the actual error.

Table 3 Error bounds (5.2), (5.4) and actual error in computing the integral (5.1)

Bound (5.2) is quite reasonable, overestimating the actual error by no more than two orders of magnitude. Bound (5.4) is inferior to (5.2), particularly as n increases, because \(r_{opt}\) then approaches 2, which inflates the value of \(\max _{|z|=r}|f(z)|\) in (5.3). Using bound (5.4) to estimate not the error but the appropriate value of n yields an overestimation of n by just a few units.

Example 5.3

We want to approximate the integral

$$\begin{aligned} \int _{0}^{1}t^{a}\ln {(e/t)}dt=\frac{a+2}{(a+1)^{2}},\ \ a>-1, \end{aligned}$$
(5.5)

whose integrand has a monotonic singularity at 0. This example was previously employed in [5, Sect. 4], [12, Sect. 5], and [16, Sect. 5], where the integral was approximated, for various values of a, by means of interpolatory and product-type formulae for Chebyshev weights based on the Chebyshev abscissae of any one of the four kinds. Here, we use formula (1.6), appropriately transformed onto the interval [0, 1],

$$\begin{aligned} \int _{0}^{1}f(t)dt\doteq \frac{1}{2}\bar{w}_{c}^{(+)}f\left( t_{c}^{(+)}\right) +\frac{1}{2}\sum _{\nu =1}^{n}\bar{w}_{\nu }f(t_{\nu })+\frac{1}{2}\bar{w}_{c}^{(-)}f\left( t_{c}^{(-)}\right) , \end{aligned}$$
(5.6)

where

$$\begin{aligned} t_{c}^{(+)}=\frac{1}{2}(1+\tau _{c}),\ \ t_{\nu }=\frac{1}{2}(1+\tau _{\nu }),\ \ \nu =1,2,\ldots ,n,\ \ t_{c}^{(-)}=\frac{1}{2}(1-\tau _{c}), \end{aligned}$$

and where we set \(n-2\) in place of n in order to have an n-point formula. For comparison, we also compute integral (5.5) by means of the Fejér formula of the second kind (1.5),

$$\begin{aligned} \int _{0}^{1}f(t)dt\doteq \frac{1}{2}\sum _{\nu =1}^{n}w_{\nu }f(t_{\nu }), \end{aligned}$$
(5.7)

or the Gauss formula for the Chebyshev weight function of the second kind (cf. [16, Sect. 5]),

$$\begin{aligned} \int _{0}^{1}f(t)dt\doteq \frac{\pi }{n+1}\sum _{\nu =1}^{n}[t_{\nu }(1-t_{\nu })]^{1/2}f(t_{\nu }). \end{aligned}$$
(5.8)
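Formula (5.8) is fully explicit, so it can be sketched directly; the snippet below also evaluates the integral (5.5) for a = 1 (the value a = 1 and the sample size n = 400 are illustrative choices, not taken from Table 4):

```python
import numpy as np

def gauss_cheb2_01(f, n):
    """Gauss-Chebyshev (second kind) product rule (5.8) on [0, 1]:
    (pi/(n+1)) * sum_nu sqrt(t_nu (1 - t_nu)) f(t_nu), t_nu = (1 + tau_nu)/2."""
    theta = np.arange(1, n + 1) * np.pi / (n + 1)
    t = 0.5 * (1.0 + np.cos(theta))
    return np.pi / (n + 1) * np.sum(np.sqrt(t * (1.0 - t)) * f(t))

a = 1.0
exact = (a + 2.0) / (a + 1.0)**2                   # right-hand side of (5.5)
approx = gauss_cheb2_01(lambda t: t**a * np.log(np.e / t), 400)
```

For a = 1 the error at n = 400 is already quite small, while for negative a convergence is markedly slower, consistent with the behavior reported in Table 4.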

The moduli of the actual errors, in units of \(10^{-6}\), are shown in Table 4.

Our numerical results indicate that formula (5.6) is more accurate than formulae (5.7) and, particularly, (5.8) for all values of a, probably because the nodes of the former are distributed closer to the point of singularity. For \(a<0\), all quadrature formulae converge extremely slowly, apparently due to the combined effect of two singularities in the integrand of (5.5), while things improve dramatically as a increases from 0 to 1.

Table 4 Moduli of the actual errors in computing the integral (5.5)