1 Introduction

Algebraic interpolation polynomials with multiple nodes, known as Hermite polynomials, are well investigated and are successfully used to solve a wide range of applied problems. Their trigonometric analogues have been studied much less, and many questions concerning the existence, uniqueness, and approximation properties of such polynomials still remain open.

Early studies of trigonometric interpolation polynomials with multiple nodes apparently began in the 1930s. S. M. Lozinsky [1] considered the approximation of functions of a complex variable, regular in the unit disk and continuous on its boundary, by trigonometric interpolation polynomials with multiple nodes located on the unit circle. He was the first to call such polynomials Hermite–Fejér polynomials.

E. O. Zeel [2, 3], generalizing the results of his predecessors [4,5,6,7], proved the existence of trigonometric interpolation polynomials of arbitrary multiplicity with respect to the system of equidistant nodes for real-valued \(2\pi \)-periodic functions. Moreover, he obtained the explicit form of the corresponding fundamental polynomials and established conditions for the uniform convergence of such polynomials to the interpolated function, depending on the parity of the multiplicity and the smoothness of the interpolated function.

B. G. Gabdulkhayev [8] obtained, in a convenient form, order-optimal estimates of the rate of convergence of trigonometric interpolation polynomials of the first multiplicity to continuously differentiable functions. In the same work he also investigated the properties of quadrature formulas, based on such polynomials, for singular integrals with the Hilbert kernel. Relying on the results of [3] and using Gabdulkhayev's technique [8], Yu. Soliyev [9, 10] systematically investigated quadrature formulas based on interpolation polynomials of various multiplicities for singular integrals with the Cauchy and Hilbert kernels.

In this paper, a calculation scheme of the collocation method based on trigonometric interpolation polynomials with multiple nodes is constructed and justified for the complete singular integro-differential equation in the periodic case. The convergence of the method is proved, and error estimates for the approximate solution are obtained.

2 Statement of the Problem

Consider the singular integro-differential equation

$$\begin{aligned} \sum _{\nu =0}^1(a_{\nu }(t)x^{(\nu )}(t)+b_{\nu }(t)(Jx^{(\nu )})(t)+ (J_0h_{\nu }x^{(\nu )})(t))=y(t), \ t\in [0,2\pi ], \end{aligned}$$
(1)

where x is the unknown function; \(a_{\nu }\), \(b_{\nu }\), \(h_{\nu }\) (in both variables), \(\nu =0,1\), and y are known \(2\pi \)-periodic functions; the singular integrals

$$ (Jx^{(\nu )})(t)=\frac{1}{2\pi } \int _{0}^{2\pi }x^{(\nu )}(\tau )\cot \frac{\tau -t}{2}d\tau , \ \nu =0,1, \ t\in [0,2\pi ], $$

are understood in the sense of the Cauchy–Lebesgue principal value, and

$$ (J_0h_{\nu }x^{(\nu )})(t)=\frac{1}{2\pi }\int _0^{2\pi }h_{\nu }(t,\tau )x^{(\nu )}(\tau )d\tau , \ \nu =0,1, \ t\in [0,2\pi ], $$

are regular integrals.
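For illustration, the principal value above can be evaluated numerically by a standard regularization: the cotangent kernel has zero principal-value mean over a period, so \(x(\tau )\) may be replaced by \(x(\tau )-x(t)\), after which the integrand is continuous for differentiable x and the periodic midpoint rule applies. The Python sketch below (the function name and grid size are illustrative, not part of the paper) is checked against the classical identities \((J\cos m\,\cdot )(t)=-\sin mt\) and \((J\sin m\,\cdot )(t)=\cos mt\), \(m\ge 1\), valid for this normalization of the kernel:

```python
import math

def pv_hilbert(x, t, N=512):
    """(Jx)(t) = (1/2pi) p.v. integral of x(tau)*cot((tau-t)/2) over a period.

    The cot kernel has zero principal-value mean, so x(tau) is replaced by
    x(tau) - x(t); the regularized integrand is continuous, and the midpoint
    rule on a grid shifted away from tau = t applies."""
    h = 2.0 * math.pi / N
    total = 0.0
    for i in range(N):
        s = (i + 0.5) * h                  # tau = t + s, s never hits 0
        total += (x(t + s) - x(t)) / math.tan(s / 2.0)
    return total * h / (2.0 * math.pi)
```

For a trigonometric polynomial the regularized integrand is itself a trigonometric polynomial, so the rule is exact (up to roundoff) once N exceeds twice the degree.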

3 Calculation Scheme

Let’s denote by \({\mathbb N}\) the set of natural numbers, by \({\mathbb N}_0\) the set of natural numbers with zero added, by \({\mathbb R}\) the set of real numbers, and by \({\mathbb C}\) the set of complex numbers.

Let’s fix a natural number \(n\in {\mathbb N}\). We seek an approximate solution of Eq. (1) in the form of the Hermite–Fejér polynomial

$$\begin{aligned} x_n(t)=\frac{1}{n^2}\sum _{k=0}^{n-1}(x_{2k}+x^\prime _{2k} \sin (t-t_{2k}))\frac{\sin ^{2}{\displaystyle \frac{n}{2}(t-t_{2k})}}{\sin ^{2}{\displaystyle \frac{t-t_{2k}}{2}}}, \ t\in [0,2\pi ], \end{aligned}$$
(2)

here \(t_{2k}\), \(k=0,1,...,n-1\), are the even-numbered nodes of the mesh

$$\begin{aligned} t_k=\frac{\pi k}{n}, \quad k=0,1,...,2n-1. \end{aligned}$$
(3)
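For illustration, the following Python sketch (names are illustrative, not part of the paper) evaluates the polynomial (2) for given nodal data and confirms the Hermite interpolation property \(x_n(t_{2k})=x_{2k}\), \(x^\prime _n(t_{2k})=x^\prime _{2k}\); the limit value \(n^2\) is used for the removable singularity of the kernel at its own node:

```python
import math

def hermite_fejer(t, c, d):
    """Value at t of the Hermite-Fejer polynomial (2) built from the data
    c[k] = x_{2k}, d[k] = x'_{2k} at the even-numbered nodes t_{2k} = 2*pi*k/n."""
    n = len(c)
    total = 0.0
    for k in range(n):
        u = t - 2.0 * math.pi * k / n
        s = math.sin(u / 2.0)
        if abs(s) < 1e-12:
            kern = float(n * n)        # limit of sin^2(n u/2)/sin^2(u/2) at u = 0
        else:
            kern = math.sin(n * u / 2.0) ** 2 / s ** 2
        total += (c[k] + d[k] * math.sin(u)) * kern
    return total / n ** 2
```

Sampling at the even-numbered nodes returns the prescribed values, and a central difference recovers the prescribed derivatives.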

We determine the unknown coefficients \(x_{2k}\), \(x^\prime _{2k}\), \(k=0,1,...,n-1\), of the polynomial (2) as a solution of the system of algebraic equations

$$\begin{aligned} \sum _{\nu =0}^1(a_{\nu }(t_k)x^{(\nu )}_n(t_k)+b_{\nu }(t_k) (Jx^{(\nu )}_n)(t_k)+(J_0P^{\tau }_{2n}(h_{\nu }x^{(\nu )}_n))(t_k))= \end{aligned}$$
(4)
$$ =y(t_k), \quad k=0,1,...,2n-1, $$

where

$$ P^{\tau }_{2n}(h_{\nu }x^{(\nu )}_n)(t,\tau )=\frac{1}{2n} \sum _{k=0}^{2n-1}h_{\nu }(t,t_k)x^{(\nu )}_n(t_k) \frac{\sin n (\tau -t_k)\cos {\displaystyle \frac{\tau -t_k}{2}}}{\sin {\displaystyle \frac{\tau -t_k}{2}}}, $$
$$ \nu =0,1, \ t,\tau \in [0,2\pi ], $$

is the Lagrange interpolation operator with respect to the nodes (3), applied in the variable \(\tau \) to the functions \(h_{\nu }x^{(\nu )}_n\), \(\nu =0,1\), and
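The fundamental function \(\sin n(\tau -t_k)\cos \frac{\tau -t_k}{2}\,/\,(2n\sin \frac{\tau -t_k}{2})\) equals 1 at its own node and 0 at the remaining nodes (3), and the resulting operator reproduces trigonometric polynomials of order up to \(n-1\). A Python sketch (names are illustrative) checking both facts for \(f(\tau )=\cos \tau \):

```python
import math

def lagrange_2n(vals, tau, n):
    """Lagrange interpolant w.r.t. the 2n nodes t_k = pi*k/n of the mesh (3);
    vals[k] is the value prescribed at t_k."""
    total = 0.0
    for k in range(2 * n):
        u = tau - math.pi * k / n
        s = math.sin(u / 2.0)
        if abs(s) < 1e-12:
            w = 1.0                 # the fundamental function equals 1 at its own node
        else:
            w = math.sin(n * u) * math.cos(u / 2.0) / (2 * n * s)
        total += vals[k] * w
    return total
```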

$$ (Jx_n)(t_k)=\frac{1}{n}\sum _{j=0}^{n-1}(\alpha ^0_{0,k-2j}x_{2j} +\alpha ^1_{0,k-2j}x^\prime _{2j}), \quad k=0,1,...,2n-1, $$
$$ \alpha ^0_{0,r}=\{-\cot \frac{r\pi }{2n} \quad \text{ for } \quad r\ne 0, \quad 0 \quad \text{ for } \quad r=0\}, $$
$$ \alpha ^1_{0,r}=\{-\frac{1}{n} \quad \text{ for } \quad r\ne 0, \quad 2-\frac{1}{n} \quad \text{ for } \quad r=0\}; $$
$$ (Jx^\prime _n)(t_{2k})=\frac{1}{n}\sum _{j=0}^{n-1} (\alpha ^0_{1,2k-2j}x_{2j}+\alpha ^1_{1,2k-2j} x^\prime _{2j}), \quad k=0,1,...,n-1, $$
$$ (Jx^\prime _n)(t_{2k+1})=\frac{1}{n}\sum ^{n-1}_{j=0} \alpha ^0_{1,2k-2j+1}x_{2j}, \quad k=0,1,...,n-1, $$
$$ \alpha ^0_{1,r}=\{\csc ^2\frac{r\pi }{2n} \quad \text{ for } \quad r\ne 0, \quad -\frac{n^2-1}{3} \quad \text{ for } \quad r=0\}, $$
$$ \alpha ^1_{1,r}=\{(-1)^r\csc \frac{r\pi }{2n} \quad \text{ for } \quad r\ne 0, \quad 0 \quad \text{ for } \quad r=0\}; $$
$$ (J_0P^{\tau }_{2n}(h_{\nu }x^{(\nu )}_n))(t_k)=\frac{1}{2n} \sum ^{2n-1}_{j=0}h_{\nu }(t_k,t_j)x^{(\nu )}_n(t_j), \ \nu =0,1, \quad k=0,1,...,2n-1, $$

are the quadrature formulae.
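The first of these formulae can be verified numerically: the Python sketch below (names are illustrative) computes \((Jx_n)(t_k)\) once through the weights \(\alpha ^0_{0,r}\), \(\alpha ^1_{0,r}\) and once by direct principal-value integration of the polynomial (2), using the zero-mean regularization of the cotangent kernel, which is exact for trigonometric polynomials once the grid is fine enough:

```python
import math

def x_n(t, c, d):
    # Hermite-Fejer polynomial (2); c[k] = x_{2k}, d[k] = x'_{2k}, n = len(c)
    n = len(c)
    total = 0.0
    for k in range(n):
        u = t - 2.0 * math.pi * k / n
        s = math.sin(u / 2.0)
        kern = float(n * n) if abs(s) < 1e-12 else math.sin(n * u / 2.0) ** 2 / s ** 2
        total += (c[k] + d[k] * math.sin(u)) * kern
    return total / n ** 2

def J_by_weights(c, d, k):
    # (J x_n)(t_k) through the coefficients alpha^0_{0,r}, alpha^1_{0,r}
    n = len(c)
    acc = 0.0
    for j in range(n):
        r = k - 2 * j
        a0 = 0.0 if r == 0 else -1.0 / math.tan(r * math.pi / (2 * n))
        a1 = 2.0 - 1.0 / n if r == 0 else -1.0 / n
        acc += a0 * c[j] + a1 * d[j]
    return acc / n

def J_direct(f, t, N=512):
    # principal value via the zero-mean regularization of the cot kernel
    h = 2.0 * math.pi / N
    return sum((f(t + (i + 0.5) * h) - f(t)) / math.tan((i + 0.5) * h / 2.0)
               for i in range(N)) * h / (2.0 * math.pi)
```

Both evaluations agree at all \(2n\) collocation nodes, which is consistent with the formulae being exact identities for the polynomial (2).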

4 Some Preliminaries

Let’s denote by \(\mathrm{C}\) the space of continuous \(2\pi \)-periodic functions with the usual norm

$$ \Vert f\Vert _\mathrm{C}=\sup _{t\in {\mathbb R}}\mid f(t)\mid , \ f\in \mathrm{C}. $$

For fixed \(m\in {\mathbb N}_0\) denote by \(\mathrm{C}^m \subset \mathrm{C}\) the set of functions on \({\mathbb R}\) with continuous derivatives up to order m (\(\mathrm{C}^0=\mathrm{C}\)). The norm on \(\mathrm{C}^m\) is defined as follows:

$$ \Vert f\Vert _{\mathrm{C}^m}= \max _{0 \le \nu \le m} \Vert f^{({\nu })}\Vert _\mathrm{C}, \ f\in \mathrm{C}^m. $$

Let’s denote by \(\mathrm{H}_{\alpha }\) the set of Hölder continuous functions of order \(\alpha \in {\mathbb R}\), \(0<\alpha \le 1\). For a function f of this set denote by

$$ H(f;\alpha )=\sup _{t \not = \tau \atop {t,\tau \in {\mathbb R}}} \frac{\mid f(t)-f(\tau )\mid }{\mid t-\tau \mid ^{\alpha }}, $$

the smallest constant in the Hölder condition for the function f. With the help of this constant we define the norm on the set \(\mathrm{H}_{\alpha }\), namely,

$$ \Vert f\Vert _{\mathrm{H}_{\alpha }}=\max \{\Vert f\Vert _\mathrm{C}, H(f;\alpha )\}. $$

From the set \(\mathrm{C}^m\), for fixed \(\alpha \in {\mathbb R}\), \(0 < \alpha \le 1\), we select the subset \(\mathrm{H}^m_{\alpha }\) of functions whose derivatives of order m satisfy the Hölder condition

$$ \mid f^{(m)}(t)-f^{(m)}(\tau )\mid \le H(f^{(m)};\alpha )\mid t-\tau \mid ^{\alpha }, \ t,\tau \in {\mathbb R}. $$

The norm on the set \(\mathrm{H}^m_{\alpha }\) \((\mathrm{H}^0_{\alpha }=\mathrm{H}_{\alpha })\) is defined as follows:

$$ \Vert f\Vert _{\mathrm{H}^m_{\alpha }}=\max \{\Vert f\Vert _{\mathrm{C}^m}, H(f^{(m)};\alpha )\}. $$
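Numerically, \(H(f;\alpha )\) and \(\Vert f\Vert _{\mathrm{H}_{\alpha }}\) can be approximated from below on a grid. The sketch below (grid size and sample function are illustrative) does this for \(f(t)=|\sin t|^{1/2}\), which belongs to \(\mathrm{H}_{1/2}\) with \(H(f;1/2)\le 1\), since \(|\sqrt{a}-\sqrt{b}|\le \sqrt{|a-b|}\) and \(||\sin t|-|\sin s||\le |t-s|\):

```python
import math

def holder_constant(f, alpha, N=400):
    """Grid lower bound for H(f; alpha) = sup |f(t)-f(s)|/|t-s|^alpha,
    the smallest Holder constant of a 2*pi-periodic function f."""
    ts = [2.0 * math.pi * i / N for i in range(N)]
    fv = [f(t) for t in ts]
    best = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            best = max(best, abs(fv[i] - fv[j]) / (ts[j] - ts[i]) ** alpha)
    return best

def holder_norm(f, alpha, N=400):
    """Discrete version of ||f||_{H_alpha} = max(||f||_C, H(f; alpha))."""
    sup = max(abs(f(2.0 * math.pi * i / N)) for i in range(N))
    return max(sup, holder_constant(f, alpha, N))
```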

Denote by \({\mathscr {T}}_n\) the set of all trigonometric polynomials of order at most n. In what follows we need two lemmas from the paper [11].

Lemma 1

Let the numbers \(\alpha ,\beta \in {\mathbb R}\), \(0<\alpha \le 1\), \(0<\beta \le 1\), \(m,r\in {\mathbb N}_0\), \(m \le r\), be such that \(m+\beta \le r+\alpha \). Then for any \(n\in {\mathbb N}\) and any function \(x\in \mathrm{H}^r_{\alpha }\) the following estimate is valid:

$$ \Vert x-T_n\Vert _{\mathrm{H}^m_{\beta }}\le cn^{m-r-\alpha +\beta } H(x^{(r)};\alpha ), $$

where \(T_n\in {\mathscr {T}}_n\) is a polynomial of the best approximation of the function x.

Lemma 2

For any \(n\in {\mathbb N}\), \(\beta \in {\mathbb R}\), \(0<\beta \le 1\), and an arbitrary trigonometric polynomial \(T_n\in {\mathscr {T}}_n\) the following estimate is valid:

$$ \Vert T_n\Vert _{\mathrm{H}_{\beta }} \le (1+2^{1-\beta }n^{\beta })\Vert T_n\Vert _\mathrm{C}. $$
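Lemma 2 is a Bernstein-type inequality relating the Hölder norm of a trigonometric polynomial to its uniform norm. It can be illustrated numerically: a grid estimate of the left-hand side is a lower bound for \(\Vert T_n\Vert _{\mathrm{H}_{\beta }}\), so the inequality must hold for it as well. The sketch below (names are illustrative) uses \(T_n(t)=\cos nt\), for which \(\Vert T_n\Vert _\mathrm{C}=1\):

```python
import math

def grid_holder_norm(f, beta, N=600):
    # discrete lower bound for ||f||_{H_beta} = max(||f||_C, H(f; beta))
    ts = [2.0 * math.pi * i / N for i in range(N)]
    fv = [f(t) for t in ts]
    sup = max(abs(v) for v in fv)
    hold = max(abs(fv[i] - fv[j]) / (ts[j] - ts[i]) ** beta
               for i in range(N) for j in range(i + 1, N))
    return max(sup, hold)
```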

The operator \(P_{2n}\) is exact for any trigonometric polynomial of order \(n-1\) and, as shown in [12, 13], has the following properties:

$$\begin{aligned} \Vert P_{2n}\Vert _{\mathrm{H}^{m}_{\beta } \rightarrow \mathrm{H}^{m}_{\beta }}\le c\Vert P_{2n}\Vert _{\mathrm{C}\rightarrow \mathrm{C}}\le c\ln n \end{aligned}$$
(5)

for any \(n\in {\mathbb N}\), \(n\ge 2\), \(\beta \in {\mathbb R}\), \(0<\beta \le 1\), and any fixed number \(m\in {\mathbb N}\).

5 Justification

Theorem 1

Let Eq. (1) and the calculation scheme (2)–(4) of the method satisfy the following conditions:

A1 the functions \(a_{\nu }\), \(b_{\nu }\), \(\nu =0,1\), and y belong to \(\mathrm{H}_{\alpha }\) for some \(\alpha \in {\mathbb R}\), \(0<\alpha \le 1\); the functions \(h_{\nu }\), \(\nu =0,1\), belong to \(\mathrm{H}_{\alpha }\) with the same \(\alpha \) in each variable, uniformly with respect to the other variable,

A2 \(a^2_1(t)+b^2_1(t)\ne 0, \quad t\in [0,2\pi ],\)

A3 \(\kappa ={\mathrm{ind}}(a_1+ib_1)=0,\)

A4 Eq. (1) has a unique solution \(x^{*}\in \mathrm{H}^1_{\beta }\) for each right-hand side \(y\in \mathrm{H}_{\beta }\), \(0<\beta < \alpha \le 1\).

Then for all n large enough the system of equations (4) is uniquely solvable, and the approximate solutions \(x^{*}_n\) converge to the exact solution \(x^{*}\) of Eq. (1) in the norm of the space \(\mathrm{H}^1_{\beta }\):

$$ \Vert x^{*}-x^{*}_n\Vert _{\mathrm{H}^1_{\beta }}\le cn^{-\alpha +\beta } \ln n, \quad 0<\beta < \alpha \le 1. $$

Proof

Let us first show that assumption A4 of Theorem 1 is not vacuous, in the sense that there exist equations of the class under consideration satisfying A4.

Indeed, consider the equation

$$\begin{aligned} a_1(t)(x^{\prime }(t)+x(t))+b_1(t)((Jx^{\prime })(t)+(Jx)(t))=y(t), \quad t \in [0,2\pi ]. \end{aligned}$$
(6)

It is known [14] that the characteristic operator

$$ Bx \equiv a_1(t)x(t)+b_1(t)(Jx)(t), \quad B:H_{\beta } \rightarrow H_{\beta }, $$

of Eq. (6) is invertible, and the inverse operator \(B^{-1}:H_{\beta } \rightarrow H_{\beta }\) can be written out explicitly. Applying the operator \(B^{-1}\) to both sides of Eq. (6), we obtain the equivalent equation

$$\begin{aligned} x^{\prime }(t)+x(t)=(B^{-1}y)(t), \quad t \in [0,2\pi ]. \end{aligned}$$
(7)

In the pair of spaces \((H_{\beta }^1,H_{\beta }),\) Eq. (7) is a Fredholm equation. The homogeneous equation

$$ x^{\prime }(t)+x(t)=0, \quad t \in [0,2\pi ], $$

in the space of real-valued functions has the solution \(x(t)=ce^{-t}, \ t\in [0,2\pi ].\) However, this solution is not periodic unless \(c=0\). Hence in the space of periodic functions \(H_{\beta }^1\) the homogeneous equation has only the zero solution \(x(t)=0, \ t \in [0,2\pi ],\) and therefore Eq. (7), and with it Eq. (6), is uniquely solvable for any right-hand side \(y \in H_{\beta }\), \(0< \beta <\alpha \le 1\).

In the remaining part of the proof of Theorem 1 we use the method described in [15, 16].

Let us fix \(\beta \in {\mathbb R}\), \(0<\beta < \alpha \le 1\), and set \(\mathrm{X}=\mathrm{H}^1_{\beta }\), \(\mathrm{Y}=\mathrm{H}_{\beta }\). Then Eq. (1) can be rewritten as the operator equation

$$\begin{aligned} Qx=y, \quad Q:\mathrm{X}\rightarrow \mathrm{Y}. \end{aligned}$$
(8)

With each function \(x\in \mathrm{X}\) we associate the Cauchy integral

$$ \varPhi (z)=\varPhi (x;z)=\frac{1}{2\pi }\int \limits _0^{2\pi }\frac{x(\tau )d\tau }{1-z\exp (-i\tau )}, \ z\in {\mathbb C}. $$

Denote by \(x^{+}(t)\) and \(x^{-}(t)\) the limit values of the function \(\varPhi (z)\) as z tends to \(\exp (it)\) along any path inside and outside the unit circle, respectively. For the functions \(x^{+}\) and \(x^{-}\) the following Sokhotski formulae are valid (here I denotes the identity operator):

$$\begin{aligned} x^{\pm }(t)=\frac{1}{2}((\pm I-iJ)x)(t)+\frac{1}{2}J_0x, \ t\in {\mathbb R}. \end{aligned}$$
(9)

Differentiating (9) and using the known formulae

$$ (x^{\prime }(t))^{\pm }=(x^{\pm }(t))^{\prime }, \quad (Jx)^{\prime }(t)=(Jx^{\prime })(t), $$

we obtain

$$\begin{aligned} x^{\prime }(t)=x^{\prime +}(t)-x^{\prime -}(t), \quad (Jx^{\prime })(t)=i(x^{\prime +}(t)+x^{\prime -}(t)). \end{aligned}$$
(10)
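As an illustration (not used in the proof), relations (10) can be checked on Fourier modes: writing \(x^{\prime }=\sum _m c_m e^{imt}\), the boundary values of the Cauchy integral are \(x^{\prime +}=\sum _{m\ge 0}c_m e^{imt}\) and \(x^{\prime -}=-\sum _{m<0}c_m e^{imt}\). The Python sketch below (names are illustrative) does this for \(x(t)=\cos 2t\), i.e. \(x^{\prime }(t)=-2\sin 2t\), for which \((Jx^{\prime })(t)=-2\cos 2t\):

```python
import cmath
import math

def boundary_values(coeffs):
    """Split Fourier coefficients {m: c_m} of a function into the boundary
    values of its Cauchy integral: the '+' part keeps the modes m >= 0, the
    '-' part carries the modes m < 0 with the opposite sign."""
    plus = {m: c for m, c in coeffs.items() if m >= 0}
    minus = {m: -c for m, c in coeffs.items() if m < 0}
    return plus, minus

def evaluate(coeffs, t):
    return sum(c * cmath.exp(1j * m * t) for m, c in coeffs.items())

# x(t) = cos 2t, hence x'(t) = -2 sin 2t = i e^{2it} - i e^{-2it}
dcoeffs = {2: 1j, -2: -1j}
plus, minus = boundary_values(dcoeffs)
```

With these boundary values, both relations of (10), \(x^{\prime }=x^{\prime +}-x^{\prime -}\) and \(Jx^{\prime }=i(x^{\prime +}+x^{\prime -})\), hold pointwise.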

From conditions A2 and A3, according to [17], it follows that

$$ \frac{a_1-ib_1}{a_1+ib_1}=\frac{\psi ^{+}}{\psi ^{-}}, $$

where

$$ \psi (z)=e^{\theta (z)}, \quad \theta (z)=\varPhi (u;z), \quad u=\ln \frac{a_1-ib_1}{a_1+ib_1}, \quad z\in {\mathbb C}. $$

Then, using (10), the characteristic operator of Eq. (1) can be rewritten [14, 17] as

$$ a_1(t)x^{\prime }(t)+b_1(t)(Jx^{\prime })(t)=\frac{(a_1(t)+ib_1(t))}{\psi ^{-}(t)}(\psi ^{-}(t)x^{\prime +}(t)- \psi ^{+}(t)x^{\prime -}(t)). $$

We rewrite Eq. (1) or, in the other notation, Eq. (8) as the equivalent operator equation

$$\begin{aligned} Kx\equiv Ux+Vx=f, \quad K:\mathrm{X}\rightarrow \mathrm{Y}, \end{aligned}$$
(11)

where

$$ Ux=\psi ^{-}x^{\prime +}-\psi ^{+}x^{\prime -},\quad Vx=Ax+Bx+Wx, $$
$$ Ax=v^{-1}a_0x, \quad Bx=v^{-1}b_0 Jx, \quad Wx=v^{-1}\sum _{\nu =0}^1J^0h_{\nu }x^{(\nu )}, $$
$$ f=v^{-1}y, \quad v=\frac{a_1+ib_1}{\psi ^{-}}, $$

and, according to condition A2 of Theorem 1, \(v(t)\ne 0\), \(t \in [0,2\pi ]\). Equivalence here means that Eqs. (1) and (11) are solvable or unsolvable simultaneously and, if they are solvable, their solutions coincide.

Let \(\mathrm{X}_n\subset {\mathscr {T}}_n\) be the set of trigonometric polynomials of the form (2), and \(\mathrm{Y}_n=P_{2n}\mathrm{Y}\subset {\mathscr {T}}_n\). Then the system of equations (4) is equivalent to the operator equation

$$\begin{aligned} K_nx_n\equiv U_nx_n+V_nx_n=f_n, \quad K_n:\mathrm{X}_n\rightarrow \mathrm{Y}_n, \end{aligned}$$
(12)

where

$$ U_n=P_{2n}U, \quad V_nx_n=P_{2n}Ax_n+P_{2n}Bx_n+ W_nx_n, $$
$$ W_nx_n=P_{2n}\sum _{\nu =0}^1J_0(P^{\tau }_{2n}(h_{\nu } x^{(\nu )}_n)), \quad f_n=P_{2n}f. $$

Here equivalence means that if the system of equations (4) has a solution \(x^{*}_{2k}, x^{\prime *}_{2k},\) \(k=0,1,...,n-1\), then Eq. (12) also has a solution, which coincides with the polynomial

$$ x^{*}_n(t)=\frac{1}{n^2}\sum _{k=0}^{n-1} (x^{*}_{2k}+x^{\prime *}_{2k}\sin (t-t_{2k})) \frac{{\displaystyle \sin ^2\frac{n}{2}(t-t_{2k})}}{{\displaystyle \sin ^2\frac{t-t_{2k}}{2}}}, \ t\in {\mathbb R}. $$

Let’s prove now that the operators K and \(K_n\) are close to each other on \(\mathrm{X}_n\).

For any \(x_n\in \mathrm{X}_n\), using the polynomial of best approximation \(T_{n-1}\in {\mathscr {T}}_{n-1}\) for the function \(Ax_n\), we have

$$\begin{aligned} \Vert Ax_n-P_{2n}Ax_n\Vert _\mathrm{Y}\le (1+\Vert P_{2n}\Vert _{\mathrm{Y}\rightarrow \mathrm{Y}}) \Vert Ax_n-T_{n-1}\Vert _\mathrm{Y}. \end{aligned}$$
(13)

Now, taking into account the structural properties of the function \(Ax_n\), we can estimate

$$\begin{aligned} H(Ax_n;\alpha )\le c(\Vert x_n\Vert _C+\Vert x^{\prime }_n\Vert _\mathrm{C})\le c\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(14)

From (13), using Lemma 1, estimate (5), and the bound (14), we obtain

$$\begin{aligned} \Vert Ax_n-P_{2n}Ax_n\Vert _\mathrm{Y}\le c(n^{-\alpha +\beta }\ln n) \Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(15)

In the same way, we obtain

$$\begin{aligned} \Vert Bx_n-P_{2n}Bx_n\Vert _\mathrm{Y}\le c(n^{-\alpha +\beta }\ln n) \Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(16)

Taking into account the trigonometric degree of accuracy of the quadrature formulae for the regular integrals used in (4), we can write

$$\begin{aligned} \Vert Wx_n-W_nx_n\Vert _\mathrm{Y}\le \Vert \sum _{\nu =0}^1J^0h_{\nu }x^{(\nu )}_n -P_{2n}\sum _{\nu =0}^1J^0P^{\tau }_{2n}(h_{\nu }x^{(\nu )}_n)\Vert _\mathrm{Y} \le \end{aligned}$$
(17)
$$ \le \Vert \sum _{\nu =0}^1J^0h_{\nu }x^{(\nu )}_n-P_{2n} \sum _{\nu =0}^1J^0(h_{\nu }x^{(\nu )}_n)\Vert _\mathrm{Y} + \Vert P_{2n}\sum _{\nu =0}^1J^0(x^{(\nu )}_n(h_{\nu }-P_{2n}^{\tau } h_{\nu }))\Vert _\mathrm{Y}. $$

Now, using the polynomial of the best uniform approximation \(T_{n-1}\in {\mathscr {T}}_{n-1}\) for the function \(\sum \limits ^{1}_{\nu =0}J^0h_{\nu }x^{(\nu )}_n\), we get

$$\begin{aligned} \Vert \sum _{\nu =0}^1J^0(h_{\nu }x^{(\nu )}_n)-P_{2n} \sum _{\nu =0}^1J^0(h_{\nu }x^{(\nu )}_n)\Vert _\mathrm{Y}\le (1+\Vert P_{2n}\Vert _{\mathrm{Y}\rightarrow \mathrm{Y}})\Vert \sum _{\nu =0}^1 J^0h_{\nu }x^{(\nu )}_n-T_{n-1}\Vert _\mathrm{Y}. \end{aligned}$$
(18)

Taking into account the structural properties of the function \(h_{\nu }(t,\tau )\) in the variable t, it is easy to show that

$$\begin{aligned} H(\sum _{\nu =0}^1J^0(h_{\nu }x^{(\nu )}_n);\alpha )\le c\sum _{\nu =0}^1\Vert x_n^{(\nu )}\Vert _\mathrm{C}\le c\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(19)

From (18) and (19), using Lemma 1 and estimate (5), we get

$$\begin{aligned} \Vert \sum _{\nu =0}^1J^0h_{\nu }x^{(\nu )}_n-P_{2n} \sum _{\nu =0}^1J^0h_{\nu }x^{(\nu )}_n\Vert _\mathrm{Y}\le c(n^{-\alpha +\beta }\ln n)\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(20)

Further, taking into account the structural properties of the functions \(h_{\nu }(t,\tau )\) in the variable \(\tau \), the error estimates for the quadrature formulae, and Lemma 2, for the second summand on the right-hand side of (17) we get

$$\begin{aligned} \Vert P_{2n}\sum _{\nu =0}^1J^0(x^{(\nu )}_n(h_{\nu }- P^{\tau }_{2n}h_{\nu }))\Vert _\mathrm{Y}\le \end{aligned}$$
(21)
$$ \le c(n^{\beta }\ln n)\Vert \sum _{\nu =0}^1J^0(x^{(\nu )}_n(h_{\nu }- P^{\tau }_{2n}h_{\nu }))\Vert _\mathrm{C} \le c(n^{-\alpha +\beta }\ln n)\Vert x_n\Vert _\mathrm{X}. $$

Finally, using estimates (17), (20), and (21), we get

$$\begin{aligned} \Vert Wx_n-W_nx_n\Vert _\mathrm{Y}\le c(n^{-\alpha +\beta }\ln n)\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(22)

Let’s denote by \(\psi _{n-1}(t)\in {\mathscr {T}}_{n-1}\) the polynomial of best uniform approximation of the function \(\psi (t)\). Using the auxiliary operator

$$ \bar{U}_n:\mathrm{X}_n\rightarrow \mathrm{Y}_n, \quad \bar{U}_nx_n=\psi ^{-}_{n-1}x^{\prime +}_n-\psi ^{+}_{n-1} x^{\prime -}_n, $$

we get

$$\begin{aligned} \Vert Ux_n-U_nx_n\Vert _\mathrm{Y}\le (1+\Vert P_{2n}\Vert _{\mathrm{Y}\rightarrow \mathrm{Y}}) \Vert Ux_n-\bar{U}_nx_n\Vert _\mathrm{Y}. \end{aligned}$$
(23)

Further, we have

$$\begin{aligned} \Vert Ux_n-\bar{U}_nx_n\Vert _\mathrm{Y}\le \Vert (\psi ^{-}-\psi ^{-}_{n-1}) x^{\prime +}_n\Vert _\mathrm{Y}+ \Vert (\psi ^{+}-\psi ^{+}_{n-1}) x^{\prime -}_n\Vert _\mathrm{Y}. \end{aligned}$$
(24)

We estimate each summand on the right-hand side of (24), using Lemma 1, as follows:

$$\begin{aligned} \Vert (\psi ^{\mp }-\psi ^{\mp }_{n-1})x^{\prime {\pm }}_n\Vert _\mathrm{Y}\le \Vert \psi ^{\mp }-\psi ^{\mp }_{n-1}\Vert _\mathrm{Y}\Vert x^{\prime {\pm }}_n\Vert _\mathrm{Y} \le cn^{-\alpha +\beta }\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(25)

Now, using (24), (25), and (5), we can rewrite inequality (23) as

$$\begin{aligned} \Vert Ux_n-U_nx_n\Vert _\mathrm{Y}\le c(n^{-\alpha +\beta }\ln n)\Vert x_n\Vert _\mathrm{X}. \end{aligned}$$
(26)

Finally, using estimates (15), (16), (22), and (26), we get

$$ \Vert K-K_n\Vert _{\mathrm{X}_n\rightarrow \mathrm{Y}}\le cn^{-\alpha +\beta }\ln n. $$

Since the operators Q and K are both invertible and the inverse operator \(Q^{-1}\) is bounded, we have

$$\begin{aligned} \Vert K^{-1}\Vert _{\mathrm{Y}\rightarrow \mathrm{X}}\le \Vert v\Vert _\mathrm{Y}\Vert Q^{-1}\Vert _ {\mathrm{Y}\rightarrow \mathrm{X}}\le c. \end{aligned}$$
(27)

Hence there exists \(n_0\in {\mathbb N}\) such that for all \(n\in {\mathbb N}\), \(n\ge n_0,\)

$$ \Vert K^{-1}\Vert _{\mathrm{Y}\rightarrow \mathrm{X}}\Vert K-K_n\Vert _{\mathrm{X}_n\rightarrow \mathrm{Y}} \le cn^{-\alpha +\beta }\ln n\le \frac{1}{2}. $$

For such n, according to Theorem 1.1 of the paper [16], the inverse operators \(K_n^{-1}:\mathrm{Y}_n\rightarrow \mathrm{X}_n\) exist and are bounded. Moreover, for the right-hand sides of Eqs. (11) and (12), using condition A1 of Theorem 1, Lemma 1, and estimate (5), we have

$$\begin{aligned} \Vert y-y_n\Vert _\mathrm{Y}=\Vert y-P_{2n}y\Vert _\mathrm{Y} \le cn^{-\alpha +\beta }\ln n. \end{aligned}$$
(28)

Now, using the corollary of Theorem 1.2 of [16], for the solutions \(x^*\) and \(x^*_n\) of Eqs. (11) and (12), taking into account (27) and (28), we find

$$ \Vert x^{*}-x^{*}_n\Vert _\mathrm{X}\le cn^{-\alpha +\beta }\ln n. $$

Theorem 1 is proved.        \(\square \)

Corollary 1

Let, under the conditions of Theorem 1, the functions \(a_{\nu }\), \(b_{\nu }\), \(h_{\nu }\) (in both variables), \(\nu =0,1\), and y belong to \(\mathrm{H}^r_{\alpha }\), \(r\in {\mathbb N}\). Then the approximate solutions \(x_n^{*}\) converge to the exact solution \(x^{*}\) of Eq. (1) as \(n\rightarrow \infty \) in the norm of the space \(\mathrm{H}^1_{\beta }\) as follows:

$$\begin{aligned} \Vert x^{*}-x^{*}_n\Vert _{\mathrm{H}^1_{\beta }}\le cn^{-r-\alpha +\beta }\ln n, \quad r+\alpha >\beta . \end{aligned}$$
(29)

Proof

Using Theorem 6 of [15], we can write

$$\begin{aligned} \Vert x^{*}-x^{*}_n\Vert _\mathrm{X}\le (1+\Vert K^{-1}_nP_{2n}K\Vert ) \Vert x^{*}-\bar{x}_n\Vert _\mathrm{X}+\Vert K^{-1}_n\Vert \Vert K_n \bar{x}_n - P_{2n}K\bar{x}_n\Vert _\mathrm{Y}, \end{aligned}$$
(30)

where \(\bar{x}_n\) is an arbitrary element of the space \(\mathrm{X}_n\). Under the conditions of Corollary 1 the solution \(x^{*}\) of Eq. (1) is such that \(x^{*\prime }\in \mathrm{H}^{r}_{\alpha }\) for \(0<\alpha <1\) and \(x^{*(r+1)}\in Z\) for \(\alpha =1\) (Z denotes the Zygmund class of functions). Then, taking for \(\bar{x}_n\in {\mathscr {T}}_n\) the polynomial of best uniform approximation of the function \(x^{*}\) and using Lemma 1, for the first summand on the right-hand side of (30) we obtain

$$\begin{aligned} (1+\Vert K^{-1}_nP_{2n}K\Vert )\Vert x^{*}-\bar{x}_n\Vert _\mathrm{X}\le cn^{-r-\alpha +\beta }\ln n. \end{aligned}$$
(31)

Taking into account the structural properties of the functions \(h_{\nu }(t,\tau ), \ \nu =0,1,\) in the variable \(\tau \), the error estimates for the quadrature formulae, Lemma 2, and estimate (5), for the second summand on the right-hand side of inequality (30) we get

$$\begin{aligned} \Vert K_n\bar{x}_n - P_{2n}K\bar{x}_n\Vert _\mathrm{Y} = \Vert W_n\bar{x}_n - P_{2n}W\bar{x}_n\Vert _\mathrm{Y}\le \end{aligned}$$
(32)
$$ \le \Vert P_{2n}\sum _{\nu =0}^1J_0(\bar{x}^{(\nu )}_n (h_{\nu } - P^{\tau }_{2n}h_{\nu }))\Vert _\mathrm{Y} \le $$
$$ \le c(n^{\beta }\ln n)\Vert \sum _{\nu =0}^1J_0 (\bar{x}^{(\nu )}_n(h_{\nu } - P^{\tau }_{2n}h_{\nu }))\Vert _\mathrm{C} \le c(n^{-r-\alpha +\beta })\ln n \Vert \bar{x}_n\Vert _\mathrm{X}. $$

Now, substituting estimates (31) and (32) into (30) and taking into account that

$$ \Vert \bar{x}_n\Vert _\mathrm{X} \le \Vert x^{*}\Vert _\mathrm{X} +\Vert x^{*} - \bar{x}_n\Vert _\mathrm{X} \le \Vert x^{*}\Vert _\mathrm{X} + cn^{-r-\alpha +\beta }, $$

we get estimate (29). Corollary 1 is proved.        \(\square \)