1 INTRODUCTION

Singular integral equations are widely used in many areas of mathematics. Their range of applications in mechanics and engineering is well known: the theories of elasticity and thermoelasticity, hydrodynamics, and aerodynamics. In recent years, singular integral equations have become a major tool for the mathematical modeling of problems in electrodynamics.

However, singular integrals can be evaluated and singular integral equations solved in closed form only in exceptional cases, so numerical methods are the main tool for applied problems. Well-known works in this area are those by Lifanov, Gabdulkhaev, Boikov, Sanikidze, and others (see [1–4]). These authors mainly constructed discrete solutions in the form of tables of values of the unknown function. However, it is often necessary to find the solution at an arbitrary point of the integration interval. Solutions of this type were first constructed by Pashkovskii (see [5, pp. 332–349]), who used Chebyshev polynomials for integral equations.

In this paper, a computational scheme based on Chebyshev polynomials of the second kind is proposed for the approximate solution of singular integral equations in the class of functions vanishing at the endpoints of the interval. Note that series expansions of functions in Chebyshev polynomials usually converge much faster than expansions in other polynomial systems. This is confirmed by numerous examples, some of which are given in this work.

2 COMPUTATIONAL SCHEME

We consider a singular integral equation of the form

$${{\mathbb{K}}_{0}}{{\varphi }_{0}} \equiv \frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{{{{\varphi }_{0}}(t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,K(x,t){{\varphi }_{0}}(t)dt = f(x),\quad - {\kern 1pt} 1 < x < 1,$$
(1)

where \(K(x,t)\) and \(f(x)\) are given continuously differentiable functions on the interval \([ - 1,1]\) and \({{\varphi }_{0}}(t)\) is the unknown function.

A solution is sought in the class of functions with zero values at the endpoints of the integration interval \([ - 1,1]\) (see [6, 7]). This means that \({{\varphi }_{0}}(t) = \sqrt {1 - {{t}^{2}}} \varphi (t)\); therefore, we consider the equation

$$\mathbb{K}\varphi \equiv \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{\varphi (t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} K(x,t)\varphi (t)dt = f(x).$$
(2)

As is known (see [1, 7]), Eq. (2) has a unique solution under the condition

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}(f(t) - \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{\tau }^{2}}} K(t,\tau )\varphi (\tau )d\tau )dt = 0.$$
(3)

It is also known (see [8]) that the second-kind Chebyshev polynomials

$$\begin{gathered} {{U}_{n}}(t) = \frac{{\sin(n + 1)\arccos t}}{{\sqrt {1 - {{t}^{2}}} }},\quad n = 0,1,2, \ldots , \\ {{U}_{0}}(t) = 1,\quad {{U}_{1}}(t) = 2t,\quad {{U}_{2}}(t) = 4{{t}^{2}} - 1,\quad {{U}_{3}}(t) = 8{{t}^{3}} - 4t, \ldots , \\ \end{gathered} $$

are orthogonal on the interval \([ - 1,1]\) with the weight function \(p(t) = \sqrt {1 - {{t}^{2}}} \) and it holds that

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} {{U}_{n}}(t){{U}_{m}}(t)dt = \left\{ \begin{gathered} 0\quad {\text{if}}\;n \ne m, \hfill \\ 1\quad {\text{if}}\;n = m. \hfill \\ \end{gathered} \right.$$
(4)

Using the theory of Chebyshev series (see [5, pp. 104–173]), we then have the representations

$$\varphi (t) = \sum\limits_{k = 0}^\infty \,{{a}_{k}}{{U}_{k}}(t),\quad {{a}_{k}} = \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \varphi (t){{U}_{k}}(t)dt,\quad k = 0,1, \ldots ,$$
$$\begin{gathered} f(x) = \sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(x),\quad {{d}_{i}} = \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{x}^{2}}} f(x){{U}_{i}}(x)dx,\quad i = 0,1, \ldots , \\ K(x,t) = \sum\limits_{i = 0}^\infty \,{{U}_{i}}(x)\sum\limits_{j = 0}^\infty \,{{c}_{{ij}}}{{U}_{j}}(t), \\ \end{gathered} $$
(5)
$${{c}_{{ij}}} = \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} {{U}_{j}}(t)\left( {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{x}^{2}}} K(x,t){{U}_{i}}(x)dx} \right)dt,\quad i,j = 0,1, \ldots \;.$$

The coefficients \({{d}_{i}}\) (\(i = 0,1, \ldots \)) and \({{c}_{{ij}}}\) (\(i,j = 0,1, \ldots \)) can be computed from (5) either exactly or approximately, using Gaussian quadrature formulas of the highest algebraic degree of accuracy (see [9]). The coefficients \({{a}_{0}},{{a}_{1}}, \ldots \) are unknown, since the function \(\varphi (t)\) is unknown.
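
For orientation, the following minimal sketch (our own illustration, not code from the paper) shows how the coefficients \({{d}_{i}}\) and \({{c}_{{ij}}}\) in (5) could be approximated with the Gauss–Chebyshev quadrature rule of the second kind; the helper names (cheb_u, u_coeffs, kernel_coeffs) and the number of nodes M are our own choices.

```python
import numpy as np

def cheb_u(k, x):
    """Chebyshev polynomial of the second kind, U_k(x) = sin((k + 1) arccos x) / sin(arccos x)."""
    theta = np.arccos(np.clip(x, -1.0, 1.0))
    return np.sin((k + 1) * theta) / np.sin(theta)   # interior points only: sin(theta) > 0

def nodes_weights(M):
    """Nodes and weights of the M-point Gauss-Chebyshev rule for the weight sqrt(1 - t^2)."""
    theta = np.arange(1, M + 1) * np.pi / (M + 1)
    return np.cos(theta), np.pi / (M + 1) * np.sin(theta) ** 2

def u_coeffs(func, n, M=200):
    """Approximate a_k = (2/pi) * int_{-1}^{1} sqrt(1 - t^2) func(t) U_k(t) dt for k = 0..n."""
    t, w = nodes_weights(M)
    return np.array([(2.0 / np.pi) * np.sum(w * func(t) * cheb_u(k, t)) for k in range(n + 1)])

def kernel_coeffs(K, n, M=200):
    """Approximate the double expansion coefficients c_ij of K(x, t) for i, j = 0..n."""
    t, w = nodes_weights(M)
    U = np.array([cheb_u(k, t) for k in range(n + 1)])   # U[k, m] = U_k(t_m)
    A = U * w                                            # A[k, m] = w_m * U_k(t_m)
    Kmat = K(t[:, None], t[None, :])                     # Kmat[p, q] = K(x_p, t_q)
    return (2.0 / np.pi) ** 2 * A @ Kmat @ A.T           # entry [i, j] approximates c_ij
```

For polynomial data such as the test examples in Section 4, this rule is exact up to rounding once \(2M - 1\) exceeds the degree of the integrand.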

Substituting expansions (5) of \(\varphi (t)\), \(f(x)\), and \(K(x,t)\) into (2), we obtain

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{1}{{t - x}}\sum\limits_{k = 0}^\infty \,{{a}_{k}}{{U}_{k}}(t)dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \left( {\sum\limits_{i = 0}^\infty \,{{U}_{i}}(x)\sum\limits_{j = 0}^\infty \,{{c}_{{ij}}}{{U}_{j}}(t)} \right)\sum\limits_{k = 0}^\infty \,{{a}_{k}}{{U}_{k}}(t)dt = \sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(x).$$
(6)

The double series \(\sum\nolimits_{i = 0}^\infty \,{{U}_{i}}(x)\sum\nolimits_{j = 0}^\infty \,{{c}_{{ij}}}{{U}_{j}}(t)\) converges uniformly (see [5, pp. 111, 112]); therefore, the order of summation can be changed.

It is known that (see [10, p. 85])

$$\frac{1}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{{{U}_{k}}(t)}}{{t - x}}dt = - {{T}_{{k + 1}}}(x),$$

where \({{T}_{{k + 1}}}(t) = \cos(k + 1)\arccos t\) (\(k = 0,1, \ldots \)) are the first-kind Chebyshev polynomials.
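
For instance, for \(k = 0\) this gives the classical formula (the integral being understood in the Cauchy principal value sense)

$$\frac{1}{\pi }\int\limits_{ - 1}^1 \,\frac{{\sqrt {1 - {{t}^{2}}} }}{{t - x}}dt = - x = - {{T}_{1}}(x),\quad - 1 < x < 1.$$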

Using (4), we can rewrite (6) as

$$ - 2\sum\limits_{k = 0}^\infty \,{{a}_{k}}{{T}_{{k + 1}}}(x) + \sum\limits_{k = 0}^\infty \,{{a}_{k}}\sum\limits_{i = 0}^\infty \,{{c}_{{ik}}}{{U}_{i}}(x) = \sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(x).$$
(7)

Expanding \( - 2{{T}_{{k + 1}}}(x)\) in a series in terms of the Chebyshev polynomials of the second kind, we have

$$ - 2{{T}_{{k + 1}}}(x) = \sum\limits_{i = 0}^\infty \,{{b}_{{ik}}}{{U}_{i}}(x),$$

where

$${{b}_{{ik}}} = - 2\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{x}^{2}}} {{T}_{{k + 1}}}(x){{U}_{i}}(x)dx = \left\{ \begin{gathered} 0\quad {\text{ if}}\quad i = 0,1, \ldots ,k - 2,k,k + 2, \ldots , \hfill \\ 1\quad {\text{ if}}\quad i = k - 1, \hfill \\ - 1\quad {\text{if}}\quad i = k + 1. \hfill \\ \end{gathered} \right.$$
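
These values follow from the identity \({{T}_{{k + 1}}}(x) = \tfrac{1}{2}({{U}_{{k + 1}}}(x) - {{U}_{{k - 1}}}(x))\) (with \({{U}_{{ - 1}}} \equiv 0\)) combined with orthonormality relation (4):

$${{b}_{{ik}}} = - \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{x}^{2}}} \left( {{{U}_{{k + 1}}}(x) - {{U}_{{k - 1}}}(x)} \right){{U}_{i}}(x)dx = {{\delta }_{{i,k - 1}}} - {{\delta }_{{i,k + 1}}}.$$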

Then Eq. (7) takes the form

$$\sum\limits_{k = 0}^\infty \,{{a}_{k}}\sum\limits_{i = 0}^\infty \,{{b}_{{ik}}}{{U}_{i}}(x) + \sum\limits_{k = 0}^\infty \,{{a}_{k}}\sum\limits_{i = 0}^\infty \,{{c}_{{ik}}}{{U}_{i}}(x) = \sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(x)$$

or

$$\sum\limits_{i = 0}^\infty \,\left( {\sum\limits_{k = 0}^\infty \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}})} \right){{U}_{i}}(x) = \sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(x).$$

It follows that

$$\sum\limits_{k = 0}^\infty \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}}) = {{d}_{i}},\quad i = 0,1, \ldots \;.$$
(8)

This is a system of linear algebraic equations with respect to the unknowns \({{a}_{0}}\), \({{a}_{1}}\), \( \ldots \) .

We now consider condition (3). It can be similarly represented in the form

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}\left( {\sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(t) - \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{\tau }^{2}}} \sum\limits_{i = 0}^\infty \,{{U}_{i}}(t)\sum\limits_{j = 0}^\infty \,{{c}_{{ij}}}{{U}_{j}}(\tau )\sum\limits_{k = 0}^\infty \,{{a}_{k}}{{U}_{k}}(\tau )d\tau } \right)dt = 0.$$

Since the second-kind Chebyshev polynomials are orthonormal with respect to the weight \(\sqrt {1 - {{t}^{2}}} \), that is, in view of formula (4), we have

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}\left( {\sum\limits_{i = 0}^\infty \,{{d}_{i}}{{U}_{i}}(t) - \sum\limits_{k = 0}^\infty \,{{a}_{k}}\sum\limits_{i = 0}^\infty \,{{c}_{{ik}}}{{U}_{i}}(t)} \right)dt = 0.$$

Combining this equality and (8) yields the following system of linear algebraic equations of infinite order with infinitely many unknowns:

$$\begin{gathered} \sum\limits_{k = 0}^\infty \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}}) = {{d}_{i}},\quad i = 0,1, \ldots , \hfill \\ \sum\limits_{j = 0}^\infty \,\left( {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}\left( {{{d}_{j}} - \sum\limits_{k = 0}^\infty \,{{a}_{k}}{{c}_{{jk}}}} \right){{U}_{j}}(t)dt} \right) = 0. \hfill \\ \end{gathered} $$
(9)

Truncating the sums at \(n\), we obtain the approximate system

$$\begin{gathered} \sum\limits_{k = 0}^n \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}}) = {{d}_{i}},\quad i = 0,1, \ldots ,n - 1, \hfill \\ \sum\limits_{j = 0}^n \,\left( {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}\left( {{{d}_{j}} - \sum\limits_{k = 0}^n \,{{a}_{k}}{{c}_{{jk}}}} \right){{U}_{j}}(t)dt} \right) = 0. \hfill \\ \end{gathered} $$
(10)

Computing the integral (by the substitution \(t = \cos \theta \) and the known value of the Dirichlet-type integral \(\int_0^\pi \sin ((j + 1)\theta )/\sin \theta \,d\theta \), equal to \(\pi \) for even \(j\) and to 0 for odd \(j\))

$${{g}_{j}} = \frac{2}{\pi }\int\limits_{ - 1}^1 \,\frac{1}{{\sqrt {1 - {{t}^{2}}} }}{{U}_{j}}(t)dt = \left\{ \begin{gathered} 0\quad {\text{if}}\quad j = 2m - 1, \hfill \\ 2\quad {\text{if}}\quad j = 2m, \hfill \\ \end{gathered} \right.$$

we simplify system (10) and obtain

$$\begin{gathered} \sum\limits_{k = 0}^n \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}}) = {{d}_{i}},\quad i = 0,1, \ldots ,n - 1, \hfill \\ \sum\limits_{k = 0}^n \,{{a}_{k}}{{G}_{k}} = H, \hfill \\ \end{gathered} $$
(11)

where

$${{G}_{k}} = \sum\limits_{j = 0}^n \,{{g}_{j}}{{c}_{{jk}}},\quad H = \sum\limits_{j = 0}^n \,{{g}_{j}}{{d}_{j}}.$$
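
As an illustration (a sketch of ours, not the authors' implementation), system (11) can be assembled directly from the quantities defined above; the arrays c and d are assumed to contain the coefficients \({{c}_{{ij}}}\) and \({{d}_{i}}\), for instance as produced by the helpers sketched after (5).

```python
import numpy as np

def solve_system_11(c, d):
    """Assemble and solve system (11).

    c -- (n+1) x (n+1) array of the coefficients c_ij,
    d -- vector of the coefficients d_0, ..., d_n.
    Returns the coefficients a_0, ..., a_n.
    """
    n = c.shape[0] - 1
    # b_ik: +1 if i = k - 1, -1 if i = k + 1, 0 otherwise (i, k = 0, ..., n)
    b = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        if k >= 1:
            b[k - 1, k] = 1.0
        if k + 1 <= n:
            b[k + 1, k] = -1.0
    g = np.array([2.0 if j % 2 == 0 else 0.0 for j in range(n + 1)])  # g_j from above
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[:n, :] = (b + c)[:n, :]      # equations i = 0, ..., n - 1
    rhs[:n] = d[:n]
    A[n, :] = g @ c                # G_k = sum_j g_j c_jk
    rhs[n] = g @ d                 # H = sum_j g_j d_j
    return np.linalg.solve(A, rhs)
```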

If the functions \(f(x)\) and \(K(x,t)\) satisfy the conditions

$$\int\limits_{ - 1}^1 \,\frac{{f(x)}}{{\sqrt {1 - {{x}^{2}}} }}dx = 0,\quad \int\limits_{ - 1}^1 \,\frac{{K(x,t)}}{{\sqrt {1 - {{x}^{2}}} }}dx = 0,$$

then condition (3) is satisfied identically (see [4]); thus, to solve Eq. (2) approximately, it suffices to solve only the system

$$\sum\limits_{k = 0}^n \,{{a}_{k}}({{b}_{{ik}}} + {{c}_{{ik}}}) = {{d}_{i}},\quad i = 0,1, \ldots ,n.$$
(12)

After this system is solved for the unknowns \({{a}_{0}}\), \({{a}_{1}}\), \( \ldots \), \({{a}_{n}}\), an approximate solution is given by

$$\varphi (t) \approx {{\varphi }_{n}}(t) = \sum\limits_{k = 0}^n \,{{a}_{k}}{{U}_{k}}(t).$$
(13)
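
Once the coefficients \({{a}_{k}}\) are found, approximation (13) can be evaluated at any point of \([ - 1,1]\). A minimal sketch (the function name eval_phi is ours) using the recurrence \({{U}_{0}}(t) = 1\), \({{U}_{1}}(t) = 2t\), \({{U}_{{k + 1}}}(t) = 2t{{U}_{k}}(t) - {{U}_{{k - 1}}}(t)\):

```python
import numpy as np

def eval_phi(a, t):
    """Evaluate phi_n(t) = sum_{k=0}^{n} a_k U_k(t) via the three-term recurrence."""
    t = np.asarray(t, dtype=float)
    u_prev, u_curr = np.ones_like(t), 2.0 * t   # U_0(t), U_1(t)
    total = a[0] * u_prev
    if len(a) > 1:
        total = total + a[1] * u_curr
    for k in range(2, len(a)):
        u_prev, u_curr = u_curr, 2.0 * t * u_curr - u_prev
        total = total + a[k] * u_curr
    return total
```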

3 JUSTIFICATION OF THE COMPUTATIONAL SCHEME

We first note that the computational scheme is justified in a way similar to [11].

Let \(X\) denote the space of functions of the form \({{\varphi }_{0}}(t) = \sqrt {1 - {{t}^{2}}} \varphi (t)\), where \(\varphi (t)\) is a continuously differentiable function on the interval \([ - 1,1]\) whose derivative belongs to the Hölder class \(H(\alpha )\), \(0 < \alpha \leqslant 1\). The norm in \(X\) is defined as

$$\left\| {{{\varphi }_{0}}(t)} \right\| = {{\left\| {\varphi (t)} \right\|}_{{C[ - 1,1]}}} + \mathop {\sup}\limits_{{{t}_{1}} \ne {{t}_{2}}} \frac{{\left| {\varphi ({{t}_{1}}) - \varphi ({{t}_{2}})} \right|}}{{{{{\left| {{{t}_{1}} - {{t}_{2}}} \right|}}^{\beta }}}},\quad 0 < \beta < \alpha .$$
(14)

Let \({{X}_{n}}\) denote the subspace of \(X\) consisting of functions \({{\varphi }_{{0n}}}(t) = \sqrt {1 - {{t}^{2}}} {{\varphi }_{n}}(t)\), where \({{\varphi }_{n}}(t) = \sum\nolimits_{k = 0}^n \,{{\alpha }_{k}}{{U}_{k}}(t)\) is a polynomial of degree at most \(n\). The norm in \({{X}_{n}}\) is defined by (14).

Let \(Y\) denote the space of Hölder continuous functions \(y(t)\) defined on the interval \([ - 1,1]\) with the norm

$$\left\| {y(t)} \right\| = \mathop {\max}\limits_{ - 1 \leqslant t \leqslant 1} \left| {y(t)} \right| + \mathop {\sup}\limits_{{{t}_{1}} \ne {{t}_{2}}} \frac{{\left| {y({{t}_{1}}) - y({{t}_{2}})} \right|}}{{{{{\left| {{{t}_{1}} - {{t}_{2}}} \right|}}^{\beta }}}},\quad 0 < \beta < \alpha .$$

Let \({{Y}_{n}}\) denote the space of polynomials of the form \({{y}_{n}}(t) = \sum\nolimits_{k = 0}^n \,{{\alpha }_{k}}{{U}_{k}}(t)\) with the norm

$$\left\| {{{y}_{n}}(t)} \right\| = \mathop {\max}\limits_{ - 1 \leqslant t \leqslant 1} \left| {{{y}_{n}}(t)} \right| + \mathop {\sup}\limits_{{{t}_{1}} \ne {{t}_{2}}} \frac{{\left| {{{y}_{n}}({{t}_{1}}) - {{y}_{n}}({{t}_{2}})} \right|}}{{{{{\left| {{{t}_{1}} - {{t}_{2}}} \right|}}^{\beta }}}},\quad 0 < \beta < \alpha .$$

Let \({{P}_{n}}\) denote the projector from \(Y\) to \({{Y}_{n}}\) defined by the formula \({{y}_{n}}(t) = {{P}_{n}}[y(t)]\) and from \(X\) to \({{X}_{n}}\) defined by the formula \({{P}_{n}}[{{\varphi }_{0}}(t)] = \sqrt {1 - {{t}^{2}}} {{P}_{n}}[\varphi (t)]\). Here, \({{P}_{n}}[y(t)]\) is the projection onto the set of polynomials of degree \(n\) of the form \(\sum\nolimits_{k = 0}^n \,{{\alpha }_{k}}{{U}_{k}}(t)\). It is known (see [8, 12]) that \(\left\| {{{P}_{n}}} \right\| \leqslant C\ln n\) in the space \(C[ - 1,1]\), where \(C = {\text{const}}\).
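
In computational terms, such a projection amounts to truncating the expansion in second-kind Chebyshev polynomials after the term of degree \(n\), which is what the passage from (9) to (10) does. A rough sketch, reusing the hypothetical helpers u_coeffs and eval_phi from Section 2 and assuming this truncation realization of \({{P}_{n}}\):

```python
def project(y, n, M=200):
    """P_n[y]: degree-n partial sum of the expansion of y in second-kind Chebyshev polynomials."""
    a = u_coeffs(y, n, M)               # expansion coefficients of y (see the Section 2 sketch)
    return lambda t: eval_phi(a, t)     # y_n(t) = sum_{k=0}^{n} a_k U_k(t)
```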

We need to prove that the operator \(\mathbb{K}\) acts from \(X\) to \(Y\).

This is obvious, since, according to the properties of singular operators (see [6]), if \(K(x,t) \in H(\alpha )\) and \(\varphi (t) \in H(\alpha )\), then

$$\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{\varphi (t)}}{{t - x}}dt \in H(\alpha )\quad {\text{and}}\quad \int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} K(x,t)\varphi (t)dt \in H(\alpha ),$$

that is, \(\mathbb{K}\varphi \in H(\alpha )\).

Assume that there exists an inverse operator \({{\mathbb{K}}^{{ - 1}}}\) acting from \(Y\) to \(X\).

The approximate equation corresponding to (2) can be written as

$$\mathbb{K}{{\varphi }_{n}} \equiv \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{{{\varphi }_{n}}(t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} K(x,t){{\varphi }_{n}}(t)dt = f(x).$$
(15)

Then the system of linear algebraic equations with respect to the unknown coefficients \({{a}_{0}}\), \({{a}_{1}}\), \( \ldots \), \({{a}_{n}}\) can be written as

$${{\mathbb{K}}_{n}}{{\varphi }_{n}} = {{P}_{n}}\left[ {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{{{\varphi }_{n}}(t)}}{{t - x}}dt} \right] + {{P}_{n}}\left[ {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} K(x,t){{\varphi }_{n}}(t)dt} \right] = {{P}_{n}}[f(x)].$$
(16)

We estimate the norm of the difference

$$\left\| {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} (K(x,t) - K_{n}^{x}(x,t)){{\varphi }_{n}}(t)dt} \right\|,$$

where \(K_{n}^{x}(x,t)\) is the best uniform approximation polynomial in \(x\) of degree \(n\) for \(K(x,t)\). It is evident that

$$\begin{gathered} {{\left\| {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} (K(x,t) - K_{n}^{x}(x,t)){{\varphi }_{n}}(t)dt} \right\|}_{{C[ - 1,1]}}} \\ \, \leqslant \max\left| {K(x,t) - K_{n}^{x}(x,t)} \right|\max\left| {{{\varphi }_{n}}(t)} \right| \leqslant \bar {E}_{n}^{x}(K(x,t))\left\| {{{\varphi }_{n}}(t)} \right\|, \\ \end{gathered} $$

where \(\bar {E}_{n}^{x}(K(x,t)) = \mathop {\sup}\limits_{ - 1 \leqslant t \leqslant 1} E_{n}^{x}(K(x,t))\) and \(E_{n}^{x}(K(x,t))\) is the error of the best uniform approximation of \(K(x,t)\) with respect to \(x\) by polynomials of degree \(n\).

Repeating the proof of Bernstein’s inverse theorem (see [12, p. 165]), we can show that

$$\left\| {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} (K(x,t) - K_{n}^{x}(x,t)){{\varphi }_{n}}(t)dt} \right\| \leqslant C{{n}^{\beta }}\bar {E}_{n}^{x}(K(x,t))\left\| {{{\varphi }_{n}}(t)} \right\|,$$

where \(C = {\text{const}}\) is independent of \(n\).

It follows from the general theory of approximate methods (see [13]) that, for \(n\) such that

$$q = C{{n}^{\beta }}\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|\bar {E}_{n}^{x}(K(x,t))\ln n < 1,$$

system (16) is uniquely solvable, the operator \({{\mathbb{K}}_{n}}\) is continuously invertible, and

$$\left\| {\varphi - {{{\bar {\varphi }}}_{n}}} \right\| \leqslant C{{n}^{\beta }}\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|\bar {E}_{n}^{x}(K(x,t))\ln n,$$
(17)

where \(\varphi (t)\) and \({{\bar {\varphi }}_{n}}(t)\) are the solutions of Eqs. (2) and (16), respectively.

We now use the mechanical quadrature method for singular integral equation (2). It has the operator form

$${{P}_{n}}\left[ {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{{{\varphi }_{n}}(t)}}{{t - x}}dt} \right] + {{P}_{n}}\left[ {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} P_{n}^{t}[K(x,t)]{{\varphi }_{n}}(t)dt} \right] = {{P}_{n}}[f(x)].$$
(18)

Reasoning in a similar manner and applying the collocation method, we can rewrite (18) in the form

$${{\bar {\mathbb{K}}}_{n}}{{\varphi }_{n}} = \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{{{\varphi }_{n}}(t)}}{{t - x}}dt + {{P}_{n}}\left[ {\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} P_{n}^{t}[K(x,t)]{{\varphi }_{n}}(t)dt} \right] = {{P}_{n}}[f(x)].$$
(19)

Estimating the difference \({{\left\| {{{\mathbb{K}}_{n}}{{\varphi }_{n}} - {{{\bar {\mathbb{K}}}}_{n}}{{\varphi }_{n}}} \right\|}_{{C[ - 1,1]}}}\) and using Bernstein’s inverse theorem (see [12, p. 165]), we obtain

$$\left\| {{{\mathbb{K}}_{n}}{{\varphi }_{n}} - {{{\bar {\mathbb{K}}}}_{n}}{{\varphi }_{n}}} \right\| \leqslant C{{n}^{\beta }}\bar {E}_{n}^{t}(K(x,t))\ln n\left\| {{{\varphi }_{n}}} \right\|.$$

The Banach theorem (see [13]) implies that, for \(n\) such that

$${{q}_{n}} = C{{n}^{\beta }}\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|\bar {E}_{n}^{t}(K(x,t))\ln n < 1,$$

the operator \({{\bar {\mathbb{K}}}_{n}}\) is continuously invertible and

$$\left\| {{{\varphi }_{n}} - {{{\bar {\varphi }}}_{n}}} \right\| \leqslant C{{n}^{\beta }}\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|\bar {E}_{n}^{t}(K(x,t))\ln n.$$
(20)

Thus, we have proved the following assertion.

Theorem. Assume that the operator \(\mathbb{K}\) is continuously invertible and the functions \(K(x,t)\) and \(f(x)\) are continuously differentiable and belong to the Hölder class \(H(\alpha )\), \(0 < \alpha \leqslant 1\). Then for \(n\) such that

$$C\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|(\bar {E}_{n}^{x}(K(x,t)) + \bar {E}_{n}^{t}(K(x,t))){{n}^{\beta }}\ln n < 1,$$

system (11) has a unique solution and

$$\left\| {\varphi - {{\varphi }_{n}}} \right\| \leqslant C\left\| {{{\mathbb{K}}^{{ - 1}}}} \right\|(\bar {E}_{n}^{x}(K(x,t)) + \bar {E}_{n}^{t}(K(x,t))){{n}^{\beta }}\ln n.$$
(21)

If \(K(x,t)\) and \(f(x)\) have continuous derivatives of orders up to \(r - 1\) \((r \geqslant 1)\) and the derivatives of order \(r\) belong to the Hölder class \(H(\alpha )\), \(0 < \alpha \leqslant 1\), then it follows from (21) and the inequality \(\bar {E}_{n}^{x}(K(x,t)) \leqslant O\left( {\tfrac{1}{{{{n}^{{r + \alpha }}}}}} \right)\) (see [12, p. 138]) that

$$\left\| {\varphi - {{\varphi }_{n}}} \right\| = O\left( {\frac{{\ln n}}{{{{n}^{{r + \alpha - \beta }}}}}} \right),\quad 0 < \beta < \alpha .$$

4 TEST EXAMPLES

We consider the following equations:

1.

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{\varphi (t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} ({{x}^{3}} + tx)\varphi (t)dt = - 2x + {{x}^{3}}.$$

Here, additional condition (3) is automatically satisfied; therefore, the equation has the unique solution \(\varphi (t) = 1.\)

2.

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{\varphi (t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} ({{x}^{3}} + 4tx)\varphi (t)dt = - 2{{x}^{2}} + 1 + x.$$

Here, additional condition (3) is also satisfied; thus, the equation has the unique solution \(\varphi (t) = t\).

3.

$$\frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} \frac{{\varphi (t)}}{{t - x}}dt + \frac{2}{\pi }\int\limits_{ - 1}^1 \,\sqrt {1 - {{t}^{2}}} (4{{x}^{3}} + tx)\varphi (t)dt = x - {{x}^{3}}.$$

Here, additional condition (3) is satisfied as well, and the equation has the unique solution \(\varphi (t) = {{t}^{2}}\).

Table 1 presents the expansion coefficients obtained by solving the system of linear algebraic equations (12) for each example with \(n = 10\).

Table 1. Coefficients of the solution expansion

| Coefficient | Example 1: solution \(\varphi (t) = 1\) | Example 2: solution \(\varphi (t) = t\) | Example 3: solution \(\varphi (t) = {{t}^{2}}\) |
| --- | --- | --- | --- |
| \({{a}_{0}}\) | 0.9999999 | 9.685755E-08 | 0.2499998 |
| \({{a}_{1}}\) | -6.81494E-08 | 0.5000001 | -2.384186E-07 |
| \({{a}_{2}}\) | -3.583727E-08 | -1.48749E-08 | 0.25 |
| \({{a}_{3}}\) | -3.927067E-08 | 6.214358E-08 | -4.396021E-08 |
| \({{a}_{4}}\) | -3.965056E-09 | -1.40607E-08 | 9.742919E-09 |
| \({{a}_{5}}\) | -1.961513E-08 | 6.712615E-08 | -9.037535E-09 |
| \({{a}_{6}}\) | 5.037663E-09 | -6.926157E-09 | 2.199927E-08 |
| \({{a}_{7}}\) | -2.116015E-08 | 5.175345E-08 | 2.166799E-08 |
| \({{a}_{8}}\) | 2.833207E-09 | 3.793914E-09 | -7.175324E-09 |
| \({{a}_{9}}\) | -5.829515E-08 | 1.990343E-08 | 9.166978E-09 |
| Approximate solution of the equation | \(\varphi (t) \approx \sum\limits_{k = 0}^{9} \,{{a}_{k}}{{U}_{k}}(t) \approx 1\) | \(\varphi (t) \approx \sum\limits_{k = 0}^{9} \,{{a}_{k}}{{U}_{k}}(t) \approx t\) | \(\varphi (t) \approx \sum\limits_{k = 0}^{9} \,{{a}_{k}}{{U}_{k}}(t) \approx {{t}^{2}}\) |
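
For completeness, here is a rough end-to-end sketch of Example 1 assembled from the hypothetical helpers of Section 2 (u_coeffs, kernel_coeffs, eval_phi); it only indicates how numbers of the kind shown in Table 1 can be reproduced and is not the authors' code.

```python
import numpy as np

# Example 1: K(x, t) = x^3 + t x, f(x) = -2x + x^3, exact solution phi(t) = 1.
f = lambda x: -2.0 * x + x ** 3
K = lambda x, t: x ** 3 + t * x

n = 9                                   # ten coefficients a_0, ..., a_9, as in Table 1
d = u_coeffs(f, n)
c = kernel_coeffs(K, n)

# f and K satisfy the conditions preceding (12), so the square system (12) suffices.
b = np.zeros((n + 1, n + 1))            # b_ik: +1 at i = k - 1, -1 at i = k + 1
for k in range(n + 1):
    if k >= 1:
        b[k - 1, k] = 1.0
    if k < n:
        b[k + 1, k] = -1.0
a = np.linalg.solve(b + c, d)

print(np.round(a, 7))                               # a_0 close to 1, the rest close to 0
print(eval_phi(a, np.array([-0.5, 0.0, 0.7])))      # approximately [1., 1., 1.]
```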

The results obtained in this paper show that the constructed computational scheme is easy to implement and provides good accuracy.