1 INTRODUCTION

The object of study is the linear integral equation of the third kind with fixed singularities in the kernel (ETKFS):

$$Ax \equiv x(t)\prod\limits_{j = 1}^l {{{{(t - {{t}_{j}})}}^{{{{m}_{j}}}}}} + \int\limits_{ - 1}^1 {K(t,s){{{\left[ {{{{\left( {s + 1} \right)}}^{{{{p}_{1}}}}}{{{\left( {1 - s} \right)}}^{{{{p}_{2}}}}}} \right]}}^{{ - 1}}}x(s)ds = y(t)} ,$$
(1.1)

where

$$t \in I \equiv [ - 1,1],\quad {{t}_{j}} \in \left( { - 1,1} \right),\quad {{m}_{j}} \in N\quad (j = \overline {1,l} );\quad {{p}_{1}},{{p}_{2}} \in {{R}^{ + }},$$

\(K\) and \(y\) are known continuous functions having certain pointwise smoothness, \(x(t)\) is the sought function, and the integral is understood as a Hadamard finite part integral [1, p. 144–150]. Equations of the form (1.1) are increasingly widely used both in theory and in applications. Equations of this kind arise in a number of important problems in the theories of elasticity, neutron transport, and particle scattering (see, e.g., [2, 3] and the references in [2, 4]), as well as in the theories of singular integral equations with a degenerate symbol [5] and partial differential equations of mixed type [6]. In these problems, as a rule, the natural classes of solutions of integral equations of the third kind (ETK) are special spaces of generalized functions (SGFs) of type \(D\) or \(V\). By \(D\) (respectively, \(V\)) we mean the SGF constructed on the basis of the Dirac delta functional (respectively, the Hadamard finite part integral). The ETKFSs under study can be solved exactly only in very rare special cases; therefore, the development of theoretically substantiated efficient methods for their approximate solution in SGFs is a relevant and actively developing area of mathematical analysis and computational mathematics. A number of results in this field were obtained in [7–12], where special direct polynomial methods for solving ETKFSs of the form (1.1) in SGFs of types \(D\) and \(V\) were proposed and substantiated.

In this paper, for the approximate solution of ETKFS (1.1) in an SGF of type \(D\), we propose a new version of the generalized collocation method based on cubic splines of minimal defect. Its theoretical substantiation in the sense of [13, Ch. 1] is carried out, and it is shown that this method is optimal in order of accuracy on a certain class \(F\) of smooth functions among all direct projection methods for solving the equations under study in SGFs.

2 ON THE SPACES OF TEST AND GENERALIZED FUNCTIONS

Let \(C \equiv C(I)\) be the space of functions continuous on \(I\) with the usual max-norm and \(m \in N\). Following [14], we say that a function \(f \in C\) belongs to the class \(C\left\{ {m;0} \right\} \equiv C_{0}^{{\left\{ m \right\}}}(I)\) if, at the point \(t = 0\), there is an \(m\)th-order Taylor derivative \({{f}^{{\left\{ m \right\}}}}(0)\) (naturally, we assume that \(C\left\{ {0;0} \right\} \equiv C\)). The set \(C\left\{ {m;0} \right\}\) is called the class of pointwise smooth functions with the characteristic operator \(T:C\left\{ {m;0} \right\} \to C\) defined by the rule

$$Tf \equiv \left[ {f(t) - \sum\limits_{i = 0}^{m - 1} {{{f}^{{\left\{ i \right\}}}}(0)\frac{{{{t}^{i}}}}{{i!}}} } \right]{{t}^{{ - m}}} \equiv F(t) \in C\quad \left( {F(0) \equiv \mathop {\lim }\limits_{t \to 0} F(t)} \right).$$

With the norm

$${{\left\| f \right\|}_{{C\left\{ {m;0} \right\}}}} \equiv {{\left\| {Tf} \right\|}_{C}} + \sum\limits_{i = 0}^{m - 1} {\left| {{{f}^{{\left\{ i \right\}}}}(0)} \right|} $$

the space \(C\left\{ {m;0} \right\}\) is complete and normally embedded in \(C\) (see, e.g., [15, Ch. 1, Section 2]).
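As a simple illustration, take \(f(t) = \cos t\) and \(m = 2\): then \({{f}^{{\left\{ 0 \right\}}}}(0) = 1,\) \({{f}^{{\left\{ 1 \right\}}}}(0) = 0,\) and

$$Tf = \frac{{\cos t - 1}}{{{{t}^{2}}}} = F(t),\quad F(0) = - \frac{1}{2},\quad {{\left\| f \right\|}_{{C\left\{ {2;0} \right\}}}} = {{\left\| F \right\|}_{C}} + 1 = \frac{3}{2},$$

since \(|F(t)|\) attains its maximum \(1{\text{/}}2\) at \(t = 0\).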

Next, let \(p \in {{R}^{ + }}\) and \(g \in C\). Following [14], we will write \(g \in C\left\{ {p;1} \right\} \equiv C_{1}^{{\left\{ p \right\}}}(I)\) if there exist left Taylor derivatives \({{g}^{{\left\{ j \right\}}}}(1)\quad \left( {j = \overline {1,\left[ p \right]} } \right)\) at the point \(t = 1\) and, for \(p \ne \left[ p \right]\) (\(\left[ {\;} \right]\) is the integer part of a number), there is a finite limit

$$\mathop {\lim }\limits_{t \to 1 - } \left\{ {\left[ {g(t) - \sum\limits_{j = 0}^{\left[ p \right]} {{{g}^{{\left\{ j \right\}}}}(1){{{(t - 1)}}^{j}}} {\text{/}}j!} \right]{{{(1 - t)}}^{{ - p}}}} \right\}.$$

We supply the vector space \(C\left\{ {p;1} \right\}\) with the norm

$${{\left\| g \right\|}_{{\left\{ p \right\}}}} \equiv {{\left\| g \right\|}_{{C\left\{ {p;1} \right\}}}} \equiv {{\left\| {Sg} \right\|}_{C}} + \sum\limits_{i = 0}^\lambda {\left| {{{g}^{{\left\{ i \right\}}}}(1)} \right|} {\kern 1pt} ,$$
(2.1)

where

$$\begin{gathered} Sg \equiv \left[ {g(t) - \sum\limits_{i = 0}^\lambda {{{g}^{{\left\{ i \right\}}}}(1)\frac{{{{{(t - 1)}}^{i}}}}{{i!}}} } \right]{{(1 - t)}^{{ - p}}} \equiv G(t) \in C, \\ \lambda = \lambda (p) \equiv \left[ p \right] - (1 + \operatorname{sign} (\left[ p \right] - p)),\quad G(1) \equiv \mathop {\lim }\limits_{t \to 1 - } G(t). \\ \end{gathered} $$
(2.2)

Note that elements of the space \(C\left\{ {p;1} \right\}\) are functions of the form

$$g(t) = {{(1 - t)}^{p}}G(t) + \sum\limits_{i = 0}^\lambda {{{b}_{i}}} {{(t - 1)}^{i}},$$
(2.3)

where \(Sg = G(t) \in C,\) \({{g}^{{\left\{ i \right\}}}}(1) = {{b}_{i}}\,i!\) \((i = \overline {0,\lambda } ).\) Hence, the space \(C\left\{ {p;1} \right\}\) with norm (2.1) is complete and embedded in \(C\).
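For instance, for \(p = 3{\text{/}}2\) we have \(\left[ p \right] = 1 \ne p\) and \(\lambda = 1 - (1 + \operatorname{sign} (1 - 3{\text{/}}2)) = 1\), so every \(g \in C\left\{ {3{\text{/}}2;1} \right\}\) has the form \(g(t) = {{(1 - t)}^{{3/2}}}G(t) + {{b}_{0}} + {{b}_{1}}(t - 1)\) with \(G \in C\); for the integer value \(p = 2\), we likewise get \(\lambda = \left[ p \right] - 1 = 1\), while \(p = 1\) gives \(\lambda = 0\).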

Now we form the test space for our studies:

$$Y \equiv C_{{0;1}}^{{\left\{ m \right\};\left\{ p \right\}}}(I) \equiv \left\{ {\left. {y \in C\left\{ {m;0} \right\}} \right|Ty \in C\left\{ {p;1} \right\}} \right\}.$$

We define the norm in it as

$${{\left\| y \right\|}_{Y}} \equiv {{\left\| {Ty} \right\|}_{{\left\{ p \right\}}}} + \sum\limits_{i = 0}^{m - 1} {\left| {{{y}^{{\left\{ i \right\}}}}(0)} \right|} \quad \left( {y \in Y} \right).$$
(2.4)

Lemma 2.1 (see [7]). 1) The test functions have the following structure:

$$\varphi \in Y \Leftrightarrow \varphi (t) = \left( {UV\Phi } \right)(t) + {{t}^{m}}\sum\limits_{j = 0}^\lambda {{{\alpha }_{j}}} {{(t - 1)}^{j}} + \sum\limits_{i = 0}^{m - 1} {{{e}_{i}}} {{t}^{i}},$$
(2.5)

where \(\Phi \in C,\) \({{\alpha }_{j}} \in R,\) \(j = \overline {0,\lambda } ,\) \({{e}_{i}} \in R,\) \(i = \overline {0,m - 1} ,\) and \(ST\varphi = \Phi ,\;\) \({{(T\varphi )}^{{\left\{ j \right\}}}}(1) = {{\alpha }_{j}}j!\) \((\forall j),\) \({{\varphi }^{{\left\{ i \right\}}}}(0) = {{e}_{i}}\,i!\) \((\forall i);\) \(Uf \equiv {{t}^{m}}f(t),\) \(Vf \equiv {{(1 - t)}^{p}}f(t);\)

2) the space \(Y\) is complete in norm (2.4) and embedded in \(C\left\{ {m;0} \right\}.\)

Let \(h \in C({{I}^{2}})\) and, for each fixed \(s \in I\), the function \(h(t,s)\) belong to the space \(C\left\{ {p;1} \right\}\). We will say that \(h \in C_{t}^{{\left\{ p \right\}}}({{I}^{2}})\) if \({{S}_{t}}h \in C\), where \({{S}_{t}}\) denotes operator (2.2) applied with respect to the variable \(t\). In the same way, we define the class \(C_{s}^{{\left\{ p \right\}}}({{I}^{2}})\). Then,

$$C_{1}^{{\left\{ p \right\}}}({{I}^{2}}) \equiv C_{t}^{{\left\{ p \right\}}}({{I}^{2}}) \cap C_{s}^{{\left\{ p \right\}}}({{I}^{2}}).$$

Now, over the space of test functions Y, we construct a family \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\) of generalized functions \(x(t)\) of the form

$$x(t) \equiv z(t) + \sum\limits_{i = 0}^{m - 1} {{{\gamma }_{i}}{{\delta }^{{\left\{ i \right\}}}}} (t),$$
(2.6)

where \(t \in I,\) \(z \in C\left\{ {p;1} \right\},\) \({{\gamma }_{i}} \in R\) are arbitrary constants, and \(\delta \) and \({{\delta }^{{\left\{ i \right\}}}}\) are, respectively, the Dirac delta function and its Taylor derivatives acting on the space \(Y\) of test functions according to the rule

$$\left( {{{\delta }^{{\left\{ i \right\}}}},y} \right) \equiv \int\limits_{ - 1}^1 {{{\delta }^{{\left\{ i \right\}}}}} \left( t \right)y(t)dt \equiv {{( - 1)}^{i}}{{y}^{{\left\{ i \right\}}}}(0),\quad y \in Y,\quad i = \overline {0,m - 1} .$$

It is clear that the vector space \(X\) with the norm

$${{\left\| x \right\|}_{X}} \equiv {{\left\| z \right\|}_{{\left\{ p \right\}}}} + \sum\limits_{i = 0}^{m - 1} {\left| {{{\gamma }_{i}}} \right|} $$
(2.7)

is a Banach space.
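For example, with \(m = 1\) the element \(x(t) = {{(1 - t)}^{p}} + \delta (t)\) belongs to \(X\): here \(z(t) = {{(1 - t)}^{p}},\) so that \(Sz \equiv 1,\) \({{z}^{{\left\{ i \right\}}}}(1) = 0\) \((i = \overline {0,\lambda } ),\) \({{\gamma }_{0}} = 1,\) and hence \({{\left\| x \right\|}_{X}} = {{\left\| z \right\|}_{{\left\{ p \right\}}}} + \left| {{{\gamma }_{0}}} \right| = 1 + 1 = 2.\)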

3 ON THE SPLINE APPROXIMATION OF POINTWISE SMOOTH FUNCTIONS

Let us consider the approximation of the elements of the test space \(Y \equiv C_{{0;1}}^{{\left\{ m \right\};\left\{ p \right\}}}(I)\) using cubic splines.

We define on \(I\) a uniform grid

$${{\Delta }_{n}}: - 1 \equiv {{s}_{0}} < {{s}_{1}} < ... < {{s}_{n}} \equiv 1,\quad n = 2,3,...,$$
(3.1)

where \({{s}_{k}} \equiv - 1 + 2k{\text{/}}n,\) \(k = \overline {0,n} ,\) and consider on it a cubic spline

$${{z}_{n}}(t) \equiv \sum\limits_{i = - 1}^{n + 1} {{{c}_{i}}{{B}_{i}}(t)} ,\quad {{c}_{i}} \in R,$$

satisfying the boundary conditions

$$z_{n}^{{(3)}}\left( {{{s}_{j}} - 0} \right) = z_{n}^{{(3)}}\left( {{{s}_{j}} + 0} \right),\quad j = 1,n - 1.$$
(3.2)

Here, the basis functions \({{B}_{i}}(t)\) are \(B\)-splines with support \(({{s}_{{i - 2}}},{{s}_{{i + 2}}})\) (see, e.g., [16, Ch. 3, Section 8]). To determine all the functions \({{B}_{i}}(t)\), we supplement grid (3.1) with uniformly spaced nodes \({{s}_{{ - 3}}} < {{s}_{{ - 2}}} < {{s}_{{ - 1}}} < {{s}_{0}} \equiv - 1\) and \(1 \equiv {{s}_{n}} < {{s}_{{n + 1}}} < {{s}_{{n + 2}}} < {{s}_{{n + 3}}}\). We denote by \(S_{n}^{3}\) the space of all cubic splines \({{z}_{n}}(t)\) on the grid \({{\Delta }_{n}}\) that possess property (3.2), equipped with the norm \({{\left\| {{{z}_{n}}} \right\|}_{C}}\). Next, we denote by \({{P}_{n}}:C \to S_{n}^{3}\) the operator that assigns to each function \(f \in C\) its interpolating cubic spline \({{P}_{n}}f \in S_{n}^{3}\) subject to conditions (3.2), so that \(\left( {{{P}_{n}}f} \right)({{s}_{i}}) = f({{s}_{i}}),\) \(i = \overline {0,n} \). In [16, Ch. 3, Section 1, Theorem 3.1], the existence and uniqueness of the interpolating cubic spline are proved under various boundary conditions, and an algorithm for constructing such splines is presented. It is also specially noted there [16, Ch. 3, Section 5] that, for approximation by cubic splines, the choice of boundary conditions (3.2) is the most appropriate.

Theorems 9, 10, and 13 in [17, Ch. 2, Section 4] and the corresponding result in [18] (see Lemma 2 in [18]) imply the following result.

Lemma 3.1. Let \(r = \overline {1,4} \) and \(f \in {{C}^{{(r)}}} \equiv {{C}^{{(r)}}}(I)\). Then,

$${{\left\| {{{P}_{n}}f - f} \right\|}_{C}} = O({{n}^{{ - r}}}),\quad n \to \infty .$$
(3.3)
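Estimate (3.3) is easy to observe numerically. Below is a minimal sketch, assuming that conditions (3.2), i.e., continuity of the third derivative at the first and last interior knots, are realized by the standard "not-a-knot" option of SciPy's cubic spline; the test function \(f\) is our own illustrative choice.

```python
# Minimal numerical check of estimate (3.3) for r = 4, assuming that
# SciPy's "not-a-knot" cubic spline realizes the operator P_n:
# conditions (3.2) require continuity of the third derivative at the
# first and last interior knots, i.e., the not-a-knot end conditions.
import numpy as np
from scipy.interpolate import CubicSpline

f = np.cos                              # a test function f in C^{(4)}(I)
tt = np.linspace(-1.0, 1.0, 20001)      # fine grid for the max-norm

for n in (8, 16, 32, 64, 128):
    s = np.linspace(-1.0, 1.0, n + 1)   # uniform grid (3.1)
    Pnf = CubicSpline(s, f(s), bc_type='not-a-knot')
    err = np.max(np.abs(Pnf(tt) - f(tt)))   # ~ ||P_n f - f||_C
    print(n, err, err * n**4)           # err * n^4 stays bounded, as in (3.3)
```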

Let \({{\Pi }_{q}} \equiv {\text{span}}\{ {{t}^{i}}\} _{0}^{q}\) be the class of all algebraic polynomials of degree at most \(q\). Denote by \({{Y}_{n}} \equiv {\text{span}}\left\{ {UV{{B}_{i}}} \right\}_{{ - 1}}^{{n + 1}} \oplus {{\Pi }_{{m + \lambda }}}\) the \((n + m + \lambda + 4)\)-dimensional subspace of \(Y\), and introduce the operator \({{\Gamma }_{n}} \equiv {{\Gamma }_{{n + m + \lambda + 4}}}:Y \to {{Y}_{n}}\) that assigns to each function \(y \in Y\) the generalized spline \({{\Gamma }_{n}}y \in {{Y}_{n}}\) defined by the conditions

$$(ST{{\Gamma }_{n}}y)({{s}_{i}}) = (STy)({{s}_{i}}),\quad i = \overline {0,n} ,$$
$${{(T{{\Gamma }_{n}}y)}^{{\left\{ j \right\}}}}(1) = {{(Ty)}^{{\left\{ j \right\}}}}(1),\quad j = \overline {0,\lambda } ,$$
$${{({{\Gamma }_{n}}y)}^{{\left\{ i \right\}}}}(0) = {{y}^{{\left\{ i \right\}}}}(0),\quad i = \overline {0,m - 1} ,$$
$${{(ST{{\Gamma }_{n}}y)}^{{(3)}}}({{s}_{j}} - 0) = {{(ST{{\Gamma }_{n}}y)}^{{\left( 3 \right)}}}({{s}_{j}} + 0),\quad j = 1,n - 1.$$

Following the reasoning in [15, Ch. 1, Section 5, 5.3], it is easy to obtain the representation

$${{\Gamma }_{n}}y \equiv {{\Gamma }_{{n + m + \lambda + 4}}}(y;t) = (UV{{P}_{n}}STy)(t) + {{t}^{m}}\sum\limits_{j = 0}^\lambda {{{{(Ty)}}^{{\left\{ j \right\}}}}} (1){{(t - 1)}^{j}}{\text{/}}j{\kern 1pt} !\; + \;\sum\limits_{i = 0}^{m - 1} {{{y}^{{\left\{ i \right\}}}}(0){{t}^{i}}} {\text{/}}i{\kern 1pt} !.$$
(3.4)
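To make representation (3.4) concrete, here is a minimal sketch for the illustrative case \(m = 1,\) \(p = 1{\text{/}}2\) (so \(\lambda = 0\)); by (2.5), a test function then has the form \(y(t) = t{{(1 - t)}^{{1/2}}}\Phi (t) + {{\alpha }_{0}}t + {{e}_{0}}\) with \(\Phi = STy \in C\), and the choices of \(\Phi ,\) \({{\alpha }_{0}},\) \({{e}_{0}}\) below are our own.

```python
# Sketch of the generalized interpolation spline (3.4) for m = 1, p = 1/2
# (lambda = 0). Here Gamma_n y = t*(1-t)^{1/2} * (P_n ST y) + alpha0*t + e0,
# and P_n is again assumed to be SciPy's not-a-knot cubic spline.
import numpy as np
from scipy.interpolate import CubicSpline

Phi = np.sin                      # Phi = STy; smooth, so Theorem 1 gives O(n^{-4})
alpha0, e0 = 2.0, -1.0            # alpha0 = (Ty)(1), e0 = y(0); illustrative values

def y(t):                         # test function of the form (2.5)
    return t * (1.0 - t)**0.5 * Phi(t) + alpha0 * t + e0

n = 32
s = np.linspace(-1.0, 1.0, n + 1)
PnPhi = CubicSpline(s, Phi(s), bc_type='not-a-knot')    # P_n (ST y)

def Gamma_n_y(t):                 # representation (3.4)
    return t * (1.0 - t)**0.5 * PnPhi(t) + alpha0 * t + e0

tt = np.linspace(-1.0, 1.0, 5001)
print(np.max(np.abs(Gamma_n_y(tt) - y(tt))))   # proportional to ||P_n Phi - Phi||_C
```

Note that, by the proof of Theorem 1 below, the error of \({{\Gamma }_{n}}y\) in the norm of \(Y\) is exactly \({{\left\| {{{P}_{n}}STy - STy} \right\|}_{C}}\); the uniform check above is only a convenient proxy.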

Lemma 3.2. \({{\Gamma }_{n}}\) is a projector in the space Y.

By virtue of (3.4) and \(P_{n}^{2} = {{P}_{n}}\), this lemma can also be proved similarly to Lemma 1.5.1 in [15, Ch. 1, Section 5]; the role of the operators \(U\) and \(T\) in Lemma 1.5.1 is played by \(UV\) and \(ST\), respectively.

Henceforward, we will use the following notation:

$$Y{{C}^{{(r)}}} \equiv \left\{ {y \in Y\,{\text{|}}\,STy \in {{C}^{{(r)}}}} \right\},$$

where \(r = 0,1,2,...;\) \(Y{{C}^{{(0)}}} \equiv Y.\)

The following theorem characterizes the rate of convergence of the generalized interpolation splines to the interpolated function.

Theorem 1. If \(y \in Y{{C}^{{(r)}}},\,\,r = \overline {1,4} ,\) then

$${{\left\| {{{\Gamma }_{n}}y - y} \right\|}_{Y}} = O({{n}^{{ - r}}}),\quad n \to \infty .$$
(3.5)

Proof. By virtue of (2.5), (3.4), (2.4), (2.1), and Lemma 3.1, we successively find

$${{\left\| {{{\Gamma }_{n}}y - y} \right\|}_{Y}} = {{\left\| {UV({{P}_{n}}STy - STy)} \right\|}_{Y}} \equiv {{\left\| {V({{P}_{n}}STy - STy)} \right\|}_{{\left\{ p \right\}}}} \equiv {{\left\| {{{P}_{n}}STy - STy} \right\|}_{C}} = O({{n}^{{ - r}}}).$$

Remark 1. Obviously, estimate (3.5) and the well-known Banach–Steinhaus theorem imply the uniform boundedness of the norms of the operators \({{\Gamma }_{n}}\): \(\left\| {{{\Gamma }_{n}}} \right\| = O(1)\), \(n \to \infty \).

4 GENERALIZED COLLOCATION METHOD WITH CUBIC SPLINES (GCMCS)

Consider ETKFS (1.1). To avoid cumbersome calculations and to simplify the formulations, without loss of generality for the methods and results, we will henceforward assume that \(l = 1,\) \({{t}_{1}} = 0,\) and \({{p}_{1}} = 0\), i.e., consider an equation of the form

$$Ax \equiv {{t}^{m}}x(t) + \int\limits_{ - 1}^1 {K(t,s){{{(1 - s)}}^{{ - p}}}} x(s)ds = y(t)\quad (t \in I),$$
(4.1)

where \(m \in N,\) \(p \in {{R}^{ + }},\) \(y \in Y\), and \(K\) is a known continuous function satisfying the conditions

$$K \in C_{s}^{{\left\{ p \right\}}}({{I}^{2}}),\quad K(t, \cdot ) \in Y,\quad {{\psi }_{j}}(t) \equiv K_{s}^{{\left\{ j \right\}}}(t,1) \in Y,$$
$${{\tau }_{i}}(t) \equiv K_{s}^{{\left\{ i \right\}}}(t,0) \in Y,\quad u \equiv {{S}_{s}}K \in C_{t}^{{\left\{ m \right\}}}({{I}^{2}}),\quad {{\theta }_{i}}(s) \equiv u_{t}^{{\left\{ i \right\}}}(0,s) \in C,$$
(4.2)
$$v \equiv {{T}_{t}}u \in C_{t}^{{\left\{ p \right\}}}({{I}^{2}}),\quad {{\varphi }_{j}}(s) \equiv v_{t}^{{\left\{ j \right\}}}(1,s) \in C,\quad j = \overline {0,\lambda } ,\quad i = \overline {0,m - 1} ;$$

and \(x \in X\) is the sought generalized function. The Fredholm property of the operator \(A:X \to Y\) and sufficient conditions for its continuous invertibility were established in [7], where a method for finding the exact solution of ETKFS (4.1) in the class \(X\) was also described.

We construct an approximate solution of Eq. (4.1) in the form

$$\begin{gathered} {{x}_{n}} \equiv {{x}_{n}}(t;\left\{ {{{c}_{k}}} \right\}) \equiv {{f}_{n}}(t) + \sum\limits_{i = 0}^{m - 1} {{{c}_{{i + \lambda + n + 3}}}} {{\delta }^{{\left\{ i \right\}}}}(t), \\ {{f}_{n}}(t) \equiv {{(1 - t)}^{p}}{{z}_{n}}(t) + \sum\limits_{i = 0}^\lambda {{{c}_{{i + n + 2}}}} {{(t - 1)}^{i}}, \\ \end{gathered} $$
(4.3)

where \({{z}_{n}}(t) \equiv \sum\nolimits_{i = - 1}^{n + 1} {{{c}_{i}}} {{B}_{i}}(t)\) is the cubic spline considered above in Section 3. Following our GCMCS, we find the set \(\left\{ {{{c}_{k}}} \right\}_{{ - 1}}^{{n + m + \lambda + 2}}\) of unknown parameters from a square system of linear algebraic equations (SLAE) of order \(n + m + \lambda + 4\):

$$\begin{gathered} (ST{{\rho }_{n}})({{s}_{i}}) = 0,\quad i = \overline {0,n} ,\quad {{(T{{\rho }_{n}})}^{{\left\{ j \right\}}}}(1) = 0,\quad j = \overline {0,\lambda } ,\quad \rho _{n}^{{\left\{ i \right\}}}(0) = 0,\quad i = \overline {0,m - 1} , \\ {{(STU{{x}_{n}})}^{{\left( 3 \right)}}}({{s}_{k}} - 0) = {{(STU{{x}_{n}})}^{{\left( 3 \right)}}}({{s}_{k}} + 0),\quad k = 1,n - 1, \\ \end{gathered} $$
(4.4)

where \({{\rho }_{n}}(t) \equiv \rho _{n}^{A}(t) \equiv (A{{x}_{n}} - y)(t)\) is the residual of the approximate solution and \(\left\{ {{{s}_{i}}} \right\}_{0}^{n}\) is the previously used system of collocation nodes generating grid (3.1).
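For orientation: in the illustrative case \(m = 1,\) \(p = 1{\text{/}}2\) (so \(\lambda = 0\)), the approximate solution (4.3) takes the form \({{x}_{n}}(t) = {{(1 - t)}^{{1/2}}}{{z}_{n}}(t) + {{c}_{{n + 2}}} + {{c}_{{n + 3}}}\delta (t)\) with \(n + 5\) unknowns \({{c}_{{ - 1}}},...,{{c}_{{n + 3}}}\), and SLAE (4.4) supplies exactly \((n + 1) + 1 + 1 + 2 = n + 5\) conditions.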

In the substantiation of the proposed method, a useful role is played by the functions

$${{\Psi }_{i}}(t) \equiv \sum\limits_{l = 0}^i {\left( {\begin{array}{*{20}{c}} i \\ l \end{array}} \right)} {\kern 1pt} {{\tau }_{l}}(t)\prod\limits_{k = 0}^{i - l - 1} {\left( {p + k} \right)} ,\quad i = \overline {0,m - 1} .$$
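In particular, \({{\Psi }_{0}} = {{\tau }_{0}}\) (the product over the empty index range is taken to be 1) and \({{\Psi }_{1}} = p{{\tau }_{0}} + {{\tau }_{1}}\); for \(m \leqslant 2\), these are the only functions of this family that occur.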

For the computational algorithm (4.1)–(4.4), we have the following theorem.

Theorem 2. Suppose that a homogeneous ETKFS \(Ax = 0\) has only a zero solution in \(X\) (e.g., under the conditions of Theorem 3 in [7]) and the initial data are such that \(u \equiv {{S}_{s}}K\) (with respect to \(t\)), \({{\psi }_{j}},{{\Psi }_{i}},y \in Y{{C}^{{(r)}}},\) \(r = \overline {1,4} ,\) \(j = \overline {0,\lambda } ,\) \(i = \overline {0,m - 1} .\) Then, for all \(n \in N,\) \(n \geqslant {{n}_{0}}\), SLAE (4.4) has a unique solution \(\{ c_{k}^{*}\} \) and the sequence of approximate solutions \(x_{n}^{*} \equiv {{x}_{n}}(t;\{ c_{k}^{*}\} )\) converges to the exact solution \(x\text{*} = {{A}^{{ - 1}}}y\) in the norm of the space \(X\) with a rate

$$\Delta x_{n}^{*} \equiv \left\| {x_{n}^{*} - x\text{*}} \right\| = O({{n}^{{ - r}}}),\quad r = \overline {1,4} .$$
(4.5)

Proof. ETKFS (4.1) is represented as a linear operator equation

$$Ax \equiv Ux + Kx = y\quad (x \in X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\},\quad y \in Y \equiv C_{{0;1}}^{{\left\{ m \right\};\left\{ p \right\}}}),$$
(4.6)

where the operator \(A:X \to Y\) is continuously invertible.

Let us also write system (4.3) and (4.4) in operator form. To this end, we construct the corresponding finite-dimensional subspaces

$$X \supset {{X}_{n}} \equiv V(S_{n}^{3}) \oplus {\text{span}}\left\{ {{{{(t - 1)}}^{i}}} \right\}_{0}^{\lambda } \oplus {\text{span}}\left\{ {{{\delta }^{{\left\{ i \right\}}}}(t)} \right\}_{0}^{{m - 1}},$$
$$Y \supset {{Y}_{n}} \equiv UV(S_{n}^{3}) \oplus {{\Pi }_{{m + \lambda }}}.$$

Then, following the reasoning in the proof of Theorem 4.3.1 (see [15, Ch. 4, Section 3]), it is easy to show that computational scheme (4.3) and (4.4) for the GCMCS is equivalent to the linear operator equation

$${{A}_{n}}{{x}_{n}} \equiv {{\Gamma }_{n}}A{{x}_{n}} = {{\Gamma }_{n}}y\quad ({{x}_{n}} \in {{X}_{n}},\;{{\Gamma }_{n}}y \in {{Y}_{n}}),$$
(4.7)

where \({{\Gamma }_{n}}:Y \to {{Y}_{n}}\) is the spline operator introduced and studied in detail in Section 3. Therefore, to prove Theorem 2, it suffices to prove the existence, uniqueness, and convergence of solutions of Eqs. (4.7).

Let us refine the structure of the approximating equation (4.7). Since, by Lemma 3.2, \(\Gamma _{n}^{2} = {{\Gamma }_{n}},\) we have \({{\Gamma }_{n}}U{{x}_{n}} = U{{x}_{n}} \in {{Y}_{n}}\) for any element \({{x}_{n}} \in {{X}_{n}}.\) Thus, system (4.3) and (4.4) is equivalent to a linear equation of the form

$${{A}_{n}}{{x}_{n}} \equiv U{{x}_{n}} + {{\Gamma }_{n}}K{{x}_{n}} = {{\Gamma }_{n}}y\quad ({{x}_{n}} \in {{X}_{n}},\;{{\Gamma }_{n}}y \in {{Y}_{n}}).$$
(4.8)

Let us now analyze the proximity of the operators \(A\) and \({{A}_{n}}\) in the subspace \({{X}_{n}}\). Using Eqs. (4.6) and (4.8), representations (2.5) and (3.4), and norms (2.4) and (2.1), for an arbitrary element \({{x}_{n}} \in {{X}_{n}}\), we find that

$${{\left\| {A{{x}_{n}} - {{A}_{n}}{{x}_{n}}} \right\|}_{Y}} = {{\left\| {K{{x}_{n}} - {{\Gamma }_{n}}K{{x}_{n}}} \right\|}_{Y}} = {{\left\| {STK{{x}_{n}} - {{P}_{n}}STK{{x}_{n}}} \right\|}_{C}}.$$
(4.9)

From (4.1), (4.2), and (2.6), we have

$$(Kx)(t) = \int\limits_{ - 1}^1 {u(t,s)z(s)ds + \sum\limits_{j = 0}^\lambda {{{\mu }_{j}}} } (z){{\psi }_{j}}(t) + \sum\limits_{i = 0}^{m - 1} {{{{( - 1)}}^{i}}} {{\gamma }_{i}}{{\Psi }_{i}}(t),$$

where

$${{\mu }_{j}}(z) \equiv \int\limits_{ - 1}^1 {(Sz)(s){{{(s - 1)}}^{j}}} \frac{1}{{j!}}ds + \sum\limits_{k = 0}^\lambda {{{z}^{{\left\{ k \right\}}}}} (1){{\beta }_{{jk}}},$$
$${{\beta }_{{jk}}} \equiv \int\limits_{ - 1}^1 {{{{(s - 1)}}^{{j + k}}}} \frac{1}{{j!k!}}{{(1 - s)}^{{ - p}}}ds,\quad j,k = \overline {0,\lambda } .$$
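Since \({{(s - 1)}^{{j + k}}} = {{( - 1)}^{{j + k}}}{{(1 - s)}^{{j + k}}}\), the coefficients \({{\beta }_{{jk}}}\) admit an explicit expression; for noninteger \(p\), evaluating the Hadamard finite part (by analytic continuation in the exponent) gives

$${{\beta }_{{jk}}} = \frac{{{{{( - 1)}}^{{j + k}}}{{2}^{{j + k - p + 1}}}}}{{(j + k - p + 1)\,j!\,k!}},\quad j,k = \overline {0,\lambda } ,$$

so all the \({{\beta }_{{jk}}}\) are finite, as required in estimate (4.11) below.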

Then, taking into account (4.3) and (4.2), we obtain

$$STK{{x}_{n}} = \int\limits_{ - 1}^1 {h\left( {t,s} \right){{f}_{n}}(s)ds + \sum\limits_{j = 0}^\lambda {{{\mu }_{j}}} } ({{f}_{n}}){{\alpha }_{j}}(t) + \sum\limits_{i = 0}^{m - 1} {{{{( - 1)}}^{i}}} {{c}_{{i + \lambda + n + 3}}}{{\beta }_{i}}(t),$$
(4.10)

where \(h \equiv {{S}_{t}}{v},\) \({{\alpha }_{j}} \equiv ST{{\psi }_{j}},\) and \({{\beta }_{i}} \equiv ST{{\Psi }_{i}},\) \(j = \overline {0,\lambda } ,\) \(i = \overline {0,m - 1} .\)

By virtue of (4.10), (3.3), and definition (2.7), we successively derive the following estimate (hereinafter, \({{d}_{i}},\;i = \overline {1,4} ,\) are certain constants independent of the natural number \(n\)):

$$\begin{gathered} {{\left\| {STK{{x}_{n}} - {{P}_{n}}STK{{x}_{n}}} \right\|}_{C}} = \mathop {\max }\limits_{t \in I} \left| {\int\limits_{ - 1}^1 {(h - P_{n}^{t}h)(t,s){{f}_{n}}(s)ds} \; + \sum\limits_j {{{\mu }_{j}}({{f}_{n}})({{\alpha }_{j}} - {{P}_{n}}{{\alpha }_{j}})(t)} } \right. \\ \left. { + \;\sum\limits_i {{{{( - 1)}}^{i}}{{c}_{{i + \lambda + n + 3}}}({{\beta }_{i}} - {{P}_{n}}{{\beta }_{i}})(t)} } \right| \leqslant 2{{\left\| {{{f}_{n}}} \right\|}_{C}}{{d}_{1}}{{n}^{{ - r}}} + \sum\limits_j {\left| {{{\mu }_{j}}({{f}_{n}})} \right|} \,{{d}_{1}}{{n}^{{ - r}}} + \sum\limits_i {\left| {{{c}_{{i + \lambda + n + 3}}}} \right|} \,{{d}_{1}}{{n}^{{ - r}}} \\ \leqslant {{2}^{{p + 1}}}{{\left\| {{{f}_{n}}} \right\|}_{{\left\{ p \right\}}}}{{d}_{1}}{{n}^{{ - r}}} + ({{2}^{{\lambda + 1}}} + \beta )\left( {\lambda + 1} \right){{\left\| {{{f}_{n}}} \right\|}_{{\left\{ p \right\}}}}{{d}_{1}}{{n}^{{ - r}}} + m{{\left\| {{{x}_{n}}} \right\|}_{X}}{{d}_{1}}{{n}^{{ - r}}} \leqslant {{d}_{2}}{{n}^{{ - r}}}{{\left\| {{{x}_{n}}} \right\|}_{X}}, \\ \end{gathered} $$
(4.11)

where \(P_{n}^{t}\) denotes the operator \({{P}_{n}}\) applied with respect to the variable \(t\), \({{d}_{2}} \equiv [{{2}^{{p + 1}}} + ({{2}^{{\lambda + 1}}} + \beta )(\lambda + 1) + m]{{d}_{1}},\) and \(\beta \equiv \mathop {\max }\limits_{0 \leqslant j,k \leqslant \lambda } \left| {{{\beta }_{{jk}}}} \right|.\)

Equality (4.9) and estimate (4.11) imply that

$${{\varepsilon }_{n}} \equiv {{\left\| {A - {{A}_{n}}} \right\|}_{{{{X}_{n}} \to Y}}} \leqslant {{d}_{2}}{{n}^{{ - r}}},\quad r = \overline {1,4} .$$
(4.12)

Then, due to inequalities (4.12) and (3.5), Theorem 7 (see [13, Ch. 1, Section 4]) implies the assertion of Theorem 2 with estimate (4.5). The theorem is proved.

In what follows, when optimizing direct projection methods for solving ETKFS (4.1), an essential role will be played by the following theorem.

Theorem 3. Suppose that ETKFS (4.1) has a solution of the form

$$x\text{*}(t) \equiv z\text{*}(t) + \sum\limits_{i = 0}^{m - 1} {\gamma _{i}^{*}} {{\delta }^{{\left\{ i \right\}}}}(t),\quad Sz\text{*} = STUx\text{*} \in {{C}^{{\left( r \right)}}},\quad r = \overline {1,4} ,$$
(4.13)

for a given \(y \in Y\), and that the corresponding approximating operator \({{A}_{n}}\) of the GCMCS is continuously invertible. Then, the error of the approximate solution \(x_{n}^{*} \in {{X}_{n}}\) for the right-hand side \({{\Gamma }_{n}}y \in {{Y}_{n}}\) can be represented as

$$\Delta x_{n}^{*} = O\left\{ {{{{\left\| {Sz\text{*} - \;{{P}_{n}}Sz\text{*}} \right\|}}_{C}}} \right\} = O({{n}^{{ - r}}}),\quad r = \overline {1,4} .$$
(4.14)

Proof. By virtue of Theorem 6 (see [13, Ch. 1, Section 3]) and the structure of the approximate equation (4.8), we have

$$\Delta x_{n}^{*} = O\left\{ {\left\| {{{\Gamma }_{n}}} \right\|\left\| {x\text{*}\; - \;{{x}_{n}}} \right\|} \right\},$$
(4.15)

where \({{x}_{n}} \in {{X}_{n}}\) is, for now, an arbitrary element. We choose it as follows:

$${{x}_{n}}(t) \equiv (V{{P}_{n}}STUx\text{*})(t) + \sum\limits_{j = 0}^\lambda {(TUx\text{*}} {{)}^{{\left\{ j \right\}}}}(1){{(t - 1)}^{j}}{\text{/}}j{\kern 1pt} !\; + \;\sum\limits_{i = 0}^{m - 1} {\gamma _{i}^{*}} {{\delta }^{{\left\{ i \right\}}}}(t).$$
(4.16)

Then, the required estimate (4.14) follows from (4.15), (4.13), (2.3), (4.16), (2.7), (2.1), and Lemma 3.1, taking into account Remark 1:

$$\Delta x_{n}^{*} \leqslant {{d}_{3}}{{\left\| {VSz\text{*}\; - V{{P}_{n}}Sz\text{*}} \right\|}_{X}} \equiv {{d}_{3}}{{\left\| {VSz\text{*}\; - V{{P}_{n}}Sz\text{*}} \right\|}_{{\left\{ p \right\}}}} \equiv {{d}_{3}}{{\left\| {Sz\text{*}\; - {{P}_{n}}Sz\text{*}} \right\|}_{C}} \leqslant {{d}_{4}}{{n}^{{ - r}}},\quad r = \overline {1,4} .$$

5 ON OPTIMIZATION OF THE DIRECT PROJECTION METHODS FOR SOLVING ETKFS

Let us first present the necessary definitions and the statement of the problem. Let \(X\) and \(Y\) be Banach spaces and \({{X}_{n}}\) and \({{Y}_{n}}\) be their corresponding arbitrary subspaces of the same dimension \(N = N(n) < + \infty ,\) \(n \in N,\) so that \(N \to \infty \) as \(n \to \infty \). We denote by \({{\Lambda }_{n}} \equiv \left\{ {{{\lambda }_{n}}} \right\}\) some set of linear operators \({{\lambda }_{n}}\) mapping \(Y\) onto \({{Y}_{n}}\). Next, we consider two classes of uniquely solvable linear operator equations

$$Ax = y,\quad x \in X,\quad y \in Y,$$
(5.1)

and

$${{\lambda }_{n}}A{{x}_{n}} = {{\lambda }_{n}}y,\quad {{x}_{n}} \in {{X}_{n}},\quad {{\lambda }_{n}} \in {{\Lambda }_{n}},\quad n \in N,$$
(5.2)

respectively. Let \(x\text{*} \in X\) and \(x_{n}^{*} \in {{X}_{n}}\) be solutions of Eqs. (5.1) and (5.2), respectively, and \(F \equiv \left\{ f \right\}\) be the class of coefficients (i.e., initial data) of Eq. (5.1), generating the class \(X\text{*} \equiv \left\{ {x\text{*}} \right\}\) of sought-for elements.

Following [13, Ch. 2, Section 1], the quantity

$${{V}_{N}}(F) \equiv \mathop {\inf }\limits_{{{X}_{n}},{{Y}_{n}}} \;\mathop {\inf }\limits_{{{\lambda }_{n}} \in {{\Lambda }_{n}}} V(F;{{\lambda }_{n}};{{X}_{n}},{{Y}_{n}}),$$
(5.3)

where

$$V(F;{{\lambda }_{n}};{{X}_{n}},{{Y}_{n}}) \equiv \mathop {\sup }\limits_{f \in F} V(f;{{\lambda }_{n}};{{X}_{n}},{{Y}_{n}}) = \mathop {\sup }\limits_{x\text{*} \in X\text{*}} {{\left\| {x\text{*} - \;x_{n}^{*}} \right\|}_{X}},$$

is called the optimal error estimate for all possible direct projection methods \(({{\lambda }_{n}} \in {{\Lambda }_{n}})\) for solving Eq. (5.1) on the class \(F\).

Definition 1 (see [13, Ch. 2, Section 1]). Let there exist subspaces \(X_{n}^{0} \subset X,\) \(Y_{n}^{0} \subset Y\) of dimension \(N = N(n) < + \infty \) and operators \(\lambda _{n}^{0}:Y \to Y_{n}^{0},\) \(\lambda _{n}^{0} \in {{\Lambda }_{n}},\) ensuring the condition

$${{V}_{N}}(F) \succ \prec V(F;\lambda _{n}^{0};X_{n}^{0},Y_{n}^{0})\quad \left( {N \to \infty } \right),$$
(5.4)

where the symbol \( \succ \prec \) means, as usual, weak equivalence. Then, method (5.1) and (5.2) for \({{X}_{n}} = X_{n}^{0},\;{{Y}_{n}} = Y_{n}^{0}\), and \({{\lambda }_{n}} = \lambda _{n}^{0}\) is called optimal in order of accuracy on the class \(F\) among all direct projection methods \({{\lambda }_{n}}\;\left( {{{\lambda }_{n}} \in {{\Lambda }_{n}}} \right)\) for solving Eqs. (5.1).

Let us now consider the optimization in order of accuracy on the class of uniquely solvable (uniformly in \(K \in F\)) ETKFSs (4.1) in the case when the initial data belong to the family \(Y{{C}^{{\left( r \right)}}},\) i.e., for \(u \equiv {{S}_{s}}K\) (with respect to \(t\)), \({{\psi }_{j}},\,j = \overline {0,\lambda } ,{{\Psi }_{i}},\) \(i = \overline {0,m - 1} ,\) \(y \in Y{{C}^{{(r)}}},\) \(r = \overline {1,4} .\) Then, by Theorem 3 from [7], we have

$$X\text{*} \equiv \left\{ {x\text{*} \in X\,{\text{|}}\,Ax\text{*} = y;\;u,{{\psi }_{j}},{{\Psi }_{i}},\;y \in Y{{C}^{{\left( r \right)}}}} \right\} = X{{C}^{{(r)}}},$$

where \(X{{C}^{{(r)}}} \equiv \left\{ {x \in X\,{\text{|}}\,STUx \in {{C}^{{\left( r \right)}}}} \right\}.\)

Next, let

$$X_{n}^{0} \equiv V(S_{n}^{3}) \oplus {\text{span}}\left\{ {{{{(t - 1)}}^{i}}} \right\}_{0}^{\lambda } \oplus {\text{span}}\left\{ {{{\delta }^{{\left\{ i \right\}}}}(t)} \right\}_{0}^{{m - 1}}{\kern 1pt} ,$$
$$Y_{n}^{0} \equiv UV(S_{n}^{3}) \oplus {{\Pi }_{{m + \lambda }}},$$

and \(\Lambda _{n}^{0} \equiv \left\{ {{{\lambda }_{n}}} \right\}\) be the family of all linear operators \({{\lambda }_{n}}:Y \to Y_{n}^{0}.\)

Theorem 4. Let \(F = Y{{C}^{{(r)}}},\) \({{\Lambda }_{n}} = \Lambda _{n}^{0}.\) Then,

$${{V}_{N}}(F) \succ \prec {{N}^{{ - r}}},\quad N = n + m + \lambda + 4,\quad r = \overline {1,4} ,$$
(5.5)

and this order, optimal in accuracy, is realized by the GCMCS.

Proof. Note that the definition of the Kolmogorov \(N\)th width \({{d}_{N}}(L,X)\) of a set \(L\) in the normed space \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\) (see, e.g., [19, Ch. 1, Section 1]) and Theorem 1.3.6 (see [4, Ch. 1, Section 1.3]) imply the equality

$${{d}_{N}}(L,X) = {{d}_{{N - m - \lambda }}}(STU(L),C),\quad N > m + \lambda ,$$

which, in turn, since \({{d}_{l}}\left( {{{C}^{{\left( r \right)}}},C} \right) \succ \prec {{l}^{{ - r}}}\;\left( {l \in N} \right)\) (see, e.g., [19, Ch. 3, Section 3]), implies the weak equivalence

$${{d}_{N}}\left( {X{{C}^{{\left( r \right)}}},X} \right) \succ \prec {{N}^{{ - r}}}.$$
(5.6)

It is known (see [13, Ch. 4, Section 2]) that \({{V}_{N}}(F) \geqslant {{d}_{N}}(X\text{*},X).\) Therefore, (5.6) implies

$${{V}_{N}}(F) \geqslant {{d}_{N}}(X{{C}^{{(r)}}},X) \succ \prec {{N}^{{ - r}}}.$$
(5.7)

On the other hand, according to (5.3) and Theorems 2 and 3, we find the estimate

$${{V}_{N}}(F) \leqslant \mathop {\sup }\limits_{x\text{*} \in X{{C}^{{\left( r \right)}}}} {{\left\| {x\text{*} - \;x_{n}^{*}} \right\|}_{X}} = O({{N}^{{ - r}}}),\quad x_{n}^{*} = A_{n}^{{ - 1}}{{\Gamma }_{n}}y.$$

Combining this with relations (5.7) and (5.4), we obtain the assertion of Theorem 4 with estimate (5.5).

6 CONCLUDING REMARKS

Remark 2. By virtue of the definition of the norm in \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\), it is easy to see that the convergence of a sequence \((x_{n}^{*})\) of approximate solutions to the exact solution \(x\text{*} = {{A}^{{ - 1}}}y\) in the metric of \(X\) implies convergence in the ordinary sense of the space of generalized functions, i.e., weak convergence.

Remark 3. When approximating solutions of operator equations \(Ax = y\), a natural question arises about the convergence rate of the residual of the method, \(\rho _{n}^{*}(t) \equiv (Ax_{n}^{*} - y)(t)\). One result in this direction is easily obtained as a simple corollary of Theorem 2: if the initial data of Eq. (4.1) belong to the class \(Y{{C}^{{(r)}}},\;r = \overline {1,4} ,\) then, under the conditions of Theorem 2, we have the estimate

$${{\left\| {\rho _{n}^{*}} \right\|}_{Y}} = O({{n}^{{ - r}}}),\quad r = \overline {1,4} .$$

Remark 4. Since \(C_{{0;1}}^{{\left\{ 0 \right\};\left\{ p \right\}}} \equiv C\left\{ {p;1} \right\} \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {0;0} \right\}\), for \(m = 0\), ETKFS (4.1) turns into an integral equation of the second kind in \(C\left\{ {p;1} \right\}\) with a fixed singularity in the kernel, and the proposed method (4.3) and (4.4) turns into the corresponding version of the collocation method with cubic splines, so that \(h \equiv {{S}_{t}}{{S}_{s}}K,\) \({{\alpha }_{j}} \equiv S{{\psi }_{j}},\) \(j = \overline {0,\lambda } ,\) and \(STy \equiv Sy.\) Therefore, Theorem 2 contains the corresponding results on the substantiation of this version of the collocation method for the approximate solution of equations of the second kind with a singularity in the kernel; in this case, the error is estimated as \({{\left\| {x_{n}^{*} - x\text{*}} \right\|}_{{\left\{ p \right\}}}} = O({{n}^{{ - r}}}),\) \(r = \overline {1,4} .\)

Remark 5. If \(m = p = 0\), then \(C_{{0;1}}^{{\left\{ 0 \right\};\left\{ 0 \right\}}} \equiv C \equiv {{D}^{{\left\{ 0 \right\}}}}\left\{ {0;0} \right\}\) and ETKFS (4.1) also turns into an integral equation of the second kind in the space \(C.\) In this case, method (4.3) and (4.4) turns into the corresponding cubic spline collocation method for an equation of the second kind, so that \(h \equiv K,\) \(STy \equiv y.\) Therefore, Theorem 2 also substantiates this method for approximate solution of equations of the second kind in \(C.\) The corresponding error is estimated as \({{\left\| {x_{n}^{*} - x\text{*}} \right\|}_{C}} = O({{n}^{{ - r}}}),\) \(r = \overline {1,4} .\)
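To illustrate the degenerate case described in Remark 5, here is a minimal sketch of the cubic-spline collocation method for an equation of the second kind in \(C\); the kernel and the exact solution are our own illustrative choices, the right-hand side is generated from them, conditions (3.2) are again realized through SciPy's "not-a-knot" option, and the integrals are approximated by Gauss–Legendre quadrature.

```python
# Cubic-spline collocation for x(t) + int_{-1}^{1} K(t,s) x(s) ds = y(t),
# i.e., the case m = p = 0 of Remark 5. Illustrative data: the kernel K and
# exact solution x_exact are our own choices; y is generated from them.
import numpy as np
from scipy.interpolate import CubicSpline

K = lambda t, s: np.exp(-t * s)
x_exact = np.cos
gx, gw = np.polynomial.legendre.leggauss(60)   # Gauss nodes/weights on [-1, 1]

def y(t):   # right-hand side, computed by quadrature
    t = np.atleast_1d(t)
    return x_exact(t) + (K(t[:, None], gx) * (gw * x_exact(gx))).sum(axis=1)

n = 32
s = np.linspace(-1.0, 1.0, n + 1)              # collocation nodes = grid (3.1)

# Cardinal not-a-knot splines e_j with e_j(s_i) = delta_ij; representing
# x_n by its nodal values builds conditions (3.2) into the basis, so only
# the n + 1 collocation equations rho_n(s_i) = 0 remain.
E = [CubicSpline(s, np.eye(n + 1)[j], bc_type='not-a-knot') for j in range(n + 1)]

# Collocation matrix: M[i, j] = e_j(s_i) + int K(s_i, s) e_j(s) ds.
M = np.empty((n + 1, n + 1))
for j, ej in enumerate(E):
    M[:, j] = (np.arange(n + 1) == j) + (K(s[:, None], gx) * (gw * ej(gx))).sum(axis=1)

vals = np.linalg.solve(M, y(s))                # nodal values of x_n
x_n = CubicSpline(s, vals, bc_type='not-a-knot')

tt = np.linspace(-1.0, 1.0, 2001)
print(np.max(np.abs(x_n(tt) - x_exact(tt))))   # O(n^{-4}) for smooth data, cf. Theorem 2
```

Representing \({{x}_{n}}\) through its nodal values is only a convenient reparametrization of scheme (4.3) and (4.4) in this case: the two conditions (3.2) are absorbed into the spline construction, and the resulting \((n + 1) \times (n + 1)\) system is SLAE (4.4) for \(m = p = 0\).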

Remark 6. Since, under the conditions of Theorem 2, the corresponding approximating operators \({{A}_{n}}\) have the property

$$\left\| {A_{n}^{{ - 1}}} \right\| = O(1),\quad A_{n}^{{ - 1}}:{{Y}_{n}} \to {{X}_{n}},\quad n \geqslant {{n}_{1}},$$

it is obvious (see [13, Ch. 1, Section 5]) that the direct method proposed here for ETKFS (4.1) is stable with respect to small perturbations of the initial data. This allows one to find a numerical solution of the equations under study on a computer with any predetermined order of accuracy. Moreover, if ETKFS (4.1) is well conditioned, then SLAE (4.4) is also well conditioned.