Abstract
A linear integral equation of the third kind with fixed singularities in the kernel is studied. For its approximate solution in the space of generalized functions, a special generalized version of the spline method is proposed and substantiated. Optimality of the method in order of accuracy is proved.
1 INTRODUCTION
The object of study is the linear integral equation of the third kind with fixed singularities in the kernel (ETKFS):
where
\(y\) are known continuous functions having certain pointwise smoothness, \(x(t)\) is the sought function, and the integral is understood as a Hadamard finite part integral [1, p. 144–150]. Equations of the form (1.1) are finding increasingly wide use both in theory and in applications. Equations of this kind arise in a number of important problems in the theories of elasticity, neutron transport, and particle scattering (see, e.g., [2, 3] and the references in [2, 4]), as well as in the theories of singular integral equations with a degenerate symbol [5] and of partial differential equations of mixed type [6]. In this case, as a rule, the natural classes of solutions of integral equations of the third kind (ETK) are special spaces of generalized functions (SGFs) of type \(D\) or \(V\). By \(D\) (respectively, \(V\)) we mean the SGF constructed on the basis of the Dirac delta functional (respectively, the Hadamard finite part integral). The ETKFSs under study can be solved exactly only in very rare special cases; therefore, the development of theoretically substantiated efficient methods for their approximate solution in SGFs is a relevant and actively developing area of mathematical analysis and computational mathematics. A number of results in this field were obtained in [7–12], where special direct polynomial methods for solving ETKFSs of the form (1.1) in SGFs of type \(D\) and \(V\) were proposed and substantiated.
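As an illustration of the finite-part concept (this sketch is ours and is not part of the original paper), the divergent integral \(\int_0^1 f(t)\,t^{-3/2}\,dt\) can be assigned a Hadamard finite part by subtracting the singular contribution of \(f(0)\); the exponent \(-3/2\), the quadrature, and the helper name below are illustrative assumptions only:

```python
def hadamard_fp(f, n=2000):
    """Finite part of the divergent integral  f.p. INT_0^1 f(t) t^(-3/2) dt.

    Subtracting f(0) leaves an integrable remainder, while the finite part of
    INT_0^1 f(0) t^(-3/2) dt equals -2 f(0) (the divergent 2/sqrt(eps) term
    is discarded by definition).  The substitution t = u^2 removes the
    remaining mild singularity, and the midpoint rule handles the rest.
    """
    f0 = f(0.0)
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h          # midpoint rule in the variable u
        t = u * u                  # t = u^2, dt = 2u du
        total += 2.0 * (f(t) - f0) / (u * u) * h
    return total - 2.0 * f0

# Checks against directly computable cases:
#   f == 1:  f.p. INT t^(-3/2) dt = -2
#   f == t:       INT t^(-1/2) dt =  2   (an ordinary convergent integral)
print(hadamard_fp(lambda t: 1.0))   # close to -2.0
print(hadamard_fp(lambda t: t))     # close to  2.0
```

The same subtraction idea underlies the rigorous definition used in [1]: the finite part is what remains after the explicitly divergent terms in \(\varepsilon\) are removed.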
In this paper, for an approximate solution of ETKFS (1.1) in an SGF of type \(D\), we propose a new version of the generalized collocation method based on the use of cubic splines with minimal defect. Its theoretical substantiation in the sense of [13, Ch. 1] is carried out, and it is shown that this method is optimal in order of accuracy on a certain class \(F\) of smooth functions among all direct projection methods for solving the equations under study in the SGFs.
2 ON THE SPACES OF TEST AND GENERALIZED FUNCTIONS
Let \(C \equiv C(I)\) be the space of functions continuous on \(I\) with the usual max-norm and let \(m \in N\). Following [14], we say that a function \(f \in C\) belongs to the class \(C\left\{ {m;0} \right\} \equiv C_{0}^{{\left\{ m \right\}}}(I)\) if, at the point \(t = 0\), there exists an \(m\)th-order Taylor derivative \({{f}^{{\left\{ m \right\}}}}(0)\) (naturally, we assume that \(C\left\{ {0;0} \right\} \equiv C\)). The set \(C\left\{ {m;0} \right\}\) is called the class of pointwise smooth functions with the characteristic operator \(T:C\left\{ {m;0} \right\} \to C\) defined by the rule
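For the reader's convenience, we recall the standard notion involved here (this reminder is ours; it is the usual Taylor, or Peano, derivative): \(f\) has an \(m\)th-order Taylor derivative at \(t = 0\) if it admits the expansion

```latex
f(t) = \sum_{k=0}^{m} a_k t^k + o(t^m), \qquad t \to 0,
```

in which case the coefficients \(a_k\) are unique and one sets \({{f}^{{\left\{ k \right\}}}}(0) = k!\,a_k\), \(k = \overline{0,m}\) (this convention agrees with the relations \({{f}^{{\left\{ i \right\}}}}(0) = {{e}_{i}}\,i!\) used below). Unlike ordinary \(m\)-fold differentiability, this requires nothing of \(f\) away from the point \(t = 0\).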
With the norm
the space \(C\left\{ {m;0} \right\}\) is complete and normally embedded in \(C\) (see, e.g., [15, Ch. 1, Section 2]).
Next, let \(p \in {{R}^{ + }}\) and \(g \in C\). Following [14], we will write \(g \in C\left\{ {p;1} \right\} \equiv C_{1}^{{\left\{ p \right\}}}(I)\) if there exist left Taylor derivatives \({{g}^{{\left\{ j \right\}}}}(1)\quad \left( {j = \overline {1,\left[ p \right]} } \right)\) at the point \(t = 1\) and, for \(p \ne \left[ p \right]\) (\(\left[ {\;} \right]\) denotes the integer part of a number), there is a finite limit
We supply the vector space \(C\left\{ {p;1} \right\}\) with the norm
where
Note that elements of the space \(C\left\{ {p;1} \right\}\) are functions of the form
where \(Sg = G(t) \in C,\) \({{g}^{{\left\{ i \right\}}}}(1) = {{b}_{i}}\,i!\) \((i = \overline {0,\lambda } ).\) Hence, the space \(C\left\{ {p;1} \right\}\) with norm (2.1) is complete and embedded in \(C\).
Now we form the test space for our studies:
We define the norm in it as
Lemma 2.1 (see [7]). 1) The test functions have the following structure:
where \(\Phi \in C,\) \({{\alpha }_{j}} \in R,\) \(j = \overline {0,\lambda } ,\) \({{e}_{i}} \in R,\) \(i = \overline {0,m - 1} ,\) and \(ST\varphi = \Phi ,\;\) \({{(T\varphi )}^{{\left\{ j \right\}}}}(1) = {{\alpha }_{j}}j!\) \((\forall j),\) \({{\varphi }^{{\left\{ i \right\}}}}(0) = {{e}_{i}}\,i!\) \((\forall i);\) \(Uf \equiv {{t}^{m}}f(t),\) \(Vf \equiv {{(1 - t)}^{p}}f(t);\)
2) the space \(Y\) is complete in norm (2.4) and embedded in \(C\left\{ {m;0} \right\}.\)
Let \(h \in C({{I}^{2}})\) and, for each fixed \(s \in I\), the function \(h(t,s)\) belong to the space \(C\left\{ {p;1} \right\}\). We will say that \(h \in C_{t}^{{\left\{ p \right\}}}({{I}^{2}})\) if \({{S}_{t}}h \in C\), where \({{S}_{t}}\) denotes operator (2.2) applied with respect to the variable \(t\). In the same way, we define the class \(C_{s}^{{\left\{ p \right\}}}({{I}^{2}})\). Then,
Now, over the space of test functions Y, we construct a family \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\) of generalized functions \(x(t)\) of the form
where \(t \in I,\) \(z \in C\left\{ {p;1} \right\},\) \({{\gamma }_{i}} \in R\) are arbitrary constants, and \(\delta \) and \({{\delta }^{{\left\{ i \right\}}}}\) are, respectively, the Dirac delta function and its Taylor derivatives acting on the space \(Y\) of test functions according to the rule
It is clear that the vector space \(X\) with the norm
is a Banach space.
3 ON THE SPLINE APPROXIMATION OF POINTWISE SMOOTH FUNCTIONS
Let us consider the approximation of the elements of the test space \(Y \equiv C_{{0;1}}^{{\left\{ m \right\};\left\{ p \right\}}}(I)\) using cubic splines.
We define on \(I\) a uniform grid
where \({{s}_{k}} \equiv - 1 + 2k{\text{/}}n,\) \(k = \overline {0,n} ,\) and consider on it a cubic spline
satisfying the boundary conditions
Here, the basis functions \({{B}_{i}}(t)\) are \(B\)-splines with support \(({{s}_{{i - 2}}},{{s}_{{i + 2}}})\) (see, e.g., [16, Ch. 3, Section 8]). To determine all the functions \({{B}_{i}}(t)\), we supplement grid (3.1) with uniformly spaced nodes \({{s}_{{ - 3}}} < {{s}_{{ - 2}}} < {{s}_{{ - 1}}} < {{s}_{0}} \equiv - 1\) and \(1 \equiv {{s}_{n}} < {{s}_{{n + 1}}} < {{s}_{{n + 2}}} < {{s}_{{n + 3}}}\). We denote by \(S_{n}^{3}\) the space of all cubic splines \({{z}_{n}}(t)\) on the grid \({{\Delta }_{n}}\) that possess property (3.2), equipped with the norm \({{\left\| {{{z}_{n}}} \right\|}_{C}}\). Next, we denote by \({{P}_{n}}:C \to S_{n}^{3}\) the operator that assigns to any function \(f \in C\) its interpolating cubic spline \({{P}_{n}}f \in S_{n}^{3}\) satisfying condition (3.2) and \(\left( {{{P}_{n}}f} \right)({{s}_{i}}) = f({{s}_{i}}),\) \(i = \overline {0,n} \). In [16, Ch. 3, Section 1, Theorem 3.1], the existence and uniqueness of an interpolating cubic spline are proved under various boundary conditions, and an algorithm for constructing such splines is presented. It is also specially noted there [16, Ch. 3, Section 5] that, in approximation by cubic splines, the choice of boundary conditions (3.2) is the most appropriate.
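The interpolation operator \({{P}_{n}}\) can be sketched numerically. The boundary conditions (3.2) are not reproduced in this extraction, so, as a stand-in assumption, the sketch below builds a natural cubic spline (zero second derivatives at the endpoints) on the uniform grid \({{s}_{k}} = -1 + 2k/n\) by solving the standard tridiagonal system for the nodal second derivatives; the function names are ours:

```python
import math

def natural_cubic_spline(ys, a=-1.0, b=1.0):
    """Interpolating natural cubic spline through ys on a uniform grid of len(ys) nodes.

    The nodal second derivatives M_k solve the tridiagonal system
        (h/6) M_{k-1} + (2h/3) M_k + (h/6) M_{k+1} = (y_{k+1} - 2 y_k + y_{k-1}) / h
    with M_0 = M_n = 0 (natural boundary conditions).  Returns a callable.
    """
    n = len(ys) - 1
    h = (b - a) / n
    sub, diag, sup = h / 6.0, 2.0 * h / 3.0, h / 6.0
    rhs = [(ys[k + 1] - 2.0 * ys[k] + ys[k - 1]) / h for k in range(1, n)]
    # Thomas algorithm for the n-1 interior unknowns M_1 .. M_{n-1}.
    cp, dp = [0.0] * (n - 1), [0.0] * (n - 1)
    for i in range(n - 1):
        denom = diag - (sub * cp[i - 1] if i > 0 else 0.0)
        cp[i] = sup / denom
        dp[i] = (rhs[i] - (sub * dp[i - 1] if i > 0 else 0.0)) / denom
    M = [0.0] * (n + 1)                       # M[0] = M[n] = 0
    for i in range(n - 2, -1, -1):
        M[i + 1] = dp[i] - cp[i] * M[i + 2]

    def spline(t):
        k = min(max(int((t - a) / h), 0), n - 1)
        x0, x1 = a + k * h, a + (k + 1) * h
        A, B = (x1 - t) / h, (t - x0) / h
        return (A * ys[k] + B * ys[k + 1]
                + ((A**3 - A) * M[k] + (B**3 - B) * M[k + 1]) * h * h / 6.0)

    return spline

# Usage: interpolate f(t) = sin(pi t), whose second derivative vanishes at t = -1 and
# t = 1, so the natural boundary conditions are consistent with the interpolated f.
n = 16
nodes = [-1.0 + 2.0 * k / n for k in range(n + 1)]
s = natural_cubic_spline([math.sin(math.pi * t) for t in nodes])
```

Halving the step should shrink the interior error by roughly \(2^{4}\), in line with the \(O(n^{-4})\) rate for \(f \in C^{(4)}\) quoted from [17, 18] in Lemma 3.1 below.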
Theorems 9, 10, and 13 in [17, Ch. 2, Section 4] and the corresponding result in [18] (see Lemma 2 in [18]) imply the following result.
Lemma 3.1. Let \(r = \overline {1,4} \) and \(f \in {{C}^{{(r)}}} \equiv {{C}^{{(r)}}}(I)\). Then,
Let \({{\Pi }_{q}} \equiv {\text{span\{ }}{{t}^{i}}{\text{\} }}_{0}^{q}\) be the class of all algebraic polynomials of degree not higher than \(q\). Denote by \({{Y}_{n}} \equiv {\text{span}}\left\{ {UV{{B}_{i}}} \right\}_{{ - 1}}^{{n + 1}} \oplus {{\Pi }_{{m + \lambda }}}\) the \((n + m + \lambda + 4)\)-dimensional subspace of \(Y\) and introduce into consideration the operator \({{\Gamma }_{n}} \equiv {{\Gamma }_{{n + m + \lambda + 4}}}:Y \to {{Y}_{n}}\) that puts into correspondence to any function \(y \in Y\) a generalized spline \({{\Gamma }_{n}}y \in {{Y}_{n}}\) defined by the conditions
Following the reasoning in [15, Ch. 1, Section 5, 5.3], it is easy to obtain the representation
Lemma 3.2. \({{\Gamma }_{n}}\) is a projector in the space Y.
By virtue of (3.4) and \(P_{n}^{2} = {{P}_{n}}\), this lemma can also be proved similarly to Lemma 1.5.1 in [15, Ch. 1, Section 5]. The role of the operators \(U\) and \(T\) in Lemma 1.5.1 is played by \(UV\) and \(ST\), respectively.
Henceforward, we will use the following notation:
where \(r = 0,1,2,...;\) \(Y{{C}^{{(0)}}} \equiv Y.\)
The following theorem characterizes the rate of convergence of the generalized interpolation splines to the interpolated function.
Theorem 1. If \(y \in Y{{C}^{{(r)}}},\,\,r = \overline {1,4} ,\) then
Proof. By virtue of (2.5), (3.4), (2.4), (2.1), and Lemma 3.1, we successively find
Remark 1. Obviously, estimate (3.5) and the well-known Banach–Steinhaus theorem imply uniform boundedness of the norms of the operators \({{\Gamma }_{n}}:\left\| {{{\Gamma }_{n}}} \right\| = O(1),\)\(n \to \infty \).
4 GENERALIZED COLLOCATION METHOD WITH CUBIC SPLINES (GCMCS)
Let there be given ETKFS (1.1). To reduce cumbersome calculations and simplify formulations, without limiting the generality of the methods and results, henceforward, we will assume that \(l = 1,\) \({{t}_{1}} = 0,\) and \({{p}_{1}} = 0\), i.e., consider an equation of the form
where \(m \in N,\) \(p \in {{R}^{ + }},\) \(y \in Y\), and \(K\) is a known continuous function satisfying the conditions
and \(x \in X\) is the sought generalized function. The Fredholm property of the operator \(A:X \to Y\) and sufficient conditions for its continuous invertibility were established in [7], where a method for finding the exact solution of ETKFS (4.1) in the class \(X\) was also described.
We construct an approximate solution of Eq. (4.1) in the form
where \({{z}_{n}}(t) \equiv \sum\nolimits_{i = - 1}^{n + 1} {{{c}_{i}}} {{B}_{i}}(t)\) is the cubic spline considered above in Section 3. According to our GCMCS, the set \(\left\{ {{{c}_{k}}} \right\}_{{ - 1}}^{{n + m + \lambda + 2}}\) of unknown parameters is found from a square system of linear algebraic equations (SLAE) of order \(n + m + \lambda + 4\):
where \({{\rho }_{n}}(t) \equiv \rho _{n}^{A}(t) \equiv (A{{x}_{n}} - y)(t)\) is the residual of the approximate solution and \(\left\{ {{{s}_{i}}} \right\}_{0}^{n}\) is the previously used system of collocation nodes generating grid (3.1).
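To convey the flavor of the scheme without the \(B\)-spline and singularity machinery, the sketch below (ours, not from the paper) applies collocation to the degenerate case \(m = p = 0\), in which (4.1) reduces to an ordinary second-kind equation \(x(t) - \int_0^1 K(t,s)x(s)\,ds = y(t)\) (cf. Remarks 4 and 5 below). For brevity it uses piecewise-linear hat functions in place of cubic \(B\)-splines; the kernel, right-hand side, and all names are illustrative assumptions:

```python
def collocate_second_kind(K, y, n, quad=400):
    """Collocation for  x(t) - INT_0^1 K(t,s) x(s) ds = y(t)  on [0, 1].

    The approximate solution is x_n = sum_i c_i phi_i with hat functions
    phi_i on the uniform grid t_i = i/n; requiring zero residual at the
    nodes t_k gives a square SLAE for the coefficients (the analogue of
    system (4.4) in the simplest setting).
    """
    h = 1.0 / n
    t = [i * h for i in range(n + 1)]

    def hat(i, s):
        d = abs(s - t[i])
        return 1.0 - d / h if d < h else 0.0

    # Assemble A[k][i] = phi_i(t_k) - INT K(t_k, s) phi_i(s) ds (trapezoidal rule).
    hq = 1.0 / quad
    sq = [j * hq for j in range(quad + 1)]
    w = [hq if 0 < j < quad else hq / 2.0 for j in range(quad + 1)]
    A = [[(1.0 if i == k else 0.0)
          - sum(wj * K(t[k], sj) * hat(i, sj) for sj, wj in zip(sq, w))
          for i in range(n + 1)] for k in range(n + 1)]
    b = [y(tk) for tk in t]

    # Solve the SLAE by Gaussian elimination with partial pivoting.
    for col in range(n + 1):
        p = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        c[r] = (b[r] - sum(A[r][j] * c[j] for j in range(r + 1, n + 1))) / A[r][r]
    return t, c  # nodal values of the approximate solution

# Test problem: K(t,s) = t*s with y(t) = 2t/3 has the exact solution x(t) = t,
# which lies in the hat-function space, so the nodal values should match t_k.
nodes, coef = collocate_second_kind(lambda t, s: t * s, lambda t: 2.0 * t / 3.0, 8)
```

In the full GCMCS the unknowns additionally carry the singular components of \({{x}_{n}}\), and the residual conditions at the nodes \({{s}_{i}}\) are supplemented by the extra conditions of system (4.3); the linear-algebra structure, however, is the same square SLAE.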
In the substantiation of the proposed method, a useful role is played by the functions
For the computational algorithm (4.1)–(4.4), we have the following theorem.
Theorem 2. Suppose that the homogeneous ETKFS \(Ax = 0\) has only the zero solution in \(X\) (e.g., under the conditions of Theorem 3 in [7]) and the initial data are such that \(u \equiv {{S}_{s}}K\) (with respect to \(t\)), \({{\psi }_{j}},{{\Psi }_{i}},y \in Y{{C}^{{(r)}}},\) \(r = \overline {1,4} ,\) \(j = \overline {0,\lambda } ,\) \(i = \overline {0,m - 1} .\) Then, for all \(n \in N,\) \(n \geqslant {{n}_{0}}\), SLAE (4.4) has a unique solution \(\{ c_{k}^{*}\} \) and the sequence of approximate solutions \(x_{n}^{*} \equiv {{x}_{n}}(t;\{ c_{k}^{*}\} )\) converges to the exact solution \(x\text{*} = {{A}^{{ - 1}}}y\) in the norm of the space \(X\) at the rate
Proof. ETKFS (4.1) is represented as a linear operator equation
where the operator \(A:X \to Y\) is continuously invertible.
Let us also write system (4.3) and (4.4) in operator form. To this end, we construct the corresponding finite-dimensional subspaces in the form
Then, following the reasoning in the proof of Theorem 4.3.1 (see [15, Ch. 4, Section 3]), it is easy to show that computational scheme (4.3) and (4.4) for the GCMCS is equivalent to the linear operator equation
where \({{\Gamma }_{n}}:Y \to {{Y}_{n}}\) is the spline operator introduced and studied in detail in Section 3. Therefore, to prove Theorem 2, it suffices to establish the existence, uniqueness, and convergence of solutions of Eq. (4.7).
Let us refine the structure of the approximating equation (4.7). Since, by Lemma 3.2, \(\Gamma _{n}^{2} = {{\Gamma }_{n}},\) we have \({{\Gamma }_{n}}U{{x}_{n}} = U{{x}_{n}} \in {{Y}_{n}}\) for any element \({{x}_{n}} \in {{X}_{n}}.\) Thus, system (4.3) and (4.4) is equivalent to a linear equation of the form
Let us now analyze the proximity of the operators \(A\) and \({{A}_{n}}\) in the subspace \({{X}_{n}}\). Using Eqs. (4.6) and (4.8), representations (2.5) and (3.4), and norms (2.4) and (2.1), for an arbitrary element \({{x}_{n}} \in {{X}_{n}}\), we find that
From (4.1), (4.2), and (2.6), we have
where
Then, taking into account (4.3) and (4.2), we obtain
where \(h \equiv {{S}_{t}}{v},\) \({{\alpha }_{j}} \equiv ST{{\psi }_{j}},\) and \({{\beta }_{i}} \equiv ST{{\Psi }_{i}},\) \(j = \overline {0,\lambda } ,\) \(i = \overline {0,m - 1} .\)
By virtue of (4.10), (3.3), and definition (2.7), we successively derive the following estimate (hereinafter, \({{d}_{i}},\;i = \overline {1,4} \) are certain constants not dependent on the natural number \(n\)):
where \({{d}_{2}} \equiv [{{2}^{{p + 1}}} + ({{2}^{{\lambda + 1}}} + \beta )(\lambda + 1) + m]{{d}_{1}},\;\beta \equiv \mathop {\max }\limits_{0 \leqslant j,k \leqslant \lambda } \left| {{{\beta }_{{jk}}}} \right|.\)
Equality (4.9) and estimate (4.11) imply that
Then, due to inequalities (4.12) and (3.5), Theorem 7 (see [13, Ch. 1, Section 4]) implies the assertion of Theorem 2 with estimate (4.5). The theorem is proved.
In what follows, when optimizing direct projection methods for solving ETKFS (4.1), an essential role will be played by the following theorem.
Theorem 3. Suppose that ETKFS (4.1) has a solution of the form
for given \(y \in Y\) and the corresponding approximating operator \({{A}_{n}}\) in the GCMCS is continuously invertible. Then, the error of the approximate solution \(x_{n}^{*} \in {{X}_{n}}\) for the right-hand side \({{\Gamma }_{n}}y \in {{Y}_{n}}\) can be represented as
Proof. By virtue of Theorem 6 (see [13, Ch. 1, Section 3]) and the structure of the approximate equation (4.8), we have
where \(x_{n}^{{}} \in {{X}_{n}}\) is so far an arbitrary element. We choose it as follows:
Then, the required estimate (4.14) follows from (4.15), (4.13), (2.3), (4.16), (2.7), (2.1), and Lemma 3.1, taking into account Remark 1:
5 ON OPTIMIZATION OF THE DIRECT PROJECTION METHODS FOR SOLVING ETKFS
Let us preliminarily present the necessary definitions and the statement of the problem. Let \(X\) and \(Y\) be Banach spaces and \({{X}_{n}}\) and \({{Y}_{n}}\) be their corresponding arbitrary subspaces of the same dimension \(N = N(n) < + \infty ,\) \(n \in N,\) so that \(N \to \infty \) as \(n \to \infty \). We denote by \({{\Lambda }_{n}} \equiv \left\{ {{{\lambda }_{n}}} \right\}\) some set of linear operators \({{\lambda }_{n}}\) mapping \(Y\) onto \({{Y}_{n}}\). Next, we consider two classes of uniquely solvable linear operator equations
and
respectively. Let \(x\text{*} \in X\) and \(x_{n}^{*} \in {{X}_{n}}\) be solutions of Eqs. (5.1) and (5.2), respectively, and \(F \equiv \left\{ f \right\}\) be the class of coefficients (i.e., initial data) of Eq. (5.1), generating the class \(X\text{*} \equiv \left\{ {x\text{*}} \right\}\) of sought-for elements.
Following [13, Ch. 2, Section 1], the quantity
where
is called the optimal error estimate for all possible direct projection methods \((\lambda _{n}^{{}} \in {{\Lambda }_{n}})\) for solving Eq. (5.1) in the class F.
Definition 1 (see [13, Ch. 2, Section 1]). Let there exist subspaces \(X_{n}^{0} \subset X,\) \(Y_{n}^{0} \subset Y\) of dimension \(N = N(n) < + \infty \) and operators \(\lambda _{n}^{0}:Y \to Y_{n}^{0},\) \(\lambda _{n}^{0} \in {{\Lambda }_{n}},\) ensuring the condition
where the symbol \( \succ \prec \) means, as usual, weak equivalence. Then, method (5.1) and (5.2) for \({{X}_{n}} = X_{n}^{0},\;{{Y}_{n}} = Y_{n}^{0}\), and \({{\lambda }_{n}} = \lambda _{n}^{0}\) is called optimal in order of accuracy on the class \(F\) among all direct projection methods \({{\lambda }_{n}}\;\left( {{{\lambda }_{n}} \in {{\Lambda }_{n}}} \right)\) for solving Eqs. (5.1).
Let us now consider the optimization in order of accuracy on the class of uniquely solvable (uniformly in \(K \in F\)) ETKFSs (4.1) in the case when the initial data belong to the family \(Y{{C}^{{\left( r \right)}}},\) i.e., for \(u \equiv {{S}_{s}}K\) (with respect to \(t\)), \({{\psi }_{j}},\,j = \overline {0,\lambda } ,{{\Psi }_{i}},\) \(i = \overline {0,m - 1} ,\) \(y \in Y{{C}^{{(r)}}},\) \(r = \overline {1,4} .\) Then, by Theorem 3 from [7], we have
where \(X{{C}^{{(r)}}} \equiv \left\{ {x \in X\,{\text{|}}\,STUx \in {{C}^{{\left( r \right)}}}} \right\}.\)
Next, let
and \(\Lambda _{n}^{0} \equiv \left\{ {{{\lambda }_{n}}} \right\}\) be the family of all linear operators \(\lambda _{n}^{{}}:Y \to Y_{n}^{0}.\)
Theorem 4. Let \(F = Y{{C}^{{(r)}}},\) \({{\Lambda }_{n}} = \Lambda _{n}^{0}.\) Then,
and this order, optimal in accuracy, realizes the GCMCS.
Proof. Note that the definition of the Kolmogorov \(N\)th width \({{d}_{N}}(L,X)\) of a set \(L\) in a normed space \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\) (see, e.g., [19, Ch. 1, Section 1]) and Theorem 1.3.6 (see [4, Ch. 1, Section 1.3]) imply the equality
which, in turn, since \({{d}_{l}}\left( {{{C}^{{\left( r \right)}}},C} \right) \succ \prec {{l}^{{ - r}}}\;\left( {l \in N} \right)\) (see, e.g., [19, Ch. 3, Sec. 3]), implies the weak equivalence
It is known (see [13, Ch. 4, Section 2]) that \({{V}_{N}}(F) \geqslant {{d}_{N}}(X\text{*},X).\) Therefore, (5.6) implies
On the other hand, according to (5.3) and Theorems 2 and 3, we find the estimate
Hence and from relations (5.7) and (5.4), we obtain the assertion of Theorem 4 with estimate (5.5).
6 CONCLUDING REMARKS
Remark 2. By virtue of the definition of the norm in \(X \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {m;0} \right\}\), it is easy to see that the convergence of a sequence \((x_{n}^{*})\) of approximate solutions to the exact solution \(x\text{*} = {{A}^{{ - 1}}}y\) in the metric of \(X\) implies the ordinary convergence in the space of generalized functions, i.e., weak convergence.
Remark 3. When approximating solutions of operator equations \(Ax = y\), a natural question arises about the convergence rate of the residual of the method, \(\rho _{n}^{*}(t) \equiv (Ax_{n}^{*} - y)(t)\). One of the results in this direction can be easily obtained from Theorem 2, namely, its simple corollary: if the initial data of Eq. (4.1) belong to the class \(Y{{C}^{{(r)}}},\;r = \overline {1,4} ,\) then, under the conditions of Theorem 2, we have the estimate
Remark 4. Since \(C_{{0;1}}^{{\left\{ 0 \right\};\left\{ p \right\}}} \equiv C\left\{ {p;1} \right\} \equiv {{D}^{{\left\{ p \right\}}}}\left\{ {0;0} \right\}\), for \(m = 0\), ETKFS (4.1) turns into an integral equation of the second kind in \(C\left\{ {p;1} \right\}\) with a fixed singularity in the kernel and the proposed method (4.3) and (4.4) turns into the corresponding version of the collocation method with cubic splines, so that \(h \equiv {{S}_{t}}{{S}_{s}}K,\) \({{\alpha }_{j}} \equiv S{{\psi }_{j}},\) \(j = \overline {0,\lambda } ,\) and \(STy \equiv Sy.\) Therefore, Theorem 2 contains the corresponding results on the substantiation of this version of the collocation method for the approximate solution of equations of the second kind with a singularity in the kernel; in this case, the error is estimated as \({{\left\| {x_{n}^{*} - x\text{*}} \right\|}_{{\left\{ p \right\}}}} = O({{n}^{{ - r}}}),\) \(r = \overline {1,4} .\)
Remark 5. If \(m = p = 0\), then \(C_{{0;1}}^{{\left\{ 0 \right\};\left\{ 0 \right\}}} \equiv C \equiv {{D}^{{\left\{ 0 \right\}}}}\left\{ {0;0} \right\}\) and ETKFS (4.1) also turns into an integral equation of the second kind in the space \(C.\) In this case, method (4.3) and (4.4) turns into the corresponding cubic spline collocation method for an equation of the second kind, so that \(h \equiv K,\) \(STy \equiv y.\) Therefore, Theorem 2 also substantiates this method for approximate solution of equations of the second kind in \(C.\) The corresponding error is estimated as \({{\left\| {x_{n}^{*} - x\text{*}} \right\|}_{C}} = O({{n}^{{ - r}}}),\) \(r = \overline {1,4} .\)
Remark 6. Since, under the conditions of Theorem 2, the corresponding approximating operators \({{A}_{n}}\) have the property
it is obvious (see [13, Ch. 1, Section 5]) that the direct method proposed here for ETKFS (4.1) is stable with respect to small perturbations of the initial data. This allows one to find a numerical solution of the equations under study on a computer with any predetermined order of accuracy. Moreover, if ETKFS (4.1) is well conditioned, then SLAE (4.4) is also well conditioned.
REFERENCES
1. J. Hadamard, Le Problème de Cauchy et les Équations aux Dérivées Partielles Linéaires Hyperboliques (Hermann, Paris, 1933).
2. G. R. Bart and R. L. Warnock, “Linear integral equations of the third kind,” SIAM J. Math. Anal. 4 (4), 609–622 (1973).
3. K. Case and P. Zweifel, Linear Transport Theory (Addison-Wesley, Reading, 1967).
4. R. R. Zamaliev, Candidate’s Dissertation in Mathematics and Physics (Kazan Federal Univ., Kazan, 2012).
5. S. N. Raslambekov, “Singular integral equation of the first kind in an exceptional case in classes of generalized functions,” Izv. Vyssh. Uchebn. Zaved., Mat., No. 10, 51–56 (1983).
6. Kh. G. Bzhikhatlov, “On a boundary value problem with a shift,” Differ. Uravn. 9 (1), 162–165 (1973).
7. N. S. Gabbasov, “Methods for solving an integral equation of the third kind with fixed singularities in the kernel,” Differ. Equations 45 (9), 1370–1378 (2009).
8. N. S. Gabbasov and R. R. Zamaliev, “A new variant of the subdomain method for integral equations of the third kind with singularities in the kernel,” Russ. Math. 55 (5), 8–13 (2011).
9. N. S. Gabbasov, “A new version of the collocation method for a class of integral equations in the singular case,” Differ. Equations 49 (9), 1142–1149 (2013).
10. N. S. Gabbasov, “Special direct method for solving integral equations in the singular case,” Differ. Equations 50 (9), 1232–1239 (2014).
11. N. S. Gabbasov and Z. Kh. Galimova, “On numerical solving integral equations of the third kind with singularities in a kernel,” Russ. Math. 60 (5), 28–35 (2016).
12. N. S. Gabbasov and Z. Kh. Galimova, “Special version of collocation method for integral equations of the third kind with fixed singularities in a kernel,” Russ. Math. 62 (5), 16–22 (2018).
13. B. G. Gabdulkhaev, Optimal Approximation of Solutions to Linear Problems (Kazan. Univ., Kazan, 1980) [in Russian].
14. S. Prössdorf, “Singular integral equation with a symbol vanishing at a finite number of points,” Mat. Issled. 7 (1), 116–132 (1972).
15. N. S. Gabbasov, Methods for Solving Fredholm Integral Equations in Spaces of Generalized Functions (Kazan. Univ., Kazan, 2006) [in Russian].
16. Yu. S. Zav’yalov, B. I. Kvasov, and V. L. Miroshnichenko, Methods of Spline Functions (Nauka, Moscow, 1980) [in Russian].
17. S. B. Stechkin and Yu. N. Subbotin, Splines in Computational Mathematics (Nauka, Moscow, 1976) [in Russian].
18. A. Pedas and E. Timak, “The cubic spline-collocation method for weakly singular integral equations,” Differ. Equations 37 (10), 1491–1500 (2001).
19. I. K. Daugavet, Introduction to the Theory of Function Approximation (Leningr. Gos. Univ., Leningrad, 1977) [in Russian].
Ethics declarations
The authors declare that they have no conflicts of interest.
Translated by E. Chernokozhin
Gabbasov, N.S., Galimova, Z.K. On Numerical Solution of One Class of Integral Equations of the Third Kind. Comput. Math. and Math. Phys. 62, 316–324 (2022). https://doi.org/10.1134/S0965542522020075