INTRODUCTION

Numerical implementation of spatially three-dimensional mathematical models based on partial differential equations can be performed by the finite difference method. We single out its two main stages. At the first stage, the system of differential equations of mathematical physics is approximated on a grid; that is, a difference scheme with specified properties (stability, accuracy) is developed as a discrete analog of the original continuous mathematical model. The second stage consists of solving systems of linear algebraic equations (SLAEs) of a special type (with ill-conditioned, high-order matrices having a band structure) using the most modern computer technology, including supercomputers, MCS, and graphics accelerators. The use of classical methods of higher algebra for solving such SLAEs is often impractical, since these methods require a large amount of computational work and information storage. In recent decades, for the numerical implementation of computationally intensive problems of continuum mechanics, aerodynamics, hydrodynamics, and biological kinetics, methods have been actively developed that take into account the specifics of the problem posed and thereby reduce the number of operations compared to standard methods. In [1, 2], the iterative method is treated as an operator-difference scheme with an operator in the Hilbert space of grid functions.

The selection of the optimal parameters, for which the rate of convergence of the iterative method used to solve the emerging SLAE is highest, is a separate problem. Various iterative methods for solving grid equations are compared using the canonical form of iterative schemes, which allows singling out the operators responsible for the convergence of the computational process. An iterative method for solving a problem is selected based on a study of computational stability and an estimate of the rate of convergence [3]. Iterative methods of the variational type, including the conjugate gradient, steepest descent, minimum residual, and minimum correction methods, are widely used in the implementation of mathematical models of hydrophysics.

Specialized libraries and application packages are being developed for the implementation of iterative methods for nonself-adjoint, degenerate, and indefinite SLAE operators, as well as triangular methods with the inversion of a triangular matrix at each iteration, including the successive over-relaxation method, the alternating direction method, and methods for solving nonlinear difference equations in curvilinear coordinate systems. The highly efficient alternating triangular method proposed and developed in [3, 4] is universal and is used, in particular, to solve the Dirichlet problem for the Poisson equation with a strongly varying coefficient.

In [5], a nonstandard approach is used to increase the stability margin of explicit schemes in the mathematical modeling of multiphase flows in an underground space. The core of the approach is the use of regularized schemes oriented toward supercomputers. Part of the research discusses an original kinetically based approach to modeling filtration processes, including a mathematical model built by analogy with a quasi-gasdynamic system of equations that allows implementation by explicit numerical methods.

In [6], for the multiple solution of SLAEs, the methods of conjugate gradients and conjugate residuals are considered. The SLAEs under consideration have the same matrices, but different successively determined right-hand sides. When solving the second and subsequent SLAEs, deflation algorithms are used to speed up the iterative processes. In these algorithms, the vectors obtained during the solution of the first system are used as basis direction vectors.

When discretizing problems of aero- and hydrodynamics in computational domains of complex shape, optimal computational grids need to be constructed. In [7], an algorithm for constructing a tetrahedral mesh is proposed. It differs from other existing tetrahedral meshing algorithms in that the mesh is guaranteed to be built within a given time. The description of the main stages of the algorithm includes an estimate of its computational complexity.

The work [8] is devoted to the development of new applications of matrix methods. Particular attention is paid to methods of separating variables based on a special decomposition of matrices and tensors, the development of implementing algorithms and their applications in solving multidimensional problems of computational mathematics, data analysis, and machine learning.

The work [9] considers the numerical implementation of mathematical models in medicine. An analysis of the solutions for the personalization of mathematical models in problems of clinical cardiology, including a virtual assessment of the hemodynamic significance of coronary artery stenoses, an assessment of changes in the systemic blood flow after the hemodynamic correction of complex heart defects, and the calculation of the coaptation characteristics of the reconstructed aortic valve, is given.

In [10], the preconditioner of the block incomplete inverse triangular first-order decomposition “by value,” BIIC-IC1, is considered for solving an SLAE with a symmetric positive definite matrix. The BIIC-IC1 preconditioner is constructed and inverted using the MPI+OpenMP technology, while the number of blocks in the preconditioner is a multiple of the number of processors and threads used. The works [11, 12] study the creation of specialized methods for the implementation of mathematical models.

The complex shape of the computational domain and the need to develop difference schemes that adapt to the solution significantly complicate the computation process. "Extended geometry" (the characteristic dimensions of the computational domain in the horizontal directions significantly exceed its vertical dimension) is a typical property of a shallow water body or a coastal system. The numerical implementation of hydrodynamic models in domains of this shape produces, upon discretization, grid equations of a specific type; as a result, specialized methods for their solution need to be developed, since classical methods are poorly suited for such problems. In [13], to solve the transport problem in a shallow water body, it was proposed to use an explicit-implicit scheme (an explicit scheme in the horizontal directions and a symmetric scheme with weights in the vertical direction), which makes it possible to solve the problem numerically 5‒30 times faster than with a purely explicit scheme. The transition between time layers can be considered as an iterative process for solving the steady-state diffusion-convection problem. This idea formed the basis of the preconditioner in the proposed method for solving grid equations obtained by approximating hydrodynamic problems in domains with an extended geometry.

1 PROBLEM STATEMENT

The system of Navier–Stokes equations and the continuity equation with the corresponding initial and boundary conditions are used to describe hydrodynamic problems [14, 15]. In time, the hydrodynamic problem can be approximated based on schemes of splitting with respect to physical processes [16]. In this case, the most computationally expensive part is the calculation of the pressure field from the Poisson equation. The use of regularizers [5] in the continuity equation entails a change in the method of calculating the pressure field, which is now computed from a wave equation of the form [17]:

$$\frac{1}{{{{c}^{2}}}}P_{{tt}}^{{''}} - \Delta P = f,$$
(1)

where Δ is the Laplace operator (Laplacian), c is the speed of sound, and f is a given function.

It follows from this approach that pressure cannot propagate faster than the shock front (in the linear approximation, the speed of sound). If we do not take into account the collision time between molecules in the continuity equation, we obtain the Poisson equation [18]. Applying the Gauss–Ostrogradsky theorem to this equation yields an instantaneous propagation of the pressure field from the sources of the field to the sinks, which is not physical. The wave-equation approach is also less laborious from a computational point of view: the discrete analog of the wave equation has diagonal dominance, in contrast to the discrete analog of the Poisson equation, as a result of which the condition number of the operator, which affects the rate of convergence of the iterative method, decreases. It should also be noted that when the pressure is calculated from the Poisson equation with boundary conditions in the Neumann form, the determinant of the system is zero, and the problem of the existence of a solution of the grid equations obtained in the process of discretization arises. When regularizers [5] are used, there is no such problem.

For Eq. (1), we set the initial conditions in the following form:

$$P{\kern 1pt} {{|}_{{t = 0}}}\; = {{P}_{0}},\quad P_{t}^{'}{\kern 1pt} {{|}_{{t = 0}}}\; = 0.$$
(2)

If the pressure field is known, then we will use the boundary conditions of the first kind:

$${{\left. P \right|}_{{(x,y,z) \in \gamma }}} = {{P}_{1}}.$$

If the flow through the boundary is known, then we will use the boundary conditions of the second (for \({{\alpha }_{{\mathbf{n}}}} = 0\)) or the third kind (when \({{\alpha }_{{\mathbf{n}}}} > 0\)):

$$P_{{\mathbf{n}}}^{'}{\kern 1pt} {{|}_{{(x,y,z) \in \gamma }}}\; = {{\alpha }_{{\mathbf{n}}}}P + {{\beta }_{{\mathbf{n}}}},$$
(3)

where \({\mathbf{n}}\) is the normal directed inside the computational domain, \(\gamma \) is the boundary of the computational domain, and \(\alpha = \{ {{\alpha }_{x}},{{\alpha }_{y}},{{\alpha }_{z}}\} \), \(\beta = \{ {{\beta }_{x}},{{\beta }_{y}},{{\beta }_{z}}\} \) are the given vectors.

Problem (1)−(3) is considered in a region Г whose linear dimensions along the vertical are significantly smaller than those along the horizontal coordinate directions.

We will assume that the computational domain Г is inscribed in a parallelepiped, which we will cover with a uniform computational grid:

$${{\bar {w}}_{h}} = \{ {{t}^{n}} = n\tau ,\quad {{x}_{i}} = i{{h}_{x}},\quad {{y}_{j}} = j{{h}_{y}},\quad {{z}_{k}} = k{{h}_{z}};$$
$$n = \overline {0,{{N}_{t}}} ,\quad i = \overline {0,{{N}_{x}} - 1} ,\quad j = \overline {0,{{N}_{y}} - 1} ,\quad k = \overline {0,{{N}_{z}} - 1} ;$$
$${{N}_{t}}\tau = T,\quad ({{N}_{x}} - 1){{h}_{x}} = {{l}_{x}},\quad ({{N}_{y}} - 1){{h}_{y}} = {{l}_{y}},\quad ({{N}_{z}} - 1){{h}_{z}} = {{l}_{z}}\} ,$$

where \(\tau \) is the time step; \({{h}_{x}}\), \({{h}_{y}}\), and \({{h}_{z}}\) are the steps in space; \({{N}_{t}}\) is the number of time layers; T is the upper time limit; \({{N}_{x}}\), \({{N}_{y}}\), and \({{N}_{z}}\) are the numbers of nodes in space; and \({{l}_{x}}\), \({{l}_{y}}\), and \({{l}_{z}}\) are the dimensions of the domain in space.

To describe the geometry of the computational domain in the discrete case, we introduce the parameter \({{o}_{{i,j,k}}} = {{V}_{{{{\Omega }_{{i,j,k}}}}}}{\text{/}}{{V}_{{{{D}_{{i,j,k}}}}}}\), which describes the fullness of the cell \((i,j,k)\); here \({{V}_{{{{\Omega }_{{i,j,k}}}}}}\) is the volume of the cell filled with the medium, and \({{V}_{{{{D}_{{i,j,k}}}}}} = {{h}_{x}}{{h}_{y}}{{h}_{z}}\) is the total volume of the cell. To construct a difference scheme, we also need the coefficients \({{q}_{i}}\), \(i = \overline {0,6} \), describing the degree to which the control volumes are filled [19]. The difference scheme for Eq. (1) with boundary conditions (3) can be written as

$${{q}_{{0,i,j,k}}}\frac{{P_{{i,j,k}}^{{n + 1}} - 2P_{{i,j,k}}^{n} + P_{{i,j,k}}^{{n - 1}}}}{{{{{(c\tau )}}^{2}}}} - {{q}_{{1,i,j,k}}}\frac{{P_{{i + 1,j,k}}^{{n + 1}} - P_{{i,j,k}}^{{n + 1}}}}{{h_{x}^{2}}} + {{q}_{{2,i,j,k}}}\frac{{P_{{i,j,k}}^{{n + 1}} - P_{{i - 1,j,k}}^{{n + 1}}}}{{h_{x}^{2}}}$$
$$\begin{gathered} + \;\left| {{{q}_{{1,i,j,k}}} - {{q}_{{2,i,j,k}}}} \right|\frac{{{{\alpha }_{x}}P_{{i,j,k}}^{{n + 1}} + {{\beta }_{x}}}}{{{{h}_{x}}}} - {{q}_{{3,i,j,k}}}\frac{{P_{{i,j + 1,k}}^{{n + 1}} - P_{{i,j,k}}^{{n + 1}}}}{{h_{y}^{2}}} + {{q}_{{4,i,j,k}}}\frac{{P_{{i,j,k}}^{{n + 1}} - P_{{i,j - 1,k}}^{{n + 1}}}}{{h_{y}^{2}}} \\ + \;\left| {{{q}_{{3,i,j,k}}} - {{q}_{{4,i,j,k}}}} \right|\frac{{{{\alpha }_{y}}P_{{i,j,k}}^{{n + 1}} + {{\beta }_{y}}}}{{{{h}_{y}}}} - {{q}_{{5,i,j,k}}}\frac{{P_{{i,j,k + 1}}^{{n + 1}} - P_{{i,j,k}}^{{n + 1}}}}{{h_{z}^{2}}} + {{q}_{{6,i,j,k}}}\frac{{P_{{i,j,k}}^{{n + 1}} - P_{{i,j,k - 1}}^{{n + 1}}}}{{h_{z}^{2}}} \\ \end{gathered} $$
(4)
$$ + \;\left| {{{q}_{{5,i,j,k}}} - {{q}_{{6,i,j,k}}}} \right|\frac{{{{\alpha }_{z}}P_{{i,j,k}}^{{n + 1}} + {{\beta }_{z}}}}{{{{h}_{z}}}} = {{q}_{{0,i,j,k}}}{{f}_{{i,j,k}}}.$$

The difference scheme (4) approximates the wave equation (1) and incorporates information about the boundary conditions, which can differ for each of the spatial directions, and about the geometry of the computational domain, which can have a complex, dynamically varying shape.

When discretizing models of mathematical physics, especially hydrodynamics, we obtain a system of equations. Each equation of the system can be represented in canonical form; we will use a seven-point stencil (Fig. 1):

$$A({{m}_{0}})P({{m}_{0}}) - \sum\limits_{r = 1}^6 {B({{m}_{0}},{{m}_{r}})P({{m}_{r}})} = F({{m}_{0}}),$$

where \({{m}_{0}}({{x}_{i}},{{y}_{j}},{{z}_{k}})\) is the center of the stencil, \(M{\kern 1pt} '(P) = \{ {{m}_{1}}({{x}_{{i + 1}}},{{y}_{j}},{{z}_{k}})\), \({{m}_{2}}({{x}_{{i - 1}}},{{y}_{j}},{{z}_{k}})\), \({{m}_{3}}({{x}_{i}},{{y}_{{j{\text{ + }}1}}},{{z}_{k}})\), \({{m}_{4}}({{x}_{i}},{{y}_{{j - 1}}},{{z}_{k}})\), \({{m}_{5}}({{x}_{i}},{{y}_{j}},{{z}_{{k + 1}}})\), \({{m}_{6}}({{x}_{i}},{{y}_{j}},{{z}_{{k - 1}}})\} \) is the neighborhood of the center, \(A \equiv A({{m}_{0}})\) is the coefficient of the center of the stencil, \({{B}_{r}} \equiv B({{m}_{0}},{{m}_{r}})\), \(r \in \overline {1,6} \), are the coefficients of the neighborhood of the center of the stencil, \(F\) is the vector of right-hand sides, and \(P\) is the vector of unknowns.

Fig. 1. Seven-point stencil for solving grid equations.

In this formulation of the problem, the coefficients of the grid equations and the right-hand side take the form

$${{B}_{{1,i,j,k}}} = {{q}_{{1,i,j,k}}}{\text{/}}h_{x}^{2},\quad {{B}_{{2,i,j,k}}} = {{q}_{{2,i,j,k}}}{\text{/}}h_{x}^{2},\quad {{B}_{{3,i,j,k}}} = {{q}_{{3,i,j,k}}}{\text{/}}h_{y}^{2},$$
$${{B}_{{4,i,j,k}}} = {{q}_{{4,i,j,k}}}{\text{/}}h_{y}^{2},\quad {{B}_{{5,i,j,k}}} = {{q}_{{5,i,j,k}}}{\text{/}}h_{z}^{2},\quad {{B}_{{6,i,j,k}}} = {{q}_{{6,i,j,k}}}{\text{/}}h_{z}^{2},$$
$${{A}_{{i,j,k}}} = \frac{{{{q}_{{0,i,j,k}}}}}{{{{{(c\tau )}}^{2}}}} + \sum\limits_{r = 1}^6 {{{B}_{{r,i,j,k}}}} + \frac{{\left| {{{q}_{{1,i,j,k}}} - {{q}_{{2,i,j,k}}}} \right|{{\alpha }_{x}}}}{{{{h}_{x}}}} + \frac{{\left| {{{q}_{{3,i,j,k}}} - {{q}_{{4,i,j,k}}}} \right|{{\alpha }_{y}}}}{{{{h}_{y}}}} + \frac{{\left| {{{q}_{{5,i,j,k}}} - {{q}_{{6,i,j,k}}}} \right|{{\alpha }_{z}}}}{{{{h}_{z}}}},$$
$${{F}_{{i,j,k}}} = {{q}_{{0,i,j,k}}}\left( {{{f}_{{i,j,k}}} + \frac{{2P_{{i,j,k}}^{n} - P_{{i,j,k}}^{{n - 1}}}}{{{{{(c\tau )}}^{2}}}}} \right) - \frac{{\left| {{{q}_{{1,i,j,k}}} - {{q}_{{2,i,j,k}}}} \right|{{\beta }_{x}}}}{{{{h}_{x}}}} - \frac{{\left| {{{q}_{{3,i,j,k}}} - {{q}_{{4,i,j,k}}}} \right|{{\beta }_{y}}}}{{{{h}_{y}}}} - \frac{{\left| {{{q}_{{5,i,j,k}}} - {{q}_{{6,i,j,k}}}} \right|{{\beta }_{z}}}}{{{{h}_{z}}}}.$$

The transition from the 3D representation of a mesh node \((i,j,k)\) to a one-dimensional node number is carried out according to the following formula:

$${{m}_{0}} = k + j{{N}_{z}} + i{{N}_{z}}{{N}_{y}}.$$

The numbers of the nodes in the neighborhood of the center of the stencil \({{m}_{i}}\), \(i = \overline {1,6} \), are calculated by the formulas

$$\begin{gathered} {{m}_{1}} = {{m}_{0}} + {{N}_{z}}{{N}_{y}},\quad {{m}_{2}} = {{m}_{0}} - {{N}_{z}}{{N}_{y}},\quad {{m}_{3}} = {{m}_{0}} + {{N}_{z}}, \\ {{m}_{4}} = {{m}_{0}} - {{N}_{z}},\quad {{m}_{5}} = {{m}_{0}} + 1,\quad {{m}_{6}} = {{m}_{0}} - 1. \\ \end{gathered} $$
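The index mapping above is straightforward to implement. The following minimal Python sketch (the function names are ours, for illustration only) computes the node number and the numbers of its stencil neighbors for an interior node; at boundary nodes, the fullness coefficients q_r switch off the missing neighbors:

```python
def node_index(i, j, k, Ny, Nz):
    """Linear node number m0 = k + j*Nz + i*Nz*Ny of grid node (i, j, k)."""
    return k + j * Nz + i * Nz * Ny

def stencil_neighbors(m0, Ny, Nz):
    """Linear numbers m1..m6 of the seven-point stencil neighbors of m0."""
    return (m0 + Nz * Ny, m0 - Nz * Ny,  # m1, m2: i +/- 1
            m0 + Nz, m0 - Nz,            # m3, m4: j +/- 1
            m0 + 1, m0 - 1)              # m5, m6: k +/- 1
```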

2 METHOD FOR SOLVING GRID EQUATIONS

We write the grid equation (4) in the matrix form

$$Ax = f,$$
(5)

where A is a linear, self-adjoint, positive definite operator \((A = A^* > 0)\). We represent the operator A as

$$A = D + {{\Lambda }_{x}} + {{\Lambda }_{y}} + {{\Lambda }_{z}},$$
(6)

where D is the diagonal operator; \({{\Lambda }_{x}} = ({\text{1/}}h_{x}^{2}){{\Lambda }_{{D,x}}} \otimes {{E}_{y}} \otimes {{E}_{z}}\), \({{\Lambda }_{y}} = ({\text{1/}}h_{y}^{2}){{E}_{x}} \otimes {{\Lambda }_{{D,y}}} \otimes {{E}_{z}}\), and \({{\Lambda }_{z}} = ({\text{1/}}h_{z}^{2}){{E}_{x}} \otimes {{E}_{y}} \otimes {{\Lambda }_{{D,z}}}\); \( \otimes \) denotes the Kronecker product; \({{E}_{x}},{{E}_{y}},{{E}_{z}}\) are identity matrices with dimensions \({{N}_{x}} \times {{N}_{x}}\), \({{N}_{y}} \times {{N}_{y}}\), and \({{N}_{z}} \times {{N}_{z}}\), respectively; and \({{\Lambda }_{{D,x}}}\), \({{\Lambda }_{{D,y}}}\), \({{\Lambda }_{{D,z}}}\) are square tridiagonal matrices containing the coefficients of the grid equations for the terms describing the finite differences for the second-order partial derivatives along the corresponding axes.

Operator \({{\Lambda }_{{D,x}}}\) in the case of boundary conditions of the first kind will be written in the form

$${{({{\Lambda }_{{D,x}}})}_{{i,j}}} = \left\{ \begin{gathered} 2,\quad \;\,{\text{if}}\quad i = j; \hfill \\ - 1,\quad {\text{if}}\quad \left| {i - j} \right| = 1; \hfill \\ 0,\quad \;\,{\text{if}}\quad \left| {i - j} \right| > 1; \hfill \\ \end{gathered} \right.\quad {{\Lambda }_{{D,x}}} = \left( {\begin{array}{*{20}{c}} 2&{ - 1}&0&{...} \\ { - 1}&2&{ - 1}&{...} \\ 0&{ - 1}&2&{...} \\ {...}&{...}&{...}&{...} \end{array}} \right).$$
(7)

Operator \({{\Lambda }_{{D,x}}}\) in the case of boundary conditions of the second kind (\({{\alpha }_{x}} = 0\)) and third kind (\({{\alpha }_{x}} > 0\)) is written in the following form:

$${{({{\Lambda }_{{D,x}}})}_{{i,j}}} = \left\{ \begin{gathered} 1{\text{ + }}{{\sigma }_{x}},\quad {\text{if}}\quad (i + j = 2) \cup (i + j = 2N); \hfill \\ 2,\quad \quad \,{\text{if}}\quad (i = j) \cap (2 < i + j < 2N); \hfill \\ - 1,\quad \;\;\,{\text{if}}\quad \left| {i - j} \right| = 1; \hfill \\ 0,\quad \quad {\kern 1pt} {\text{if}}\quad \left| {i - j} \right| > 1; \hfill \\ \end{gathered} \right.\quad {{\Lambda }_{{D,x}}} = \left( {\begin{array}{*{20}{c}} {1 + {{\sigma }_{x}}}&{ - 1}&0&{...} \\ { - 1}&2&{ - 1}&{...} \\ 0&{ - 1}&2&{...} \\ {...}&{...}&{...}&{...} \end{array}} \right).$$
(8)

Here, \({{\sigma }_{x}} = {{\alpha }_{x}}{{h}_{x}}\). Operators \({{\Lambda }_{{D,y}}}\), \({{\Lambda }_{{D,z}}}\) will be written analogously.
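For a uniform grid, the operators (7) and (8) and the Kronecker-product assembly (6) can be reproduced numerically. The sketch below uses dense NumPy matrices purely for illustration (a practical implementation stores only the stencil coefficients); the helper name lambda_d is ours:

```python
import numpy as np

def lambda_d(n, sigma=None):
    """Tridiagonal operator Lambda_D: Eq. (7) for sigma=None (first kind),
    Eq. (8) with corner entries 1 + sigma for the second/third kind."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    if sigma is not None:
        L[0, 0] = L[-1, -1] = 1.0 + sigma
    return L

# Assembly of the 3D operators by Kronecker products, as in Eq. (6)
Nx, Ny, Nz, hx, hy, hz = 4, 3, 3, 0.5, 0.5, 1.0
Ex, Ey, Ez = np.eye(Nx), np.eye(Ny), np.eye(Nz)
Lx = np.kron(np.kron(lambda_d(Nx), Ey), Ez) / hx**2
Ly = np.kron(np.kron(Ex, lambda_d(Ny)), Ez) / hy**2
Lz = np.kron(np.kron(Ex, Ey), lambda_d(Nz)) / hz**2
```

Since each factor acts along its own axis, the three operators commute pairwise, as expected for the decomposition (6).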

To find a solution to problem (5), we will use the implicit iterative process

$$B({{x}^{{n + 1}}} - {{x}^{n}}){\text{/}}{{\tau }_{{n + 1}}} + A{{x}^{n}} = f.$$
(9)

In Eq. (9), n is the iteration number, \({{\tau }_{{n + 1}}} > 0\) is the iterative parameter, and B is a preconditioner, whose inversion in (9) should be much simpler than the direct inversion of the original operator A in (5). In the case of a stationary (\({{\tau }_{{n{\text{ + }}1}}}\) = τ) iterative process, (9) can be represented as

$${{x}^{{n + 1}}} = {{x}^{n}} - \tau {{B}^{{ - 1}}}(A{{x}^{n}} - f).$$
(10)
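A minimal sketch of the stationary process (10), assuming dense NumPy matrices and a direct solve for the preconditioner step (for illustration only; in practice B is chosen so that the system Bw = r is cheap to solve):

```python
import numpy as np

def stationary_iteration(A, B, f, tau, x0, n_iter=500, tol=1e-12):
    """Stationary process (10): x^{n+1} = x^n - tau * B^{-1} (A x^n - f)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        r = A @ x - f                      # residual A x^n - f
        if np.linalg.norm(r) < tol:
            break
        x -= tau * np.linalg.solve(B, r)   # preconditioned correction step
    return x
```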

At each iteration, the resulting solution \({{x}^{n}}\) will differ from the exact solution x by the amount of the error \({{z}^{n}}\): \({{x}^{n}} = x + {{z}^{n}}\). Then Eq. (10) with respect to the error, taking into account (5), will be written as

$${{z}^{{n + 1}}} = {{z}^{n}} - \tau {{B}^{{ - 1}}}A{{z}^{n}}.$$

We replace the variables \({{z}^{n}} = {{B}^{{ - 1/2}}}{{y}^{n}}\) and multiply both sides of the last expression by \({{B}^{{1/2}}}\). As a result, we get \({{y}^{{n + 1}}} = {{y}^{n}} - \tau {{B}^{{ - 1/2}}}A{{B}^{{ - 1/2}}}{{y}^{n}}\). We assume \(C = {{B}^{{ - 1/2}}}A{{B}^{{ - 1/2}}}\), \(C = C* > 0\); then

$${{y}^{{n + 1}}} = (E - \tau C){{y}^{n}}.$$
(11)

The iterative process converges under the condition [3]

$$ - \rho E \leqslant E - \tau C \leqslant \rho E,\quad \rho < 1,$$
(12)

where ρ is the parameter describing the rate of convergence of the iterative method.

Inequality (12) can be represented as

$$((1 - \rho ){\text{/}}\tau )E \leqslant C \leqslant ((1 + \rho ){\text{/}}\tau )E.$$

Parameters \(\rho \) and \(\tau \) are found from the conditions \((1 - \rho ){\text{/}}\tau = {{\lambda }_{{\min }}}\), \((1 + \rho ){\text{/}}\tau = {{\lambda }_{{\max }}}\):

$$\tau = 2{\text{/}}({{\lambda }_{{\min }}} + {{\lambda }_{{\max }}}),\quad \rho = (\nu - 1){\text{/}}(\nu + 1),\quad \nu = {{\lambda }_{{\max }}}{\text{/}}{{\lambda }_{{\min }}}.$$
(13)

Here \({{\lambda }_{{\min }}}\) and \({{\lambda }_{{\max }}}\) are the minimum and maximum eigenvalues of the operator \(C\), and \(\nu \) is the condition number of the operator \(C\).
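Given the spectral bounds, the parameters (13) are computed directly; a small helper (the function name is ours) is:

```python
def optimal_parameters(lam_min, lam_max):
    """Optimal stationary parameters, Eq. (13):
    tau = 2/(lam_min + lam_max), nu = lam_max/lam_min, rho = (nu-1)/(nu+1)."""
    tau = 2.0 / (lam_min + lam_max)
    nu = lam_max / lam_min
    rho = (nu - 1.0) / (nu + 1.0)
    return tau, rho, nu
```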

We will form the preconditioner in the following form:

$$B = D + \omega \,{{D}_{{xy}}} + {{\Lambda }_{z}},\,$$
(14)

where Dxy is the operator composed of the diagonal part of the operator \({{\Lambda }_{{xy}}}\) = \({{\Lambda }_{x}} + {{\Lambda }_{y}}\), and ω > 0 is a parameter to be determined. Let us estimate the maximum eigenvalue of operator C taking into account \(C = {{B}^{{ - 1/2}}}A{{B}^{{ - 1/2}}}\) and \({{B}^{{ - 1/2}}}x = y\):

$${{\lambda }_{{\max }}}(C) = \mathop {\max }\limits_{\left| x \right| > 0} \frac{{(Cx,x)}}{{(x,x)}} = \mathop {\max }\limits_{\left| y \right| > 0} \frac{{({{B}^{{ - 1/2}}}Ay,{{B}^{{1/2}}}y)}}{{({{B}^{{1/2}}}y,{{B}^{{1/2}}}y)}} = \mathop {\max }\limits_{\left| y \right| > 0} \frac{{(Ay,y)}}{{(By,y)}}.$$

We write the resulting expression, taking into account expressions (6) and (14):

$${{\lambda }_{{\max }}}(C) = \mathop {\max }\limits_{\left| y \right| > 0} \frac{{((D + {{\Lambda }_{{xy}}} + {{\Lambda }_{z}})y,y)}}{{((D + \omega {{D}_{{xy}}} + {{\Lambda }_{z}})y,y)}} = 1 + \mathop {\max }\limits_{\left| y \right| > 0} \left( {\frac{{(({{\Lambda }_{{xy}}} - \omega {{D}_{{xy}}})y,y)}}{{((D + \omega {{D}_{{xy}}} + {{\Lambda }_{z}})y,y)}}} \right).$$

An estimate of the maximum eigenvalue follows from the inequalities \(\delta {{D}_{{xy}}} \leqslant {{\Lambda }_{{xy}}} \leqslant \Delta {{D}_{{xy}}}\), \(\delta \geqslant 0\), \(\Delta \leqslant 2\):

$$\begin{gathered} {{\lambda }_{{\max }}}(C) \leqslant \left[ \begin{gathered} 1 + \frac{{(\Delta - \omega )}}{{{{\lambda }_{{\max }}}(D_{{xy}}^{{ - 1/2}}BD_{{xy}}^{{ - 1/2}})}},\quad \omega \geqslant \Delta , \hfill \\ 1 + \frac{{(\Delta - \omega )}}{{{{\lambda }_{{\min }}}(D_{{xy}}^{{ - 1/2}}BD_{{xy}}^{{ - 1/2}})}},\quad \omega < \Delta , \hfill \\ \end{gathered} \right. \\ = \left[ \begin{gathered} \frac{{\Delta + {{\lambda }_{{\max }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}}{{\omega + {{\lambda }_{{\max }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}},\quad \omega \geqslant \Delta , \hfill \\ \frac{{\Delta + {{\lambda }_{{\min }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}}{{\omega + {{\lambda }_{{\min }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}},\quad \omega < \Delta {\text{ }}. \hfill \\ \end{gathered} \right. \\ \end{gathered} $$
(15)

We estimate the minimum eigenvalue

$${{\lambda }_{{\min }}}(C) \geqslant \left[ \begin{gathered} \frac{{\delta + {{\lambda }_{{\min }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}}{{\omega + {{\lambda }_{{\min }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}}\quad \omega \geqslant \delta , \hfill \\ \frac{{\delta + {{\lambda }_{{\max }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}}{{\omega + {{\lambda }_{{\max }}}(D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}})}},\quad \omega < \delta . \hfill \\ \end{gathered} \right.$$
(16)

We denote \(S = D_{{xy}}^{{ - 1}}D + D_{{xy}}^{{ - 1/2}}{{\Lambda }_{z}}D_{{xy}}^{{ - 1/2}}\); then the estimate of the condition number of operator C is written as

$$\nu (C) \leqslant \left[ \begin{gathered} \frac{{\Delta + {{\lambda }_{{\max }}}(S)}}{{\delta + {{\lambda }_{{\min }}}(S)}}\left( {1 - \frac{{{{\lambda }_{{\max }}}(S) - {{\lambda }_{{\min }}}(S)}}{{\omega + {{\lambda }_{{\max }}}(S)}}} \right),\quad \Delta < \omega , \hfill \\ \frac{{\Delta + {{\lambda }_{{\min }}}(S)}}{{\delta + {{\lambda }_{{\min }}}(S)}},\quad \delta \leqslant \omega \leqslant \Delta , \hfill \\ \frac{{\Delta + {{\lambda }_{{\min }}}(S)}}{{\delta + {{\lambda }_{{\max }}}(S)}}\left( {1 + \frac{{{{\lambda }_{{\max }}}(S) - {{\lambda }_{{\min }}}(S)}}{{\omega + {{\lambda }_{{\min }}}(S)}}} \right),\quad \omega < \delta . \hfill \\ \end{gathered} \right.$$
(17)

It follows from the resulting estimate that the condition number of operator C is minimal at \(\delta \leqslant \omega \leqslant \Delta \). In what follows, the parameter ω is taken equal to one. The operation of stationary iterative methods requires a priori information about the maximum and minimum eigenvalues of the operators. When the preconditioner is chosen in form (14) for an arbitrary geometry of the computational domain, the difficulty lies in estimating the minimum eigenvalue of the operator.

3 EIGENVALUES AND EIGENVECTORS OF THE OPERATOR \({{\Lambda }_{D}}\)

In [3], in order to find the eigenvalues of the operator of the second difference derivative \({{\Lambda }_{{\text{D}}}}\), boundary value problems are formulated, represented by the grid equation

$${{y}_{{\bar {x}x}}} + \lambda y = 0,\quad {{y}_{{\bar {x}x,i}}} = ({{y}_{{i + 1}}} - 2{{y}_{i}} + {{y}_{{i - 1}}}){\text{/}}{{h}^{2}};$$
(18)

in the case of boundary conditions of the first kind,

$${{y}_{0}} = {{y}_{N}} = 0;$$
(19)

in the case of boundary conditions of the second kind,

$$(2{\text{/}}h){{y}_{{x,0}}} + \lambda {{y}_{0}} = - (2{\text{/}}h){{y}_{{x,N}}} + \lambda {{y}_{N}} = 0;$$
(20)

and in a mixed boundary value problem (of the first and second kind),

$${{y}_{0}} = - (2{\text{/}}h){{y}_{{x,N}}} + \lambda {{y}_{N}} = 0.$$
(21)

When solving applied problems (for example, hydrodynamics), it becomes necessary to find the eigenvalues of the second-order difference derivative operator in the case of boundary conditions of the third kind.

3.1. The First Boundary Value Problem for Eigenvalues

It is known that trigonometric functions are eigenfunctions of the operator of the second difference derivative. There are N grid trigonometric functions satisfying the boundary conditions of the first kind. The vectors composed of these functions have the form

$${{{\mathbf{X}}}_{j}} = \sin \left( {{{\pi (i + 1)(j + 1)} \mathord{\left/ {\vphantom {{\pi (i + 1)(j + 1)} {(N + 1)}}} \right. \kern-0em} {(N + 1)}}} \right),\quad i = \overline {0,N - 1} ,\quad j = \overline {0,N - 1} .$$
(22)

We multiply the ith row of operator (7) by the vector \({{{\mathbf{{\rm X}}}}_{j}}\):

$$ - \sin \left( {\pi i(j + 1){\text{/}}(N + 1)} \right) + 2\sin \left( {\pi (i + 1)(j + 1){\text{/}}(N + 1)} \right) - \sin \left( {\pi (i + 2)(j + 1){\text{/}}(N + 1)} \right)$$
$$ = 2\left( {1 - \cos \left( {{{\pi (j + 1)} \mathord{\left/ {\vphantom {{\pi (j + 1)} {(N + 1)}}} \right. \kern-0em} {(N + 1)}}} \right)} \right)\sin \left( {{{\pi (i + 1)(j + 1)} \mathord{\left/ {\vphantom {{\pi (i + 1)(j + 1)} {(N + 1)}}} \right. \kern-0em} {(N + 1)}}} \right).$$

Therefore, the eigenvectors for operator (7) are vectors (22) and the eigenvalues are

$${{\lambda }_{j}} = 2\left( {1 - \cos \left( {{{\pi (j + 1)} \mathord{\left/ {\vphantom {{\pi (j + 1)} {(N + 1)}}} \right. \kern-0em} {(N + 1)}}} \right)} \right) = 4{{\sin }^{2}}\left( {{{\pi (j + 1)} \mathord{\left/ {\vphantom {{\pi (j + 1)} {(2(N + 1))}}} \right. \kern-0em} {(2(N + 1))}}} \right).$$
(23)

The minimum and maximum eigenvalues and the condition number of operator (7) are expressed as follows:

$${{\lambda }_{{\min }}} = 4{{\sin }^{2}}\left( {\frac{{\pi {\text{/}}2}}{{N + 1}}} \right),\quad {{\lambda }_{{\max }}} = 4{{\cos }^{2}}\left( {\frac{{\pi {\text{/}}2}}{{N + 1}}} \right),\quad \nu = {\text{co}}{{{\text{t}}}^{2}}\left( {\frac{{\pi {\text{/}}2}}{{N + 1}}} \right).$$
(24)
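These formulas are easy to verify numerically. The following sketch checks (23) and (24) against the eigenvalues of the matrix (7) computed by NumPy:

```python
import numpy as np

N = 8
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)      # operator (7)
j = np.arange(N)
lam = 4.0 * np.sin(np.pi * (j + 1) / (2 * (N + 1))) ** 2    # Eq. (23)
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(L))

# Extremal eigenvalues and condition number, Eq. (24)
lam_min = 4.0 * np.sin(np.pi / (2 * (N + 1))) ** 2
lam_max = 4.0 * np.cos(np.pi / (2 * (N + 1))) ** 2
assert np.isclose(lam_max / lam_min,
                  1.0 / np.tan(np.pi / (2 * (N + 1))) ** 2)
```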

3.2. The Second Boundary Value Problem for Eigenvalues

In the case of boundary conditions of the second kind, we introduce fictitious nodes –1 and N and require the fulfillment of the equalities on the boundaries

$$i = 0:\frac{{ - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}}}{{{{h}^{2}}}} = \frac{{ - {{y}_{{i + 1}}} + {{y}_{i}}}}{{{{h}^{2}}}}\quad {\text{and}}\quad i = N - 1:\frac{{ - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}}}{{{{h}^{2}}}} = \frac{{{{y}_{i}} - {{y}_{{i - 1}}}}}{{{{h}^{2}}}}.$$

Therefore, the trigonometric functions must satisfy the properties

$${{y}_{0}} = {{y}_{{ - 1}}},\quad {{y}_{N}} = {{y}_{{N - 1}}}.$$
(25)

Vectors composed of trigonometric functions satisfying (25) have the form

$${{{\rm X}}_{j}} = \cos \left( {\pi (i + 1{\text{/}}2)j{\text{/}}N} \right),\quad i = \overline {0,N - 1} ,\quad j = \overline {0,N - 1} .$$
(26)

We multiply the ith row of operator (8) with \(\sigma = 0\) by the vector \({{{\mathbf{{\rm X}}}}_{j}}\):

$$ - \cos (\pi (i - 1{\text{/}}2)j{\text{/}}N) + 2\cos (\pi (i + 1{\text{/}}2)j{\text{/}}N) - \cos (\pi (i + 3{\text{/}}2)j{\text{/}}N)$$
$$ = 2\left( {1 - \cos \left( {\pi j{\text{/}}N} \right)} \right)\cos \left( {\pi \left( {i + 1{\text{/}}2} \right)j{\text{/}}N} \right).$$

Consequently, the eigenvectors for operator (8) for \(\sigma = 0\) are vectors (26) and the eigenvalues are

$${{\lambda }_{j}} = 2(1 - \cos (\pi j{\text{/}}N)) = 4{{\sin }^{2}}(\pi j{\text{/}}(2N)),\quad j = \overline {0,N - 1} .$$
(27)
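As in the first boundary value problem, the spectrum (27) can be checked numerically against the matrix (8) with σ = 0 (dense NumPy matrices, for illustration only):

```python
import numpy as np

N = 8
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0                      # operator (8) with sigma = 0
j = np.arange(N)
lam = 4.0 * np.sin(np.pi * j / (2 * N)) ** 2   # Eq. (27)
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(L))
```

Note that the minimum eigenvalue here is zero (j = 0), which reflects the degeneracy of the pure Neumann problem mentioned above.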

3.3. The Third Boundary Value Problem for Eigenvalues

When forming the eigenvectors of operator (8) in the case of boundary conditions of the third kind (σ > 0), we use the eigenvectors obtained for boundary conditions of the first (22) and second (26) kind, as well as the fact that the operator ΛD for boundary conditions of the third kind (8) at σ = 1 coincides with operator (7) obtained for boundary conditions of the first kind.

We consider the system of vectors

$${{{\mathbf{{\rm X}}}}_{j}} = \cos \left( {\frac{{\pi (2i + {{a}_{j}} + 1)(j + {{a}_{j}})}}{{2(N + {{a}_{j}})}} - \frac{{\pi {{a}_{j}}}}{2}} \right),\quad i = \overline {0,N - 1} ,\quad j = \overline {0,N - 1} .$$
(28)

System (28) at \({{a}_{j}} = 1\) coincides with (22), and at \({{a}_{j}} = 0\), with (26). We introduce the fictitious nodes –1 and N and require the following equalities to hold on the boundaries:

$$i = 0:{{\left( { - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}} \right)} \mathord{\left/ {\vphantom {{\left( { - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}} \right)} {{{h}^{2}}}}} \right. \kern-0em} {{{h}^{2}}}} = {{\left( { - {{y}_{{i + 1}}} + (1 + \sigma ){{y}_{i}}} \right)} \mathord{\left/ {\vphantom {{\left( { - {{y}_{{i + 1}}} + (1 + \sigma ){{y}_{i}}} \right)} {{{h}^{2}}}}} \right. \kern-0em} {{{h}^{2}}}}\quad {\text{and}}$$
$$i = N - 1:{{\left( { - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}} \right)} \mathord{\left/ {\vphantom {{\left( { - {{y}_{{i + 1}}} + 2{{y}_{i}} - {{y}_{{i - 1}}}} \right)} {{{h}^{2}}}}} \right. \kern-0em} {{{h}^{2}}}} = {{\left( {(1 + \sigma ){{y}_{i}} - {{y}_{{i - 1}}}} \right)} \mathord{\left/ {\vphantom {{\left( {(1 + \sigma ){{y}_{i}} - {{y}_{{i - 1}}}} \right)} {{{h}^{2}}}}} \right. \kern-0em} {{{h}^{2}}}}.$$

Therefore, the trigonometric functions (28) must satisfy the properties

$$(1 - \sigma ){{y}_{0}} = {{y}_{{ - 1}}},\quad {{y}_{N}} = (1 - \sigma ){{y}_{{N - 1}}}.$$
(29)

We express \(\sigma \) through \({{a}_{j}}\) from (28) and (29) on the left boundary and obtain the dependence of \({{a}_{j}}\) on \(j\), given implicitly by the equation

$$f({{a}_{j}}) = 1 - \cos \left( {\pi \frac{{j + {{a}_{j}}}}{{N + {{a}_{j}}}}} \right) - \tan \left( {\frac{{\pi ({{a}_{j}} + 1)(j + {{a}_{j}})}}{{2(N + {{a}_{j}})}} - \frac{{\pi {{a}_{j}}}}{2}} \right)\sin \left( {\pi \frac{{j + {{a}_{j}}}}{{N + {{a}_{j}}}}} \right) = \sigma .$$
(30)

Figure 2 shows the dependence of f on \({{a}_{j}}\) for various j at N = 8. In the graph, the curves are arranged from left to right in order of decreasing j.

Fig. 2. Dependence of f on \({{a}_{j}}\) for different values of j at N = 8.

For a fixed \(\sigma = {\text{const}}\), the parameters \({{a}_{j}}\) can be computed numerically. The eigenvalues of operator (8) corresponding to eigenvectors (28) are, respectively,

$${{\lambda }_{j}} = 2\left( {1 - \cos \left( {\pi \frac{{j + {{a}_{j}}}}{{N + {{a}_{j}}}}} \right)} \right) = 4{{\sin }^{2}}\left( {\frac{\pi }{2}\frac{{j + {{a}_{j}}}}{{N + {{a}_{j}}}}} \right),\quad {{a}_{j}} \in \left[ {0,1} \right],\quad j = \overline {0,N - 1} .$$
(31)
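Since (30) defines \({{a}_{j}}\) only implicitly, in practice it can be found by a root-finding method. Below is a minimal Python sketch (the function names are illustrative, not from the paper) that locates \({{a}_{j}}\) by bisection, assuming, as Fig. 2 suggests, that f is monotone in \({{a}_{j}}\) on [0, 1], with f(0) = 0 (the Neumann limit) and f(1) = 1 (the Dirichlet limit), and then evaluates the eigenvalue by (31):

```python
import math

def f(a, j, N):
    """Left-hand side of Eq. (30) for a given value a_j = a."""
    phi = math.pi * (j + a) / (N + a)
    t = math.pi * (a + 1) * (j + a) / (2 * (N + a)) - math.pi * a / 2
    return 1 - math.cos(phi) - math.tan(t) * math.sin(phi)

def solve_a(sigma, j, N, tol=1e-12):
    """Bisection for a_j on [0, 1]; assumes f is monotone (cf. Fig. 2),
    with f(0) = 0 and f(1) = 1, so any sigma in (0, 1) is bracketed."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, j, N) < sigma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def eigenvalue(a, j, N):
    """Eigenvalue lambda_j from Eq. (31)."""
    return 4 * math.sin(math.pi * (j + a) / (2 * (N + a))) ** 2
```

For σ outside (0, 1) the bracket [0, 1] no longer applies, consistent with \({{a}_{j}} \in [0,1]\) in (31).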

4 VARIATIONAL OPTIMIZATION OF ITERATIVE METHODS

To calculate the parameter \({{\tau }_{{n + 1}}}\), we use the steepest descent method [3]. In the nonstationary case, we use the minimization criterion \((A{{z}^{{n + 1}}},{{z}^{{n + 1}}}) \to \min \) together with \({{z}^{{n + 1}}} = {{z}^{n}} - {{\tau }_{{n + 1}}}{{B}^{{ - 1}}}A{{z}^{n}}\):

$$(A{{z}^{{n + 1}}},{{z}^{{n + 1}}}) = ({{r}^{n}},{{z}^{n}}) - 2{{\tau }_{{n + 1}}}({{r}^{n}},{{w}^{n}}) + \tau _{{n + 1}}^{2}(A{{w}^{n}},{{w}^{n}}),$$
(32)

where \({{r}^{n}} = A{{z}^{n}}\) is the residual vector and \({{w}^{n}} = {{B}^{{ - 1}}}{{r}^{n}}\) is the correction vector.

The minimum of (32) is achieved at

$${{\tau }_{{n + 1}}} = \frac{{({{r}^{n}},{{w}^{n}})}}{{(A{{w}^{n}},{{w}^{n}})}}.$$
(33)

Expression (32), taking into account (33), takes the form

$$\left\| {{{z}^{{n + 1}}}} \right\|_{A}^{2} = {{\rho }^{2}}\left\| {{{z}^{n}}} \right\|_{A}^{2},\quad {{\rho }^{2}} = 1 - \frac{{{{{({{r}^{n}},{{w}^{n}})}}^{2}}}}{{({{r}^{n}},{{z}^{n}})(A{{w}^{n}},{{w}^{n}})}}.$$
(34)
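Identity (34) can be checked numerically: for symmetric positive definite A and B, one step of the method with the optimal parameter (33) reduces the squared A-norm of the error exactly by the factor \({{\rho }^{2}}\) from (34). A minimal pure-Python sketch with hypothetical 3 × 3 test data:

```python
def matvec(M, v):
    """Matrix-vector product for a dense matrix given as nested lists."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Hypothetical small SPD system matrix A and diagonal preconditioner B.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
Bdiag = [2.0, 3.0, 2.0]

z = [1.0, -2.0, 0.5]                        # current error z^n
r = matvec(A, z)                            # residual r^n = A z^n
w = [ri / bi for ri, bi in zip(r, Bdiag)]   # correction w^n = B^{-1} r^n
Aw = matvec(A, w)
tau = dot(r, w) / dot(Aw, w)                # optimal parameter (33)
z1 = [zi - tau * wi for zi, wi in zip(z, w)]

rho2 = 1.0 - dot(r, w) ** 2 / (dot(r, z) * dot(Aw, w))  # rho^2 from (34)
err_new = dot(matvec(A, z1), z1)            # ||z^{n+1}||_A^2
err_old = dot(r, z)                         # ||z^n||_A^2 = (A z^n, z^n)
```

The equality err_new = rho2 · err_old holds exactly (up to rounding) for any SPD pair A, B, since it follows from (32) and (33) by direct substitution.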

To estimate the convergence rate of the steepest descent method, we use the Kantorovich inequality [20], \(4xy \leqslant {{(ax + y{\text{/}}a)}^{2}}\):

$${{\rho }^{2}} \leqslant 1 - 4{{\left( {a\frac{{({{r}^{n}},{{z}^{n}})}}{{({{r}^{n}},{{w}^{n}})}} + \frac{{(A{{w}^{n}},{{w}^{n}})}}{{({{r}^{n}},{{w}^{n}})a}}} \right)}^{{ - 2}}}.$$
(35)

Taking into account the substitutions \({{y}^{n}} = {{A}^{1/2}}{{z}^{n}}\) and \({{z}^{n}} = {{B}^{{ - 1/2}}}{{y}^{n}}\), the expressions in the denominator of (35) take the form

$$\frac{{({{r}^{n}},{{z}^{n}})}}{{({{r}^{n}},{{w}^{n}})}} = \frac{{({{y}^{n}},{{y}^{n}})}}{{({{y}^{n}},C{{y}^{n}})}} = \frac{{(B{{z}^{n}},{{z}^{n}})}}{{(A{{z}^{n}},{{z}^{n}})}} \leqslant {{\left( {{{\lambda }_{{\min }}}(C)} \right)}^{{ - 1}}},\quad \frac{{(A{{w}^{n}},{{w}^{n}})}}{{(B{{w}^{n}},{{w}^{n}})}} \leqslant {{\lambda }_{{\max }}}(C).$$
(36)

Inequality (35), taking (36) and \(a = {{\lambda }_{{\min }}}\) into account, transforms into

$$\rho \leqslant \sqrt {1 - \frac{{4{{\lambda }_{{\max }}}{\text{/}}{{\lambda }_{{\min }}}}}{{{{{\left( {1 + {{\lambda }_{{\max }}}{\text{/}}{{\lambda }_{{\min }}}} \right)}}^{2}}}}} = \frac{{{{\lambda }_{{\max }}}{\text{/}}{{\lambda }_{{\min }}} - 1}}{{{{\lambda }_{{\max }}}{\text{/}}{{\lambda }_{{\min }}} + 1}} = \frac{{\nu - 1}}{{\nu + 1}},\quad \nu \leqslant \frac{{\Delta + {{\lambda }_{{\min }}}\left( S \right)}}{{\delta + {{\lambda }_{{\min }}}\left( S \right)}}.$$
(37)

We describe an algorithm for solving the grid equation (4) based on method (9), (14), and (33).

1. Calculate the residual vector from the equation \({{r}^{n}} = A{{x}^{n}} - F\).

2. Find the correction vector \({{w}^{n}}\) by the sweep method from the equation

$${{A}_{{i,j,k}}}w_{{i,j,k}}^{n} - {{B}_{{5,i,j,k}}}w_{{i,j,k + 1}}^{n} - {{B}_{{6,i,j,k}}}w_{{i,j,k - 1}}^{n} = r_{{i,j,k}}^{n}.$$

3. Calculate the iterative parameter \({{\tau }_{n}} = ({{r}^{n}},{{w}^{n}}){\text{/}}(A{{w}^{n}},{{w}^{n}})\).

4. Pass to the next iteration: \({{x}^{{n + 1}}} = {{x}^{n}} - {{\tau }_{n}}{{w}^{n}}\).
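The steps above can be sketched for a model two-dimensional Poisson problem with unit steps and homogeneous Dirichlet boundary conditions, where the tridiagonal preconditioner (the diagonal of A together with the vertical links, cf. the sweep equation above) is inverted column by column by the sweep (Thomas) method. This is a simplified illustration with a hypothetical small grid, not the production implementation:

```python
def thomas(a, b, c, d):
    """Sweep (Thomas) method for a tridiagonal system
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], with a[0] = c[-1] = 0."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

NX, NZ = 8, 8  # hypothetical small grid of interior nodes, unit steps

def apply_A(u):
    """Five-point difference Laplacian with homogeneous Dirichlet boundaries."""
    v = [[0.0] * NZ for _ in range(NX)]
    for i in range(NX):
        for k in range(NZ):
            s = 4.0 * u[i][k]
            if i > 0:
                s -= u[i - 1][k]
            if i < NX - 1:
                s -= u[i + 1][k]
            if k > 0:
                s -= u[i][k - 1]
            if k < NZ - 1:
                s -= u[i][k + 1]
            v[i][k] = s
    return v

def apply_Binv(r):
    """Tridiagonal preconditioner B: the diagonal of A plus the vertical (k)
    links, inverted column by column by the sweep method (step 2)."""
    lo = [0.0] + [-1.0] * (NZ - 1)
    di = [4.0] * NZ
    up = [-1.0] * (NZ - 1) + [0.0]
    return [thomas(lo, di, up, r[i]) for i in range(NX)]

def dot(u, v):
    return sum(u[i][k] * v[i][k] for i in range(NX) for k in range(NZ))

def solve(F, tol=1e-10, max_iter=5000):
    """Steepest descent with the tridiagonal preconditioner (steps 1-4)."""
    x = [[0.0] * NZ for _ in range(NX)]
    for _ in range(max_iter):
        Ax = apply_A(x)
        r = [[Ax[i][k] - F[i][k] for k in range(NZ)] for i in range(NX)]  # step 1
        if dot(r, r) ** 0.5 < tol:
            break
        w = apply_Binv(r)                  # step 2
        Aw = apply_A(w)
        tau = dot(r, w) / dot(Aw, w)       # step 3
        x = [[x[i][k] - tau * w[i][k] for k in range(NZ)] for i in range(NX)]  # step 4
    return x
```

With F built from a known discrete solution, the loop recovers that solution to the stopping tolerance.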

5 SOLVING TEST PROBLEMS BASED ON THE DEVELOPED METHOD FOR SOLVING GRID EQUATIONS WITH A TRIDIAGONAL PRECONDITIONER

In this section, we consider the solution of three test difference problems by the method of solving grid equations with a tridiagonal preconditioner. The first is the Dirichlet problem for the Poisson equation in a rectangle. The second is the difference Poisson equation with boundary conditions of the second and third kind in different directions. This choice of boundary conditions is motivated by the fact that, to calculate the pressure in problems of hydrodynamics, it is necessary to solve the Poisson equation with a boundary condition of the third kind specified on the surface of a reservoir; on solid walls, boundary conditions are specified in the Neumann form. The third problem is to solve the grid equations that arise when calculating the pressure in the three-dimensional problem of the hydrodynamics of the Sea of Azov. For the first two problems, the convergence of the proposed method for solving grid equations was studied both theoretically and practically (empirically); for the third problem, only practically.

5.1. The Dirichlet Problem for the Poisson Equation in a Rectangle

We illustrate the proposed method on the example of solving the Dirichlet difference problem for the Poisson equation in a rectangle \(\bar {G} = \{ 0 \leqslant x \leqslant {{l}_{x}}\), \(0 \leqslant z \leqslant {{l}_{z}}\} \):

$$\begin{gathered} {{u}_{{\bar {x}x}}} + {{u}_{{\bar {z}z}}} = - \varphi (x,z),\quad (x,z) \in \omega , \\ u(x,z) = g(x,z),\quad (x,z) \in \gamma , \\ \end{gathered} $$
(38)

on the grid \(\bar {\omega } = \{ {{x}_{{ij}}} = (i{{h}_{x}},j{{h}_{z}}) \in \bar {G},\;0 \leqslant i < {{N}_{x}},\;0 \leqslant j < {{N}_{z}},\;{{h}_{x}}({{N}_{x}} - 1) = {{l}_{x}},\;{{h}_{z}}({{N}_{z}} - 1) = {{l}_{z}}\} \) with boundary γ.

We estimate the number of iterations n needed to solve problem (38) by the method of solving grid equations with a tridiagonal preconditioner (14). To obtain the estimate \(\delta {{D}_{x}} \leqslant {{\Lambda }_{x}} \leqslant \Delta {{D}_{x}}\), the eigenvalues of the operator \(D_{x}^{{ - 1/2}}{{\Lambda }_{x}}D_{x}^{{ - 1/2}}\) are analyzed on the basis of the boundary value problem for the grid equation \({{y}_{{\bar {x}x}}} + \lambda Dy = 0\) with boundary conditions of the first kind, \(\delta \leqslant \lambda \leqslant \Delta \). The values \(\delta ,\Delta \) are written as follows:

$$\delta = 2{{\sin }^{2}}(\pi {\text{/}}(2{{N}_{x}} + 2)),\quad \Delta = 2{{\cos }^{2}}(\pi {\text{/}}(2{{N}_{x}} + 2)).$$
(39)

The minimum eigenvalue of the operator \(S = D_{x}^{{ - 1/2}}{{\Lambda }_{z}}D_{x}^{{ - 1/2}}\) follows from (24):

$${{\lambda }_{{\min }}}(S) = \frac{1}{2}{{({{h}_{x}}{\text{/}}{{h}_{z}})}^{2}}{{\sin }^{2}}(\pi {\text{/}}(2{{N}_{z}} + 2)).$$
(40)

An estimate of the condition number of the operator \(C = {{B}^{{ - 1/2}}}A{{B}^{{ - 1/2}}}\) is written as

$$\nu (C) \leqslant \frac{{\Delta + {{\lambda }_{{\min }}}(S)}}{{\delta + {{\lambda }_{{\min }}}(S)}}.$$
(41)

For parameter ρ describing the rate of convergence of the iterative method, we have the estimate

$$\rho \leqslant \frac{{\nu - 1}}{{\nu + 1}}.$$
(42)

An estimate of the number of iterations n to solve problem (38) takes the form

$$n \geqslant \frac{{\ln \left\| {{{z}_{0}}} \right\| - \ln \left\| {{{z}_{n}}} \right\|}}{{\ln (1{\text{/}}\rho )}}.$$
(43)
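Estimates (39)−(43) can be collected into a short routine that, for given grid dimensions and steps, returns the theoretical values of δ, Δ, λmin(S), ν, ρ, and the iteration count. This is a sketch assuming \(\left\| {{{z}_{0}}} \right\| = 1\) and a target error norm ε; the function name and argument list are illustrative:

```python
import math

def iteration_estimate(Nx, Nz, hx, hz, eps=1e-8):
    """Theoretical iteration count (43) for the Dirichlet problem (38),
    following the bounds (39)-(42)."""
    delta = 2 * math.sin(math.pi / (2 * Nx + 2)) ** 2               # (39)
    Delta = 2 * math.cos(math.pi / (2 * Nx + 2)) ** 2               # (39)
    lam_S = 0.5 * (hx / hz) ** 2 * math.sin(math.pi / (2 * Nz + 2)) ** 2  # (40)
    nu = (Delta + lam_S) / (delta + lam_S)                          # (41)
    rho = (nu - 1) / (nu + 1)                                       # (42)
    n = math.log(1 / eps) / math.log(1 / rho)                       # (43), ||z_0|| = 1
    return delta, Delta, lam_S, nu, rho, math.ceil(n)
```

Increasing the step ratio k = hx/hz enlarges λmin(S), which lowers ν and ρ and hence the predicted number of iterations, in agreement with the trend seen in Tables 1 and 2.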

Table 1 shows the results of a numerical experiment comparing n, the theoretical value, and \({{n}_{{{\text{pr}}}}}\), the practical value, of the number of iterations for solving problem (38) by the method of solving grid equations with a tridiagonal preconditioner (14) on a grid of size \({{N}_{x}} = {{N}_{z}}\) = 100 with the right-hand side \(\varphi (x,z) = \sin (\pi x{\text{/}}{{l}_{x}})\sin (\pi z{\text{/}}{{l}_{z}})\). In this case, the minimum and maximum eigenvalues of the operator \(D_{x}^{{ - 1/2}}{{\Lambda }_{x}}D_{x}^{{ - 1/2}}\) are δ = 5.035 × 10−4 and Δ = 1.999. As a result of solving the grid equation, the error norm decreased from \(\left\| {{{z}_{0}}} \right\| = 1\) to \(\left\| {{{z}_{n}}} \right\| = {{10}^{{ - 8}}}\). In the numerical experiment, the ratio \(k = {{h}_{x}}{\text{/}}{{h}_{z}}\) of the steps of the computational grid along the coordinates Ox and Oz was varied, with \({{h}_{z}} = 1\). Table 1 also gives theoretical estimates of the minimum eigenvalue of the operator \(S = D_{x}^{{ - 1/2}}{{\Lambda }_{z}}D_{x}^{{ - 1/2}}\), the condition number ν(C) of the operator C, the parameter ρ describing the convergence rate of the iterative method, and the number of iterations n.

Table 1.  Theoretical and practical value of the number of iterations in solving the Dirichlet problem for the Poisson equation in a rectangle by the proposed method

Table 2 shows the number of iterations calculated practically when solving the Dirichlet problem for the Poisson equation (38) in a rectangle using the proposed method for solving grid equations. When solving this problem, a sequence of refining grids with different ratios of the spatial steps of the computational grid was used.

Table 2. The number of iterations in solving the Dirichlet problem for the Poisson equation on a sequence of refining grids

From the data given in Tables 1 and 2, it can be seen that when using the proposed method, an increase in k leads to a significant reduction in the number of iterations.

5.2. Solution of the Difference Poisson Equation with Boundary Conditions of the Second and Third Kind in Different Directions

We assume that on grid \(\bar {\omega }\) it is required to find the solution of the Poisson equation (1) in the rectangle \(\bar {G} = \{ 0 \leqslant x \leqslant {{l}_{x}}\), \(0 \leqslant z \leqslant {{l}_{z}}\} \),

$$u_{{xx}}^{{''}} + u_{{zz}}^{{''}} = - \varphi (x,z),\quad (x,z) \in G,$$
(44)

satisfying the Neumann boundary conditions on the sides x = 0 and \(x = {{l}_{x}}\) at \(0 \leqslant z \leqslant {{l}_{z}}\):

$$u_{x}^{'} = 0,$$
(45)

and, on the sides \(z = 0\) and \(z = {{l}_{z}}\) at \(0 \leqslant x \leqslant {{l}_{x}}\), the boundary conditions of the third kind

$$u_{z}^{'}{\kern 1pt} {{|}_{{z = 0}}}\; = {{\alpha }_{z}}u,\quad - {\kern 1pt} u_{z}^{'}{\kern 1pt} {{|}_{{z = {{l}_{z}}}}}\; = {{\alpha }_{z}}u.$$
(46)

On the grid \(\bar {\omega } = \{ {{x}_{{ij}}} = (i{{h}_{x}},j{{h}_{z}}) \in \bar {G},\;0 \leqslant i < {{N}_{x}},\;0 \leqslant j < {{N}_{z}},\;{{h}_{x}}({{N}_{x}} - 1) = {{l}_{x}},\;{{h}_{z}}({{N}_{z}} - 1) = {{l}_{z}}\} \), problem (44)−(46) corresponds to the difference scheme

$$\begin{gathered} {{\Lambda }_{x}}u + {{\Lambda }_{z}}u = \varphi (x,z), \\ {{\Lambda }_{x}}u = \begin{cases} - 2{{u}_{x}}{\text{/}}{{h}_{x}}, & x = 0, \\ - {{u}_{{\bar {x}x}}}, & {{h}_{x}} \leqslant x \leqslant {{l}_{x}} - {{h}_{x}}, \\ 2{{u}_{{\bar {x}}}}{\text{/}}{{h}_{x}}, & x = {{l}_{x}}, \end{cases}\qquad {{\Lambda }_{z}}u = \begin{cases} 2( - {{u}_{z}} + {{\alpha }_{z}}u){\text{/}}{{h}_{z}}, & z = 0, \\ - {{u}_{{\bar {z}z}}}, & {{h}_{z}} \leqslant z \leqslant {{l}_{z}} - {{h}_{z}}, \\ 2({{u}_{{\bar {z}}}} + {{\alpha }_{z}}u){\text{/}}{{h}_{z}}, & z = {{l}_{z}}. \end{cases} \\ \end{gathered} $$
(47)

We estimate the number of iterations n needed to solve problem (47) by the method of solving grid equations with a tridiagonal preconditioner (14). For the Neumann problem, the minimum and maximum eigenvalues of the operator \(D_{x}^{{ - 1/2}}{{\Lambda }_{x}}D_{x}^{{ - 1/2}}\) are \(\delta = 0\) and \(\Delta = 2\). The minimum eigenvalue of the operator S follows from (31):

$${{\lambda }_{{\min }}}(S) = \frac{1}{2}{{\left( {\frac{{{{h}_{x}}}}{{{{h}_{z}}}}} \right)}^{2}}{{\sin }^{2}}\left( {\frac{\pi }{2}\frac{{{{a}_{0}}}}{{{{N}_{z}} + {{a}_{0}}}}} \right).$$
(48)

Parameter \({{a}_{0}}\) is found implicitly from the equation

$$1 - \cos \left( {\frac{{\pi {{a}_{0}}}}{{{{N}_{z}} + {{a}_{0}}}}} \right) - \tan \left( {\frac{{\pi {{a}_{0}}(1 + {{a}_{0}})}}{{2({{N}_{z}} + {{a}_{0}})}} - \frac{{\pi {{a}_{0}}}}{2}} \right)\sin \left( {\frac{{\pi {{a}_{0}}}}{{{{N}_{z}} + {{a}_{0}}}}} \right) = \sigma .$$
(49)

The parameters are calculated according to formulas (41)−(43): the estimate of the condition number of the operator C, the convergence rate ρ of the iterative method, and the theoretical estimate of the number of iterations n. Tables 3 and 4, respectively, show the results of the numerical experiment comparing n, the theoretical value, and npr, the practical value, of the number of iterations for solving problem (47) by the method with the tridiagonal preconditioner (14) on a grid of size \({{N}_{x}} = {{N}_{z}}\) = 100 with the right-hand side φ(x, z) = \(\sin (\pi x{\text{/}}{{l}_{x}})\sin (\pi z{\text{/}}{{l}_{z}})\). The parameters k and σ were varied. As a result of solving the grid equation, the error norm decreased from \(\left\| {{{z}_{0}}} \right\|\) = 1 to \(\left\| {{{z}_{n}}} \right\|\) = 10−8.
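The same chain of calculations can be sketched in Python for problem (47): \({{a}_{0}}\) is found from (49) by bisection (again assuming monotonicity of the left-hand side on [0, 1], cf. Fig. 2), \({{\lambda }_{{\min }}}(S)\) from (48), and then ν, ρ, and n from (41)−(43) with δ = 0 and Δ = 2. The function names are illustrative:

```python
import math

def f0(a, N):
    """Left-hand side of Eq. (49) for a given value a_0 = a."""
    phi = math.pi * a / (N + a)
    t = math.pi * a * (1 + a) / (2 * (N + a)) - math.pi * a / 2
    return 1 - math.cos(phi) - math.tan(t) * math.sin(phi)

def estimate(sigma, Nz, hx, hz, eps=1e-8):
    """Theoretical iteration count for problem (47), sigma in (0, 1)."""
    # Bisection for a_0 on [0, 1]: f0(0) = 0, f0(1) = 1, assumed monotone.
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f0(mid, Nz) < sigma:
            lo = mid
        else:
            hi = mid
    a0 = 0.5 * (lo + hi)
    lam_S = 0.5 * (hx / hz) ** 2 * math.sin(0.5 * math.pi * a0 / (Nz + a0)) ** 2  # (48)
    delta, Delta = 0.0, 2.0                     # Neumann direction
    nu = (Delta + lam_S) / (delta + lam_S)      # (41)
    rho = (nu - 1) / (nu + 1)                   # (42)
    n = math.log(1 / eps) / math.log(1 / rho)   # (43), ||z_0|| = 1
    return a0, lam_S, nu, rho, math.ceil(n)
```

Decreasing σ drives \({{a}_{0}}\) (and hence \({{\lambda }_{{\min }}}(S)\)) toward zero, so ν and the predicted number of iterations grow, in agreement with the trend in Tables 3 and 4.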

Table 3. Number of iterations calculated analytically
Table 4. Number of iterations calculated experimentally

From the data in Tables 3 and 4, it can be seen that the number of iterations increases with decreasing parameter σ.

5.3. Solution of Grid Equations Arising in the Calculation of Pressure for a Three-Dimensional Problem of Hydrodynamics of the Sea of Azov

The proposed method for solving grid equations is integrated into AZOV3D, a previously developed software package for modeling three-dimensional hydrophysical processes in shallow water bodies. The developed software module is used to solve the grid equations obtained by approximating the three-dimensional pressure calculation problem based on the wave equation. The AZOV3D software package is based on a mathematical model of the hydrodynamics of shallow water bodies, which includes three equations of motion (Navier–Stokes) and a continuity equation with regularizers [21, 22] in the case of variable density.

The computational domain matches the real dimensions of the studied water area of the Sea of Azov: 355 km long and 233 km wide. The maximum depth of the sea is 13.2 m; in the developed software package, 4 m above the surface is allocated for a possible rise in the water level. The spatial step is 1000 m horizontally and 50 cm vertically; thus, the ratio of the horizontal step to the vertical step is 2000. The dimensions of the computational grid are 255 × 350 × 35 nodes. To calculate the pressure, boundary conditions of the third kind are set on the surface of the Sea of Azov, and conditions of the second kind on all other parts of the boundary. Figure 3 shows plots of the norm of the residual vector obtained by solving the grid equations of the pressure calculation problem with the modified alternating triangular method (MATM) [3] and with the method for solving grid equations with a tridiagonal preconditioner. The Ox axis shows the iteration number n, and the Oy axis, the norm of the residual vector \({{\left\| r \right\|}_{C}}\).

Fig. 3. Dependence of the values of the uniform norm of the residual vector on the number of iterations: (1) calculation based on MATM; (2) based on the method of solving grid equations with a tridiagonal preconditioner.

It can be seen from Fig. 3 that, when the grid equations of form (8) are solved with the MATM, the norm of the residual vector barely decreases, whereas with the proposed method with a tridiagonal preconditioner the iterative process converges in less than 1000 iterations. When solving the pressure calculation problem, the pressure computed in the hydrostatic approximation can be used as the initial approximation. The hydrostatic pressure is obtained from a two-dimensional problem, which is much less computationally expensive than the three-dimensional one. Figure 4 shows the plots of the norm of the residual vector obtained by solving the grid equations of the pressure calculation problem with the MATM and with the method with a tridiagonal preconditioner, taking the hydrostatic approximation into account.

Fig. 4. Dependence of the uniform norm of the residual vector on the number of iterations: (1) calculation based on MATM; (2) based on the method of solving grid equations with a tridiagonal preconditioner.

Experimentally, it was found that, when the hydrostatic approximation is used for the initial pressure values, the methods for solving grid equations converge faster for the model of the hydrodynamics of a shallow reservoir that includes three equations of motion. The developed method for solving grid equations with a tridiagonal preconditioner converges in 10−15 iterations, while the MATM requires about 100 iterations to converge on the test problem under consideration.

CONCLUSIONS

For the numerical implementation of a spatial three-dimensional mathematical model of the hydrodynamics of shallow water bodies, including the equations of motion (Navier–Stokes) and the continuity equation with regularizers [5] in the case of variable density, a method for solving grid equations is proposed. The developed method is included in the previously created AZOV3D software package designed for modeling three-dimensional hydrophysical processes in shallow water bodies. A software module has been developed that can be used to solve grid equations obtained as a result of approximating a three-dimensional pressure calculation problem based on the wave equation.

When the grid equations are solved using the MATM, the residual norm barely decreases, while with the proposed method for solving grid equations with a tridiagonal preconditioner the iterative process converges in less than 1000 iterations. When solving the pressure calculation problem, the pressure calculated in the hydrostatic approximation based on a two-dimensional problem was used as the initial approximation. The hydrostatic pressure is a good initial approximation for the pressure obtained from the application of schemes of splitting into physical processes for a hydrodynamic model that includes three equations of motion. This has a significant effect on the convergence of the applied method.

The numerical experiment carried out with the developed software module made it possible to estimate the residual vector norm obtained by solving the grid equations of the pressure calculation problem based on the MATM and the method for solving grid equations with a tridiagonal preconditioner, taking into account the hydrostatic approximation. It has been experimentally established that when using the hydrostatic approximation as the initial pressure values, the methods for solving grid equations converge faster. For the convergence of the MATM when solving the test problem under consideration, it took about 100 iterations. The proposed method for solving grid equations with a tridiagonal preconditioner converges in 10–15 iterations. According to the specifics of the developed method, it is effective in solving problems of aquatic ecology in the case of a computational domain with an extended geometry.