1 INTRODUCTION

At present, the performance of computer systems is growing rapidly. For example, a performance of 200 PFLOPS was reached in the summer of 2018 (1 PFLOPS corresponds to \({{10}^{{15}}}\) floating-point operations per second; Summit, US). It is expected that the level of 1 EFLOPS will be surpassed within the next 2–3 years (1 EFLOPS = 1000 PFLOPS). Remarkably, a whole group of computer systems with performance of 10 PFLOPS and higher has appeared in some countries, for example, in Germany. Here, we mean systems whose architectures have been tested on real-world problems. It is possible that opportunities offered by quantum computers will be seriously discussed in 5–7 years.

Currently, computational resources available in Russia lag significantly behind those in the leading countries. However, due to the logic of scientific and technological progress and the geopolitical position of Russia, we can undoubtedly expect the rapid growth of the performance of our supercomputer facilities.

The development of expensive and energy-intensive computer systems of this type is justified by the opportunities they offer. Here, we mean not only scientific and engineering problems described by partial differential equations, but also problems of using big data, artificial intelligence, and the related requirements for public and corporate management [1, 2]. (It should be noted that even the seemingly fantastic performance of 1 EFLOPS will hardly cover the need for detailed simulation of many engineering problems. For example, a k-fold increase in the degree of detail of the description, with the related refinement of the approximating grid in each direction, will require, at best, taking the time approximation into account, an increase in computational resources proportional to \({{k}^{4}}\).)

For example, the factor model [3] used to describe complex processes relies on the search for eigenvalues of matrices of rather high dimension [4, 5].

In this work, we omit purely technical questions related to the creation of ultrahigh-performance computer systems (e.g., high power consumption) and avoid issues associated with the financial aspects of using such computing facilities. Instead, we address some problems that can be solved on an algorithmic basis.

2 LOGICAL SIMPLICITY IS THE MAIN REQUIREMENT FOR ALGORITHMS INTENDED FOR SUPERCOMPUTER CALCULATIONS

A seemingly paradoxical reality is that the number of problems whose solution requires resources comparable with the full performance of an ultrahigh-performance computing facility is rather small. This is explained by the fact that a large number of cores operating simultaneously interfere with each other like a crowd of people running through a narrow door.

An exception is problems relying on logically simple models associated, for example, with transport of particles and photons [6, 7]. Therefore, solving problems on systems with extramassive parallelism requires logically simple and efficient algorithms. Logically simple algorithms are important for systems based on processors (CPU) of traditional architecture, but this problem is especially acute for systems of hybrid architecture using graphics processing units as accelerators [8].

Unfortunately, the requirements for logical simplicity and computational efficiency are extremely rarely satisfied simultaneously in algorithms. To demonstrate this, we consider the solution of systems of parabolic equations, which describe mathematical models for many natural-science and industrial problems. Examples are the Navier–Stokes equations, magnetogasdynamics (MGD) equations with allowance for magnetic viscosity, the heat equation, diffusion equation, etc.

Such systems, for example, the heat equation

$$\frac{{\partial T}}{{\partial t}} = \operatorname{div} K{\kern 1pt} \left( {T,\bar {x}} \right)\operatorname{grad} T + Q{\text{,}}$$
(2.1)

where \(t~\) is time, \(\bar {x}\) is the spatial coordinate, \(K{\kern 1pt} \left( {T,~\bar {x}} \right)\) is the thermal conductivity, \(~T\) is the temperature, \(Q\) is a given source of heat, can be solved using two approaches, namely, explicit and implicit schemes [9].

Implicit schemes have good stability, and the admissible time step is determined only by the accuracy requirements of the approximation. However, advancing the solution requires inverting the corresponding matrices for the function values at the new time level, which is done by iterative methods. Note that similar methods can be used to invert matrices in factor models [3].

Fast-converging iterative methods involve a number of logical switchings [9]. When numerous cores are used simultaneously in computations, such switchings lead to a sharp reduction in the efficiency of parallel processing. (As a rough averaged critical number, we can take \({{10}^{4}}\) cores.) An exception is slow-converging, logically simple methods, for example, fixed-point iteration [9].
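As an illustration of such a logically simple method, the sketch below runs fixed-point (Jacobi) iteration on a model 1D Poisson problem: every node is updated independently from the previous iterate, with no data-dependent branching, which is exactly the property the text emphasizes. The problem, grid size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi (fixed-point) sweep for the 1D Poisson problem -u'' = f.

    Every interior node is updated from the previous iterate only, so
    there are no data-dependent branches: the kind of logical simplicity
    needed for extramassive parallelism, at the price of slow convergence.
    """
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u_new

# Toy problem: -u'' = 2 on [0, 1], u(0) = u(1) = 0, exact u(x) = x(1 - x).
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.full(n, 2.0)
u = np.zeros(n)
for _ in range(2000):          # slow but branch-free convergence
    u = jacobi_step(u, f, h)
err = np.max(np.abs(u - x * (1.0 - x)))
```

Each sweep is a single vectorized stencil update, so the method maps directly onto a many-core machine; the cost of this simplicity is the large number of sweeps needed.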

Thus, a detailed description of processes governed by parabolic equations, which requires a large number of spatial nodes (at present, up to \({{10}^{{10}}}\) nodes) and a corresponding number of cores, faces serious difficulties in the case of implicit schemes.

Explicit schemes are ideally suited for adaptation to the architecture of computer systems with extramassive parallelism. However, they impose severe stability constraints on the admissible time step. For first-order hyperbolic systems, this is the Courant condition \(\Delta t \lesssim h\), which seems physically justified, since a detailed spatial approximation should be accompanied by a detailed time approximation. For parabolic equations, however, a different condition is valid, namely,

$$\Delta t \lesssim {{h}^{2}}.$$
(2.2)

As a result, explicit schemes become unacceptable in the case of approximations on fine spatial grids.
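A minimal sketch of the difficulty: an explicit scheme for the 1D heat equation with constant conductivity is trivially parallel, but it remains stable only under restriction (2.2), here dt = h²/(2K) at the margin. All constants are illustrative.

```python
import numpy as np

def explicit_heat_step(T, K, dt, h):
    """One explicit Euler step for T_t = K T_xx (1D, constant K).

    Stable only under the parabolic restriction dt <= h**2 / (2 K),
    which is what makes explicit schemes prohibitive on fine grids.
    """
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + K * dt / h**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new

n, K = 101, 1.0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
T = np.sin(np.pi * x)                 # exact solution decays as exp(-pi**2 K t)
dt = 0.5 * h**2 / K                   # marginally stable step per (2.2)
steps = 400
for _ in range(steps):
    T = explicit_heat_step(T, K, dt, h)
exact = np.exp(-np.pi**2 * K * steps * dt) * np.sin(np.pi * x)
err = np.max(np.abs(T - exact))
```

Halving h here quadruples the number of steps needed to reach the same physical time, which is the scaling that makes fine-grid explicit computations unacceptable.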

Thus, in the case of using detailed approximations, which are required, for example, in direct simulation of turbulence, and for which, at first glance, ultrahigh-performance computer systems are well suited, both explicit and implicit schemes encounter serious difficulties.

However, before discussing a possible way out of this situation, we note another difficulty that can seriously complicate the use of ultrahigh-performance computer systems in the near future. This difficulty is associated with fault tolerance of computer systems when a large number of processors are used simultaneously [10, 11].

The fact is that, according to the law of large numbers, when the number of processors is huge, some of them continually fail. Moreover, the failure rate increases as the number of processors grows.

To replace the faulty processors, the computation process has to return to the last checkpoint. After replacing them, the computation resumes, starting at this checkpoint. However, if numerous processors are used in the computation, then the procedure of returning to a checkpoint will occur frequently. The estimates obtained in [12] show that, for computer systems with performance of several exaflops in the case of this processor replacement scheme, computations can hardly be executed (as expected, this performance will have been reached by the mid-2020s). This problem cannot be solved in principle by purely technical means.
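The effect can be illustrated with a deliberately crude checkpoint/restart model (an assumption for illustration, not the estimate of [12]): if individual failures are independent, the machine-level mean time between failures shrinks inversely with the processor count, and beyond some count the checkpointing and rework consume the entire runtime.

```python
def useful_fraction(n_proc, mtbf_single_h, checkpoint_h, interval_h):
    """Crude checkpoint/restart model (an illustrative assumption,
    not the estimate of reference [12]).

    Failures are taken as independent, so the machine-level mean time
    between failures shrinks as 1/n_proc.  Each interval pays the cost
    of writing a checkpoint, and each failure discards on average half
    an interval of work.
    """
    mtbf_system_h = mtbf_single_h / n_proc
    failures_per_interval = interval_h / mtbf_system_h
    lost_h = checkpoint_h + failures_per_interval * (interval_h / 2.0)
    return max(0.0, 1.0 - lost_h / interval_h)

# 10**4 processors: checkpointing is a modest overhead.
small = useful_fraction(1e4, mtbf_single_h=1e6, checkpoint_h=0.05, interval_h=1.0)
# 10**7 processors with the same per-unit reliability: no useful work remains.
large = useful_fraction(1e7, mtbf_single_h=1e6, checkpoint_h=0.05, interval_h=1.0)
```

Under these assumed numbers the useful fraction collapses from above 90% to zero purely because of the 1/n scaling of the system-level failure rate, which is why a purely technical fix cannot resolve the problem.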

Below, we consider an approach allowing the use of explicit schemes with a stability condition milder than (2.2). This approach is based on the relationship between the kinetic and gasdynamic descriptions of a continuous medium [13, 14] and has been successfully applied to the simulation of hydro- and gasdynamic problems [15]. An important point is that a parabolic system of equations is replaced by a hyperbolic one with a small parameter multiplying the second time derivative [15, 16]. In addition to the use of explicit schemes in simulation on computer systems with extramassive parallelism, this approach makes it possible to advance algorithmically in the problem of fault tolerance [10].

Let us describe this approach.

3 QUASI-GASDYNAMIC SYSTEM OF EQUATIONS

Let us derive the quasi-gasdynamic system (QGS) with the help of the following model, which describes the behavior of the single-particle distribution function \(f{\kern 1pt} \left( {t,\bar {x},\bar {\xi }} \right)\), where \(\bar {\xi }\) is the molecular velocity. The derivation procedure is presented in the one-dimensional case.

Assume that, at the time \({{t}^{j}},\) there is a locally Maxwellian distribution function \({{f}_{0}}{\kern 1pt} \left( {t,x,\xi } \right)\) that varies weakly over the mean free path \(l\):

$$f_{0}\left( t,x,\xi \right) = \frac{\rho \left( t,x \right)}{\left( 2\pi RT\left( t,x \right) \right)^{3/2}}\exp \left( -\frac{\left( \xi - u\left( t,x \right) \right)^{2}}{2RT} \right);$$
(3.1)

here, \(\rho \) is the density, \(u\) is the macroscopic velocity, \(T\) is the temperature, and \(R~\) is the gas constant.

Next, the gas particles move without collisions over a time interval \(\tau \), where \(\tau \) is the characteristic time between molecular collisions.

Finally, at the time \({{t}^{{j + 1}}} = {{t}^{j}} + \tau ,\) the gas particles are instantaneously maxwellized and the entire procedure is repeated.

Note that \(\tau \) is used as a time step because, in gas dynamics, it is meaningless to consider variations in macroscopic parameters over times smaller, in order of magnitude, than the characteristic time between molecular collisions. The condition that \({{f}_{0}}\) varies weakly over a distance of \(l\) is associated with this circumstance.

The distribution function \(f\) at the new time step \({{t}^{{j + 1}}}\) before Maxwellization is related to \({{f}_{0}}({{t}^{j}},x,\xi )\) by the formula

$$f({{t}^{{j + 1}}},x,\xi ) = {{f}_{0}}({{t}^{j}},x - \tau \xi ,\xi ).$$
(3.2)

Here, for simplicity, we assume that the gas moves in the absence of external force fields.
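The splitting just described, free streaming of the local Maxwellian over one collision time followed by instantaneous maxwellization, can be sketched with a discrete-velocity model. This is a minimal 1D dimensionless illustration (R = 1); the grids, the perturbation, and the value of \(\tau \) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# 1D discrete-velocity sketch of the two-stage kinetic procedure:
# stream the local Maxwellian along characteristics, Eq. (3.2),
# then re-maxwellize instantaneously by keeping only the moments.
nx, nv = 200, 64
x = np.linspace(0.0, 1.0, nx, endpoint=False)      # periodic spatial grid
xi = np.linspace(-6.0, 6.0, nv)                    # molecular velocities
dxi = xi[1] - xi[0]

def maxwellian(rho, u, T):
    """Local 1D Maxwellian f0(x, xi) built from the macroscopic fields."""
    return (rho[:, None] / np.sqrt(2.0 * np.pi * T[:, None])
            * np.exp(-(xi[None, :] - u[:, None]) ** 2 / (2.0 * T[:, None])))

def moments(f):
    """Density, velocity, and temperature recovered from f."""
    rho = f.sum(axis=1) * dxi
    u = (f * xi).sum(axis=1) * dxi / rho
    T = (f * (xi[None, :] - u[:, None]) ** 2).sum(axis=1) * dxi / rho
    return rho, u, T

def kinetic_step(rho, u, T, tau):
    """Eq. (3.2): f(t+tau, x, xi) = f0(t, x - tau*xi, xi), then maxwellize."""
    f0 = maxwellian(rho, u, T)
    f = np.empty_like(f0)
    for k in range(nv):        # periodic semi-Lagrangian shift per velocity
        f[:, k] = np.interp((x - tau * xi[k]) % 1.0, x, f0[:, k], period=1.0)
    return moments(f)          # instantaneous maxwellization

rho = 1.0 + 0.1 * np.sin(2.0 * np.pi * x)          # weak density perturbation
u = np.zeros(nx)
T = np.ones(nx)
mass0 = rho.sum()
for _ in range(20):
    rho, u, T = kinetic_step(rho, u, T, tau=1e-3)
mass_drift = abs(rho.sum() - mass0) / mass0
```

Each stage is branch-free and local, and the macroscopic fields evolve only through moments of the streamed distribution, mirroring the derivation that follows.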

Expression (3.2) is expanded in a Taylor series up to terms of order \(O\left( {{\text{K}}{{{\text{n}}}^{2}}} \right)\), where \({\text{Kn}}\) is the Knudsen number:

$$\frac{f^{j + 1} - f_{0}^{j}}{\tau } + \xi \frac{\partial f_{0}^{j}}{\partial x} = \frac{\partial }{\partial x}\frac{\tau }{2}\xi ^{2}\frac{\partial f_{0}^{j}}{\partial x}.$$
(3.3)

Expression (3.3) is multiplied by the summation invariants \(\varphi {\kern 1pt} \left( \xi \right) = (1,\xi ,~{{\xi }^{2}}{\text{/}}2)\), and the result is integrated with respect to all molecular velocities. Next, we take into account that

$$\int {f\varphi } {\kern 1pt} \left( \xi \right)d\xi = \int {{{f}_{0}}\varphi } {\kern 1pt} \left( \xi \right)d\xi {\kern 1pt} {\kern 1pt} .$$
(3.4)

The time differences between the values of the gasdynamic parameters \(\phi = \left( {\rho ,u,E} \right)\), where \(E\) is the total energy, are expanded up to \(O({\text{K}}{{{\text{n}}}^{2}})\). Finally, we obtain the quasi-gasdynamic (QGS) system of equations, which is presented, for simplicity, in the one-dimensional case:

$$\frac{{\partial \rho }}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}\rho }}{{\partial {{t}^{2}}}} + \frac{{\partial \rho u}}{{\partial x}} = \frac{\partial }{{\partial x}}\frac{\tau }{2}\frac{\partial }{{\partial x}}(\rho {{u}^{2}} + p),$$
(3.5)
$$\frac{{\partial \rho u}}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}\rho u}}{{\partial {{t}^{2}}}} + \frac{{\partial (\rho {{u}^{2}} + p)}}{{\partial x}} = \frac{\partial }{{\partial x}}\frac{\tau }{2}\frac{\partial }{{\partial x}}(\rho {{u}^{3}} + 3pu),$$
(3.6)
$$\frac{{\partial E}}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}E}}{{\partial {{t}^{2}}}} + \frac{\partial }{{\partial x}}(u(E + p)) = \frac{\partial }{{\partial x}}\frac{\tau }{2}\frac{\partial }{{\partial x}}({{u}^{2}}(E + 2p)),$$
(3.7)

where \(p\) is the pressure and \(\tau \) is chosen according to the elementary kinetic theory [17]:

$$\tau = 2\mu {\text{/}}p.$$
(3.8)

Here, \(\mu \) is the viscosity, which is determined theoretically or experimentally. In contrast to the conventional Navier–Stokes equations, the QGS system involves second time derivatives and the additional dissipative term \(\frac{{\partial w}}{{\partial x}}\) on the right-hand side of Eq. (3.5), which is a consequence of the mass conservation law, where

$$w = \frac{\tau }{2}\frac{\partial }{{\partial x}}(\rho {{u}^{2}} + p){\text{.}}$$
(3.9)

The QGS system differs from the Navier–Stokes equations by \(~O({\text{K}}{{{\text{n}}}^{2}})\) terms (of the second order of smallness in the Knudsen number). For example, the terms of Eq. (3.5) that are additional as compared with the continuity equation

$$\frac{{\partial \rho }}{{\partial t}} + \frac{{\partial \rho u}}{{\partial x}} = 0,$$
(3.10)

satisfy the relation

$$\frac{\tau }{2}\frac{{{{\partial }^{2}}\rho }}{{\partial {{t}^{2}}}} - \frac{\partial }{{\partial x}}\frac{\tau }{2}\frac{\partial }{{\partial x}}(\rho {{u}^{2}} + p) = O({\text{K}}{{{\text{n}}}^{2}}).$$
(3.11)

Note that the Navier–Stokes equations themselves are derived from the Boltzmann kinetic equation up to \(O({\text{K}}{{{\text{n}}}^{2}})\) terms.

Therefore, the QGS system can be used as an alternative to the Navier–Stokes equations [15, 18, 19].

A disadvantage of the QGS system is its cumbersomeness, which is manifested most pronouncedly in its MGD version [20, 21]. A simpler form preserving all the properties of QGS is its compact version, CQGS [22]. Computations and theoretical estimates show that CQGS hardly differs from the original QGS, as well as from the Navier–Stokes equations [23]. The CQGS system of equations is given by

$$\frac{{\partial \rho }}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}\rho }}{{\partial {{t}^{2}}}} + {\text{div}}(\rho (\bar {u} - \bar {W})) = 0,$$
(3.12)
$${{W}_{i}} = \frac{1}{\rho }\frac{\tau }{2}\frac{\partial }{{\partial x}}\left( {\rho {{u}_{i}}{{u}_{k}} + {{\delta }_{{ik}}}p} \right),$$
(3.13)
$$\frac{\partial \rho u_{i}}{\partial t} + \frac{\tau }{2}\frac{\partial ^{2}\rho u_{i}}{\partial t^{2}} + \frac{\partial \left( \rho u_{i}\left( u_{k} - W_{k} \right) + \delta _{ik}p \right)}{\partial x_{k}} = \operatorname{div} {{P}_{{\text{NS}}}}{\text{,}}$$
(3.14)
$$\frac{{\partial E}}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}E}}{{\partial {{t}^{2}}}} + \operatorname{div} ((\bar {u} - \bar {W})(E + p)) = \operatorname{div} q + \operatorname{div} {{P}_{{NS}}}u,$$
(3.15)

where \({{P}_{{{\text{NS}}}}}\) is the viscous stress tensor in the Navier–Stokes equations and \(~q\) is the heat flux vector with

$${{q}_{i}} = \chi \frac{{\partial T}}{{\partial {{x}_{i}}}};$$
(3.16)

here, \(\chi \) is the thermal conductivity.

4 NUMERICAL ALGORITHMS FOR THE CQGS SYSTEM

The spatial derivatives in CQGS system (3.12)–(3.16) can be approximated by applying the same algorithms that are used to approximate the gas dynamics equations. The additional velocity \({{W}_{i}}\) (3.13) can be approximated using algorithms for approximating the spatial derivatives in the momentum equation of the Euler system for an ideal gas. For example, a second-order accurate Godunov scheme was used to approximate spatial derivatives in [23].

A more interesting task is to obtain a time approximation that takes into account second derivatives and the hyperbolic nature of system (3.12)–(3.16). As a model, we use the hyperbolic heat equation, which is rather widely used in modeling plasma physics problems:

$$\frac{{\partial T}}{{\partial t}} + \tau {\text{*}}\frac{{{{\partial }^{2}}T}}{{\partial {{t}^{2}}}} = \operatorname{div} \chi \operatorname{grad} T + F.$$
(4.1)

Here, \(\tau {\text{*}}\) has the dimension of time and is determined by the condition

$$\left[ {\tau {\text{*}}\frac{{{{\partial }^{2}}T}}{{\partial {{t}^{2}}}}} \right] \ll \left[ {\frac{{\partial T}}{{\partial t}}} \right].$$
(4.2)

Equation (4.1) is approximated by the three-level scheme

$$\frac{{T_{i}^{{j + 1}} - T_{i}^{{j - 1}}}}{{2\Delta t}} + \tau {\text{*}}\frac{{T_{i}^{{j + 1}} - 2T_{i}^{j} + T_{i}^{{j - 1}}}}{{\Delta {{t}^{2}}}} = \operatorname{div} \varkappa \operatorname{grad} ~{{T}^{{j}}} + F.$$
(4.3)

Here, i is the cumulative index of the nodes of the spatial approximation and the right-hand side of (4.3) is determined by data from the time level t = \({{t}^{j}}\).

This scheme is explicit. Namely, the values at the level \({{t}^{{j + 1}}}\) are determined by known data at the times \({{t}^{j}}\) and \(~{{t}^{{j - 1}}}\). The time approximation is second-order accurate. Due to its explicit character, the scheme can be successfully adapted to the architecture of computer systems with extramassive parallelism.
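The three-level scheme (4.3) can be sketched as follows for the 1D hyperbolic heat equation with constant \(\chi \) and F = 0; \(\tau {\text{*}}\) is chosen as in (4.4) with a unit characteristic velocity, and the time step respects bound (4.5). All grid constants are illustrative.

```python
import numpy as np

def three_level_step(T_prev, T, dt, tau_star, chi, h):
    """One step of the three-level explicit scheme (4.3): values at
    t^{j+1} are obtained from known data at t^j and t^{j-1} only."""
    rhs = np.zeros_like(T)
    rhs[1:-1] = chi * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / h**2
    a = 1.0 / (2.0 * dt) + tau_star / dt**2      # coefficient of T^{j+1}
    T_next = (rhs + 2.0 * tau_star / dt**2 * T
              + (1.0 / (2.0 * dt) - tau_star / dt**2) * T_prev) / a
    T_next[0] = T_next[-1] = 0.0                 # Dirichlet ends
    return T_next

n, chi = 201, 1.0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
tau_star = h                       # tau* = h/c with c = 1, cf. (4.4)
dt = 0.25 * h**1.5                 # within the bound (4.5)
T_prev = np.sin(np.pi * x)         # decays roughly as exp(-pi**2 * chi * t)
T = T_prev.copy()                  # crude start: two equal time levels
steps = 200
for _ in range(steps):
    T_prev, T = T, three_level_step(T_prev, T, dt, tau_star, chi, h)
exact = np.exp(-np.pi**2 * chi * steps * dt) * np.sin(np.pi * x)
err = np.max(np.abs(T - exact))
```

The hyperbolization and the crude start from two equal levels introduce a small systematic deviation from the parabolic solution; initializing the two levels consistently reduces it. The update itself is a pure stencil operation, hence the easy adaptation to extramassive parallelism.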

The stability of scheme (4.3) depends on the choice of \(\tau {\text{*}}\). For example, if

$$\tau \text{*} = h{\text{/}}{c},$$
(4.4)

where h is the characteristic length of the spatial grid and c is the characteristic velocity of the process, the stability of scheme (4.3) is determined by the expression [24]

$$\Delta t \lesssim {{h}^{{3/2}}}.$$
(4.5)

Note that this choice of \(\tau {\text{*}}\) guarantees the fulfillment of condition (4.2).

The stability condition (4.5) is rather restrictive, but more acceptable than the stability condition (2.2) for explicit schemes designed for parabolic equations. This advantage in the admissible time step is especially pronounced in the case of fine spatial grids consisting of 1010 nodes and more for 3D problems [25]. At present, such grids become available for simulation on ultrahigh-performance computer systems.
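The gain can be quantified with elementary arithmetic (unit constants and a purely illustrative grid size assumed):

```python
def steps_to_time(t_final, h, exponent):
    """Number of explicit time steps needed to reach t_final when the
    admissible step is dt = h**exponent (unit constants assumed)."""
    return int(t_final / h**exponent)

h = 1.0 / 1024                                # ~10**3 nodes per direction
parabolic = steps_to_time(1.0, h, 2.0)        # bound (2.2): dt <~ h**2
hyperbolic = steps_to_time(1.0, h, 1.5)       # bound (4.5): dt <~ h**(3/2)
ratio = parabolic / hyperbolic                # roughly 32x fewer steps
```

The advantage grows as \({{h}^{{ - 1/2}}}\), so it becomes larger exactly on the fine grids for which ultrahigh-performance systems are intended.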

The new hyperbolic nature of system (3.12)–(3.16) opens up numerous opportunities for solving the evolution problem.

For example, the characteristic-conservative approach from [26] can be applied for its solution. A possible technique is to represent system (3.12)–(3.16) in the form

$$\frac{{\partial \bar {Q}}}{{\partial t}} + \frac{\tau }{2}\frac{{{{\partial }^{2}}\bar {Q}}}{{\partial {{t}^{2}}}} = \operatorname{div} {{\overline S }_{Q}},$$
(4.6)

where \(\bar {Q}\) is the vector of gasdynamic variables and \({{\bar {S}}_{Q}}\) is the flux formed by the terms under the spatial derivatives; for example, when \(Q\) is the density \(\rho \),

$$\bar{S}_{\rho } = \rho u_{i} - \frac{\tau }{2}\frac{\partial \left( \rho u_{i}u_{k} + \delta _{ik}p \right)}{\partial x_{k}}.$$
(4.7)

System (4.6) can be represented in the form of two equations:

$$\frac{{\partial \bar {Q}}}{{\partial t}} = \operatorname{div} {{\bar {\Phi }}_{Q}},$$
(4.8)
$$\tau {\text{*}}\frac{{\partial {{{\bar {\Phi }}}_{Q}}}}{{\partial t}} + {{\bar {\Phi }}_{Q}} = {{\bar {S}}_{Q}}.$$
(4.9)

System (4.8), (4.9) can be solved by applying explicit schemes, namely, using a characteristic scheme for the flux values \({{\Phi }_{Q}}\) localized on cell edges (faces in the 3D case):

$$\bar{\Phi }_{Q}^{j + 1} = \bar{\Phi }_{Q}^{j}\exp \left( { - \Delta t{\text{/}}\tau {\text{*}}} \right) + \bar{S}_{Q}^{j}\left( {1 - \exp \left( { - \Delta t{\text{/}}\tau {\text{*}}} \right)} \right)$$
(4.10)

and using a conservative scheme for \(\bar {Q}\):

$$\frac{{{{{\bar {Q}}}^{{j + 1}}} - {{{\bar {Q}}}^{j}}}}{{\Delta t}} = \operatorname{div} \bar {\Phi }_{Q}^{{j + 1}},$$
(4.11)

which allows us to determine \({{\bar {Q}}^{{j + 1}}}\) at the cell center (see Fig. 1). After finding \({{Q}^{{j + 1}}}\), we determine the fluxes \(\bar {S}_{Q}^{{j + 1}}\) on the cell edges (faces).
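A scalar sketch of the characteristic-conservative steps (4.10), (4.11), taking Q = T and \({{S}_{Q}} = \chi \operatorname{grad} T\) in one dimension. The flux update below uses the exact integrator of relaxation equation (4.9), written with the factor \(\exp ( - \Delta t{\text{/}}\tau {\text{*}})\); all constants are illustrative assumptions.

```python
import numpy as np

# Characteristic-conservative sketch of (4.8)-(4.11) for the scalar
# model Q = T with S_Q = chi * grad T (1D, Dirichlet ends).  Fluxes
# Phi live on cell edges, T at nodes, as in the scheme of Fig. 1.
n, chi = 201, 1.0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
tau_star = h                          # tau* = h/c with c = 1
dt = 0.25 * h**1.5                    # within the bound (4.5)
decay = np.exp(-dt / tau_star)        # exact integrator of (4.9)

T = np.sin(np.pi * x)
S = chi * np.diff(T) / h              # edge-centered flux target S_Q^j
Phi = S.copy()                        # start the flux in equilibrium

steps = 200
for _ in range(steps):
    Phi = Phi * decay + S * (1.0 - decay)     # characteristic step (4.10)
    T[1:-1] += dt * np.diff(Phi) / h          # conservative step (4.11)
    S = chi * np.diff(T) / h                  # new edge fluxes S_Q^{j+1}
exact = np.exp(-np.pi**2 * chi * steps * dt) * np.sin(np.pi * x)
err = np.max(np.abs(T - exact))
```

Both substeps are explicit and local: the flux relaxes along characteristics on the edges, and the conservative update at the centers guarantees the discrete balance of Q.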

Fig. 1. Conservative characteristic scheme.

Other approaches are also possible for solving the QGS and CQGS (3.12)–(3.16) systems.

As was noted above, numerical experiments show that the Navier–Stokes equations and the QGS and CQGS systems do not yield noticeable differences in numerical results.

However, it is of interest to theoretically study the behavior of solutions to the hyperbolized system of gas dynamics equations.

An analysis of solutions to a model parabolic equation and a hyperbolic equation with a small parameter multiplying the second time derivative shows that their difference is determined by the product of the square of this parameter and the norm of the second time derivative in the original equation [27]. This difference can be noticeable only in the case of high-frequency time variations in the solution, which are directly connected with short-wave variations in space. Note that these short-wave variations are not described in difference schemes for both parabolic and hyperbolic versions.

The well-posedness of the hyperbolic version of the gasdynamic equations was investigated in [28]. As in the case of the two-dimensional Navier–Stokes equations, the uniqueness and global-in-time existence of a solution to this system were proved.

Note also that the hyperbolic nature of the QGS and CQGS systems allows one to make progress in solving the fault tolerance problem, which is important for ultrahigh-performance computer systems. Specifically, by using the structure of the solution to hyperbolic equations, the replacement of faulty processors can be organized in principle without interrupting the computations on the basic processors (see [10, 11]).

5 QUASI-GASDYNAMIC MODEL FOR MAGNETOGASDYNAMICS

Let us examine the applicability of the kinetic approach for describing MHD and MGD problems. Since the derivation of these equations relies heavily on the Maxwell system, it seems at first glance that a model based directly on the single-particle distribution function is difficult to construct. Define the function [20, 21]

$${{f}_{{OM}}} = \frac{{\rho (t,\bar {x})}}{{{{{[2\pi RT(t,\bar {x})]}}^{{3/2}}}}}\exp \left[ {\frac{{ - {{{\left\{ {{{\xi }_{k}} - {{u}_{k}}(t,\bar {x}) - i\frac{{{{B}_{k}}(t,\bar {x})}}{{\sqrt {4\pi \rho } }}} \right\}}}^{2}}}}{{2RT}}} \right],\quad k = 1,2,3.$$
(5.1)

Here, \(\bar {B}\) is the magnetic field vector and i is the imaginary unit.

The choice of the function fOM in (5.1) for describing the total ensemble of charged and neutral particles and the magnetic field can be explained as follows. First, \(\frac{{\bar {B}}}{{\sqrt {4{\pi \rho }} }}\) is the Alfven velocity, which is characteristic for MGD. Second, complex variables provide a convenient tool for describing the behavior of charged particles in a magnetic field [29].

In any case, by using the moments of fOM, the gasdynamic parameters and the magnetic field can be expressed in perfect analogy with the kinetic theory of gases:

$$\rho (t,\bar {x}) = \operatorname{Re} \int {{{f}_{{OM}}}d\xi } ,$$
(5.2)
$$\bar {B}(t,\bar {x}) = \frac{1}{{\sqrt \rho }}\operatorname{Im} \int {\xi {\text{*}}} {{f}_{{OM}}}d\xi ,$$
(5.3)
$$\bar {u}(t,\bar {x}) = \frac{1}{\rho }\operatorname{Re} \int \xi {{f}_{{OM}}}d\xi ,$$
(5.4)
$$0 = \operatorname{Im} \int {{{f}_{{OM}}}d\xi } ,$$
(5.5)
$$E(t,\bar {x}) = \operatorname{Re} \int {\frac{1}{2}{{\xi }^{2}}} {{f}_{{OM}}}d\xi ,$$
(5.6)
$$p + \frac{{{{B}^{2}}}}{{8\pi }} = \operatorname{Re} \int {{{c}^{2}}} {{f}_{{OM}}}d\xi .$$
(5.7)

Here, \(\bar {\xi }{\text{*}}\) is the complex conjugate of the molecular velocity, which is related to \(\bar {\xi }\) by the simple linear relationship

$$\bar {\xi }\text{*} = \bar {c} + \bar {u} - i\frac{{\bar {B}}}{{\sqrt {4\pi \rho } }},$$
(5.8)
$$\bar {c} = \bar {\xi } - \bar {u} - i\frac{{\bar {B}}}{{\sqrt {4\pi \rho } }}.$$
(5.9)

Note that expression (5.5) implies that the divergence of the magnetic field is zero. Expression (5.7) for the sum of gas-kinetic and magnetic pressure is similar to the expression

$$p = \int {{{c}^{2}}} {{f}_{0}}d\xi $$
(5.10)

for the gasdynamic pressure, where f0 is defined by (3.1).

Consider a hypothetical transport equation for the function \(f(t,\bar {x},\bar {\xi })\):

$$\frac{{\partial f}}{{\partial t}} + {{\xi }_{k}}\frac{{\partial f}}{{\partial {{x}_{k}}}} = I\left( {f,f{\kern 1pt} '} \right).$$
(5.11)

Without specifying the form of the collision integral I, we assume that its moments

$$\int {I\varphi (\xi )} {\kern 1pt} {\kern 1pt} d\xi = 0$$
(5.12)

with summation invariants

$$\varphi (\xi ) = (1,\xi ,\xi \text{*},{{\xi }^{2}}{\text{/}}2)$$
(5.13)

vanish.

As in the case of deriving the Euler equations from the Boltzmann one, the kinetic equation (5.11) is sequentially multiplied by the summation invariants and the result is integrated over the space of molecular velocities \({\bar {\xi }}\):

$$\frac{{\partial \rho }}{{\partial t}} + \operatorname{div} \rho \bar {u} = 0,$$
(5.14)
$$\frac{\partial }{{\partial t}}\rho {{u}_{k}} + \frac{\partial }{{\partial {{x}_{p}}}}\left[ {\left( {P + \frac{{{{B}^{2}}}}{{8\pi }}} \right){{\delta }_{{kp}}} + \rho {{u}_{k}}{{u}_{p}} + {{B}_{k}}{{B}_{p}}} \right] = 0,$$
(5.15)
$$\frac{{\partial E}}{{\partial t}} + \frac{\partial }{{\partial {{x}_{k}}}}\left[ {{{u}_{k}}\left( {E + P + \frac{{{{B}^{2}}}}{{8\pi }}} \right)} \right] = 0,$$
(5.16)
$$\frac{{\partial {{B}_{k}}}}{{\partial t}} + \frac{\partial }{{\partial {{x}_{k}}}}[{{u}_{k}}{{B}_{p}} - {{u}_{p}}{{B}_{k}}] = 0,$$
(5.17)
$$\frac{{\partial {{B}_{k}}}}{{\partial {{x}_{k}}}} = 0.$$
(5.18)

Equations (5.14)–(5.18) are the ideal MGD equations. However, they were obtained using kinetic model (5.11) and the function fOM (5.1).

Following the approach described in Section 3, we can write a balance equation based on the function fOM [20, 30]:

$$\frac{{{{f}^{{j + 1}}} - f_{{OM}}^{j}}}{{\widetilde \tau }} + {{\xi }_{k}}\frac{{\partial f_{{OM}}^{j}}}{{\partial {{x}_{k}}}} = \frac{\partial }{{\partial {{x}_{k}}}}\left( {\frac{{\tilde {\tau }}}{2}{{\xi }_{k}}{{\xi }_{p}}} \right)\frac{{\partial f_{{OM}}^{j}}}{{\partial {{x}_{p}}}};$$
(5.19)

here, \(\tilde {\tau }\) is set equal to the time \(\tau \) between molecular collisions when (5.19) is multiplied by the summation invariants \(\varphi (\xi ) = (1,\xi ,{{\xi }^{2}}{\text{/}}2)\) and is set equal to \({{\tau }_{m}}\) (to be specified later) when (5.19) is multiplied by \(\xi {\text{*}}\). Applying integration and discarding terms of higher order of smallness, we obtain the following MGD analogue of CQGS system (3.12)–(3.16) [22, 31]:

$${{W}_{k}} = \frac{1}{\rho }\frac{\partial }{{\partial {{x}_{p}}}}\left[ {\left( {P + \frac{{{{B}^{2}}}}{{8\pi }}} \right){{\delta }_{{kp}}} + \rho {{u}_{k}}{{u}_{p}} - {{B}_{k}}{{B}_{p}}} \right],$$
(5.20)
$$\frac{\partial \rho }{\partial t} + \frac{\tau }{2}\frac{\partial ^{2}\rho }{\partial t^{2}} + \operatorname{div} \left[ \rho \left( \bar{u} - \bar{W} \right) \right] = 0,$$
(5.21)
$$\frac{\partial \rho \bar{u}}{\partial t} + \frac{\tau }{2}\frac{\partial ^{2}\rho \bar{u}}{\partial t^{2}} + \operatorname{div} \left[ \rho \bar{u} \times \left( \bar{u} - \bar{W} \right) + B_{k}B_{p} \right] + \nabla \left( P + \frac{B^{2}}{8\pi } \right) = \operatorname{div} {{P}_{{\text{NS}}}},$$
(5.22)
$$\frac{\partial E}{\partial t} + \frac{\tau }{2}\frac{\partial ^{2}E}{\partial t^{2}} + \operatorname{div} \left[ \left( E + P + \frac{B^{2}}{8\pi } \right)\left( \bar{u} - \bar{W} \right) \right] = \operatorname{div} q + \operatorname{div} {{P}_{{\text{NS}}}}\bar{u},$$
(5.23)
$$\frac{\partial \bar{B}}{\partial t} + \frac{\tau _{m}}{2}\frac{\partial ^{2}\bar{B}}{\partial t^{2}} = \operatorname{curl} \left( \left( \bar{u} - \bar{W} \right) \times \bar{B} + \nu _{m}\operatorname{curl} \bar{B} \right),$$
(5.24)
$$\operatorname{div} \bar {B} = 0,$$
(5.25)
$${{\nu }_{m}} = \frac{{{{c}^{2}}}}{{4\pi \sigma }};$$
(5.26)

here, c is the speed of light, \(\sigma \) is the electric conductivity coefficient, and

$${{\tau }_{m}} = \frac{{2{{\nu }_{m}}}}{{P + \frac{{{{B}^{2}}}}{{8\pi }}}},\quad \tau = \frac{{2\mu }}{P}.$$
(5.27)

As in the case of CQGS system (3.12)–(3.16), the spatial derivatives can be approximated by applying algorithms used previously to solve MGD problems. The time approximation can be based on explicit schemes, for example, those considered in Section 4.

An interesting issue is the relation between \({{\tau }_{m}}\) and the magnetic viscosity \({{\nu }_{m}}\). Expression (5.27) implies that \({{\nu }_{m}}\) can be defined not only by (5.26), which follows from electrodynamics, but also by

$${{\nu }_{m}} = \frac{{{{\tau }_{m}}\left( {P + \frac{{{{B}^{2}}}}{{8\pi }}} \right)}}{2}.$$
(5.28)

This expression is identical to the definition of molecular viscosity following from the elementary kinetic theory [17], in which p is replaced by the sum of the gas-kinetic and magnetic pressure. In this sense, \({{\tau }_{m}}\) can be treated by analogy with \(\tau \) as the characteristic time of reaching equilibrium between the magnetic field and the ensemble of charged and neutral particles.

6 NUMERICAL EXAMPLES

Numerous computations comparing numerical results based on the Navier–Stokes and MGD equations with data relying on the QGS system and its CQGS and MGD modifications (i.e., systems (3.12)–(3.16) and (5.20)–(5.27), respectively) were performed in [15, 22, 23]. No noticeable differences between these numerical results were found. At the same time, models based on kinetic representations make it possible to carry out successful simulation on grids consisting of 1010 spatial nodes. For example, the accretion of interstellar matter onto a massive astrophysical object with the formation of a stable collinear jet was simulated on such grids in [25, 32].

In this work, along with Eqs. (5.20)–(5.27), we solved the equation for the gravitational potential \(\Phi \), namely,

$$\Delta \Phi = 4\pi \rho .$$
(6.1)

Equation (6.1), like the other equations of the system, was solved using the three-level explicit scheme

$$\frac{{{{\Phi }^{{j + 1}}} - {{\Phi }^{{j - 1}}}}}{{\Delta t}} + \tau {\text{*}}\frac{{{{\Phi }^{{j + 1}}} - 2{{\Phi }^{j}} + {{\Phi }^{{j - 1}}}}}{{\Delta {{t}^{2}}}} = \Delta {{\Phi }^{j}} - 4\pi {{\rho }^{j}}.$$
(6.2)
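A 1D sketch of iteration (6.2): the three-level scheme is run in pseudo-time until the time terms vanish, at which point the discrete Poisson equation is satisfied. The right-hand side is the Poisson residual, as in (6.2); the grid, \(\tau {\text{*}}\), and pseudo-time step follow the Section 4 recipe and are illustrative assumptions.

```python
import numpy as np

# Pseudo-time relaxation of the three-level scheme (6.2), written for a
# 1D analogue of the Poisson equation Phi'' = 4*pi*rho with Phi = 0 at
# the ends.  rho is chosen so that the exact solution is known.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
rho = np.sin(np.pi * x) / (4.0 * np.pi)   # exact Phi = -sin(pi x) / pi**2
tau_star = h                              # tau* as in (4.4), c = 1
dt = 0.25 * h**1.5                        # pseudo-time step within (4.5)
a, b = 1.0 / dt, tau_star / dt**2         # coefficients of the time terms

Phi_prev = np.zeros(n)
Phi = np.zeros(n)
for _ in range(6000):
    res = np.zeros(n)                     # residual: Laplace(Phi) - 4*pi*rho
    res[1:-1] = (Phi[2:] - 2.0 * Phi[1:-1] + Phi[:-2]) / h**2 \
        - 4.0 * np.pi * rho[1:-1]
    # Solve (6.2) for Phi^{j+1}; at steady state res = 0, i.e., Poisson holds.
    Phi_next = (res + a * Phi_prev + b * (2.0 * Phi - Phi_prev)) / (a + b)
    Phi_next[0] = Phi_next[-1] = 0.0
    Phi_prev, Phi = Phi, Phi_next
err = np.max(np.abs(Phi - (-np.sin(np.pi * x) / np.pi**2)))
```

Each pseudo-time step is as explicit and local as the rest of the scheme, so the elliptic subproblem inherits the same parallel structure as the evolution equations.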

An example of computing incompressible flow in a magnetic field on the basis of system (5.20)–(5.27) is given below. A brief description of this problem can be found in [33].

Figure 2 shows the two-dimensional computational domain used in the numerical experiment.

Fig. 2. Computational domain of the numerical experiment: 2.5 × 2.5 cm².

A plane rectangular channel 30 cm long with a 2.5 × 2.5 cm² cross section is connected to a rectangular cavity of size 2.5 × 30 × 30 cm³, which ends with an identical outlet rectangular channel.

The 10-cm-long initial segment of the inlet channel is exposed to an electromagnetic field, which sets an electrically conducting fluid—melted sodium (Na)—in motion.

System (5.20)–(5.27) is closed by adding the relation between the pressure and density given by

$$p = {{p}_{0}} + \beta (\rho - {{\rho }_{0}}).$$
(6.3)

The value of the parameter \(\beta ~\) is large, which reflects the fact that the medium is nearly incompressible. Previously conducted research [15] shows that the numerical results are hardly affected by \(\beta ~\) if its value is sufficiently large. The three-level explicit scheme (see Section 4) was used in the computations.

The characteristics of the melt were taken from [34]. The temperature of the melt was 600 K, the density was 874 kg/m³, and the ratio of specific heats was \(\delta = 1.2\). The viscosity was defined by the empirical formula

$$\ln \mu = - 6.44 - \ln T + 536{\text{/}}T.$$
(6.4)

The thermal and electric conductivity coefficients were also specified by the empirical formulas

$$\chi = 124 - 0.011T + 5.52 \times {{10}^{{ - 3}}}{{T}^{2}} - 1.18 \times {{10}^{{ - 8}}}{{T}^{3}},$$
(6.5)
$$\sigma = - 9.91 + 8.2 \times {{10}^{{ - 2}}}T - 1.32 \times {{10}^{{ - 4}}}{{T}^{2}} - 1.78 \times {{10}^{{ - 7}}}{{T}^{3}}.$$
(6.6)

The numerical rectangular grid in 3D simulation consisted of 200 × 2000 × 2000 nodes. The initial velocity of Na in the electromagnetic field region was 3.60 m/s.

Figure 3 shows the magnetic field in relative units.

Fig. 3. Initial relative distribution of the magnetic field.

The variations in the density, which is governed by Eq. (5.21), were within 0.18%, which confirms the applicability of this approach for the simulation of an incompressible fluid. The maximum velocity in the channel expansion was found to be 2.70 m/s.

Figures 4a and 4b show sodium streamlines at various times. Note the steady vortex flow typical of incompressible flow in a cavity [35]. It can be seen that the initial vortices later merge into a single one, which agrees with the flow pattern observed in experiments.

Fig. 4. Streamlines in the cavity (a) at the beginning of the process at t = 0.02 s and (b) in the steady state at t = 0.3 s.

With sufficient computational resources, model (5.20)–(5.27) and the algorithm can be run on finer grids to obtain a more detailed flow field.

CONCLUSIONS

Serious difficulties arising in simulation on ultrahigh-performance computer systems can be overcome algorithmically if numerical methods are designed using additional information from related fields of knowledge. The present approach relies heavily on the relationship between the kinetic and gasdynamic descriptions of continuous media, which is well known in theoretical mechanics.

With the kinetic model used as an underlying tool, hydro- and gasdynamic processes can be described by a hyperbolic system of equations. Its application, in turn, makes it possible to develop algorithms adaptable to the architecture of computer systems with extramassive parallelism.

To conclude, we note that this paper generally follows the plenary talk given at the international conference (June 2019, Moscow) dedicated to the 100th birthday of Alexander Andreevich Samarskii. For many years, Chetverushkin worked on a team headed by Samarskii, and, in many respects, the influence of his scientific school determined the subject matter of this work.