1 INTRODUCTION

The State Standards (GOSTs) currently introduced by the Federal Technical Regulation and Metrology Agency (e.g., [1]) prescribe the verification and validation of computational fluid dynamics codes by comparison with a set of test problems that have either exact (analytical) solutions or reference numerical solutions. A detailed review of the work in this direction can be found in [2]. These GOSTs can potentially determine the immediate prospects of computational fluid dynamics in Russia. However, the approach recommended in the standards has two obvious drawbacks: it does not provide a method for quantitatively comparing the verified calculation with the test problem, and it does not indicate what to do if there is no reference solution. Here we discuss possible ways to eliminate these disadvantages using the example of calculating an inviscid supersonic flow.

Let us start by analyzing the possibilities for the quantitative comparison of solutions. The absence of a specific quantitative method for comparing the verified calculation with the test problem significantly complicates actions [1] such as checking the ability of a numerical scheme and a code to maintain the state of rest: it is not clear in which norm and to what accuracy this state of rest should be maintained. Strictly speaking, one can demand the conservation of rest with machine precision; however, this requirement is clearly excessive from a practical point of view.

In practice, solutions are often correlated by a visual comparison of two flow fields, which leaves plenty of room for subjectivity. For example, numerical solutions are quite often compared in terms of the presence of oscillations near the shock and the degree of its smearing. The solutions produced by TVD schemes look best from this perspective. However, in some cases oscillating solutions converge in the \({{L}_{1}}\) and \({{L}_{2}}\) norms better than monotonic (TVD) solutions [3].

Thus, the question arises of defining a quantitative measure of proximity between the numerical and reference solutions. If the reference solution is exact (analytical), the traditional measure of proximity is a norm of the computational error (the deviation of the numerical solution from the exact one), for example, in \({{L}_{1}}\), \({{L}_{2}}\), \({{H}^{{ - 1}}}\), etc. These norms define the corresponding distances (metrics) in the solution space, which can also be used to compare numerical solutions with each other. However, distances between solutions can also be defined without reference to a norm, using, for example, a Mahalanobis-type metric or a Riemannian metric, which expands the capabilities of the analysis. Thus, there are many options for choosing a quantitative measure of proximity between solutions. At the moment, there is no generally accepted standard (or even an established terminology) for proximity measures used to compare numerical solutions (flow fields) of the aerogasdynamics equations. Below we discuss the advantages and disadvantages of some measures in order to find the optimal variant.

The use of the norm for a quantitative comparison of flows is complicated by the fact that, in the case of aerogasdynamics, the norm of the deviation of one solution from another does not have a clear physical meaning (it contains, for example, the sum of the density and temperature errors) and is not directly related to the valuable flow functionals used in practice, such as the lift or drag. Accordingly, there is no intuitive idea of what value of the error norm is acceptable and what is not.

Of course, quantitative verification criteria can be obtained using valuable functionals for which the tolerated error is known and whose physical meaning is clear (drag coefficient, lift, etc.). However, the proximity of solutions in terms of one functional does not imply proximity in terms of another, which significantly limits the generality of the verification results. Therefore, the use of norms or distances (metrics) as a measure of the proximity of solutions is more expedient. It should be noted, however, that the number of possible metrics and norms potentially suitable for a comparison of solutions is very large and their properties differ significantly. For example, the convergence of numerical solutions in the norms \({{L}_{1}}\), \({{L}_{2}}\), and \({{H}^{{ - 1}}}\) depends on the regularity of the considered solution. On the other hand, the variation of a valuable functional \(\varepsilon (u)\) can be related to the error norm of the solution via the Cauchy–Bunyakovsky inequality \(\left| {\Delta \varepsilon (u)} \right| \leqslant \left\| {\nabla \varepsilon } \right\| \cdot \left\| {\Delta u} \right\|\), which allows us to combine the approaches to the comparison of solutions based on the norm and on the error of the valuable functional, respectively. Unfortunately, the Cauchy–Bunyakovsky inequality can be applied only to norms generated by an inner product. Therefore, the \({{L}_{1}}\) norm does not allow the use of the Cauchy–Bunyakovsky inequality, despite the fact that it seems the most natural choice, since it is in this norm that the main results on the convergence of schemes for the compressible Euler equations of gas dynamics were obtained. From this point of view, the \({{L}_{2}}\) norm is better; however, for solutions containing discontinuities, there may be no convergence in \({{L}_{2}}\) as the grid spacing is refined. Accordingly, the analysis should be performed without relying on the asymptotics that arise when the mesh is refined.

Another natural (and strong) limitation of the approach to the verification of a solution and of software based on comparing the numerical solution with a reference solution (analytical or an accurate numerical solution [1]) is the very availability of the reference solution. For software verification [1], the use of a reference solution makes sense only if the solutions in the domain of the software's applicability are close, in a certain sense, to the reference ones, at least in structure. Taking into account that the class of aerogasdynamics problems having analytical solutions is extremely narrow and that an accurate numerical solution itself needs verification (this applies both to solutions obtained using highly accurate algorithms and to solutions obtained on a fine grid), it seems unrealistic to rely on such closeness for practical problems. In this regard, methods of software and solution verification that work without reference solutions are of interest.

The American National Standard [4] and the AIAA Guide [5] propose the use of Richardson’s extrapolation for verification without reference solutions. However, for aerogasdynamics problems, it is extremely expensive from the computational point of view [6].

Verification on an ensemble of solutions obtained by independent algorithms on the same grid [7, 8] is one possible alternative.

2 QUANTITATIVE COMPARISON OF SOLUTIONS

Let us consider the possibilities of a quantitative comparison of the numerical flow fields using different measures of proximity.

The numerical solution is denoted by the vector (grid function) \({{u}^{{(i)}}} \in {{R}^{N}}\), where \(i\) is the scheme index and \(N = {{N}_{S}}{{N}_{g}}\) is the number of discrete variables (the product of the number of grid nodes \({{N}_{S}}\) and the number of gas-dynamic variables \({{N}_{g}}\)); the values of the unknown true solution at the grid nodes are denoted by the vector \(\tilde {u} \in {{R}^{N}}\), so that the computational error is \(\Delta {{u}^{{(i)}}} = {{u}^{{(i)}}} - \tilde {u} \in {{R}^{N}}.\) As proximity measures, we use the distances between the solutions generated by the \({{L}_{1}}\) and \({{L}_{2}}\) norms and by some metrics \(d(u,{v})\).

The optimal proximity measure must make it possible to reliably distinguish the flow structure and provide a comparison of the qualitatively close flow fields obtained on different grids or by different methods. From this point of view, we compare the distances generated by the \({{L}_{1}}\) and \({{L}_{2}}\) norms, for example,

$${{\left\| {{{u}^{{(i)}}} - \tilde {u}} \right\|}_{{{{L}_{2}}}}} = {{\left\| {{{\rho }^{{(i)}}} - \tilde {\rho },{{U}^{{(i)}}} - \tilde {U},{{V}^{{(i)}}} - \tilde {V},{{e}^{{(i)}}} - \tilde {e}} \right\|}_{{{{L}_{2}}}}} = {{\left( {\frac{1}{N}\left( {\sum\limits_{k = 1}^{{{N}_{S}}} {(\Delta \rho _{k}^{2} + \Delta U_{k}^{2} + \Delta V_{k}^{2} + \Delta e_{k}^{2})} } \right)} \right)}^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0em} 2}}}},$$
(1)

and the Mahalanobis-type metric [9]

$${{d}_{M}}({{u}^{{(i)}}},\tilde {u}) = {{(\Delta u,M\Delta u)}^{{1/2}}} = {{({{M}_{{j,k}}}\Delta {{u}_{j}}\Delta {{u}_{k}})}^{{1/2}}}$$
(2)

corresponding to a certain smoothing of the flow field. In this paper, the image Euclidean distance (IMED) metric [10] is used, which is described by the metric tensor

$${{M}_{{ij}}} = \frac{1}{{2\pi {{\sigma }^{2}}}}\exp \{ {{ - {{{\left| {{{P}_{i}} - {{P}_{j}}} \right|}}^{2}}} \mathord{\left/ {\vphantom {{ - {{{\left| {{{P}_{i}} - {{P}_{j}}} \right|}}^{2}}} {(2{{\sigma }^{2}})}}} \right. \kern-0em} {(2{{\sigma }^{2}})}}\} ,$$
(3)

where \(\left| {{{P}_{i}} - {{P}_{j}}} \right|\) is the distance between the grid nodes \({{P}_{i}}\) and \({{P}_{j}}\) and σ is a smoothing parameter (σ ≥ 0.5 in the computations) that introduces some smearing and thereby makes the distance estimate stable with respect to small variations of the variables in the flow field. This distance corresponds to a spatially averaged error. Asymptotically, IMED tends to \({{L}_{2}}\) as σ → 0.
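To make definitions (2) and (3) concrete, the following sketch evaluates the IMED distance between two scalar grid fields by direct summation. It is a minimal illustration under our own conventions (the function name and the 1/N normalization, chosen to mirror (1), are not part of [10]); since the metric tensor is dense, the routine is practical only for coarse grids.

```python
import numpy as np

def imed_distance(u1, u2, dx=1.0, sigma=0.5):
    """IMED distance (2)-(3) between two scalar fields on a uniform 2D grid.

    A minimal sketch: the dense N x N metric tensor is built explicitly,
    so the cost is O(N^2) and only coarse grids are practical. The 1/N
    normalization mirrors the discrete L2 norm (1) and is our convention.
    """
    nx, ny = u1.shape
    xs, ys = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    P = np.column_stack([xs.ravel(), ys.ravel()])     # node coordinates P_i
    du = (u1 - u2).ravel()                            # Delta u
    # metric tensor (3): M_ij = exp(-|P_i - P_j|^2 / (2 sigma^2)) / (2 pi sigma^2)
    r2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    M = np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return float(np.sqrt(du @ M @ du / du.size))      # (Delta u, M Delta u)^(1/2)
```

Applying the routine to each dimensionless gas-dynamic variable and summing the squared results reproduces the multivariable form used in (1).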

Metrics of this type correspond to the Euclidean metric in some transformed space. They are generated by the scalar product and admit the generalized form of the Cauchy–Bunyakovsky inequality

$${{\left| {\Delta \varepsilon (u)} \right|}^{2}} \leqslant (\nabla \varepsilon ,{{M}^{{ - 1}}}\nabla \varepsilon )(\Delta u,M\Delta u).$$
(4)

Therefore, estimates of the computational error in the Mahalanobis-type metric allow us to estimate the error of valuable functionals.
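For completeness we note how (4) follows: since the kernel (3) is symmetric and positive definite, \({{M}^{{ \pm 1/2}}}\) exist, and the ordinary Cauchy–Bunyakovsky inequality in the M-weighted inner product, combined with the linearization \(\Delta \varepsilon \approx (\nabla \varepsilon ,\Delta u)\), gives

$${{\left| {\Delta \varepsilon } \right|}^{2}} \approx {{\left| {({{M}^{{ - 1/2}}}\nabla \varepsilon ,{{M}^{{1/2}}}\Delta u)} \right|}^{2}} \leqslant (\nabla \varepsilon ,{{M}^{{ - 1}}}\nabla \varepsilon )\,(\Delta u,M\Delta u).$$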

The Riemannian metric based on the relative error (REM) is also considered:

$${{\left\| {{{u}^{{(i)}}} - \tilde {u}} \right\|}_{{REM - {{L}_{2}}}}} = {{\left\| {\frac{{{{\rho }^{{(i)}}} - \tilde {\rho }}}{{\left\| {{{\rho }^{{(i)}}}} \right\|}},...,\frac{{{{e}^{{(i)}}} - \tilde {e}}}{{\left\| {{{e}^{{(i)}}}} \right\|}}} \right\|}_{{{{L}_{2}}}}}.$$
(5)

Strictly speaking, this metric is local; however, for small deviations of the compared solutions, it can be considered a Mahalanobis-type metric.
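A sketch of the REM–\({{L}_{2}}\) distance (5) in the same spirit (the function name is hypothetical; each variable is scaled by its own discrete norm before the \({{L}_{2}}\) distance is taken):

```python
import numpy as np

def rem_l2_distance(fields_num, fields_ref):
    """REM-L2 distance (5) over tuples of variables (e.g. rho, U, V, e),
    each given as an array on the same grid; a minimal sketch."""
    total, count = 0.0, 0
    for f_num, f_ref in zip(fields_num, fields_ref):
        f_num, f_ref = np.asarray(f_num), np.asarray(f_ref)
        scale = np.sqrt(np.mean(f_num ** 2))     # ||f_num|| in the discrete L2 sense
        total += np.sum(((f_num - f_ref) / scale) ** 2)
        count += f_num.size
    return float(np.sqrt(total / count))
```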

3 TEST PROBLEMS

The test problems deal with flows governed by the two-dimensional Euler equations in the stationary limit

$$\frac{{\partial \rho }}{{\partial t}} + \frac{{\partial (\rho {{U}^{k}})}}{{\partial {{x}^{k}}}} = 0,$$
(6)
$$\frac{{\partial (\rho {{U}^{i}})}}{{\partial t}} + \frac{{\partial (\rho {{U}^{k}}{{U}^{i}} + P{{\delta }_{{ik}}})}}{{\partial {{x}^{k}}}} = 0,$$
(7)
$$\frac{{\partial (\rho E)}}{{\partial t}} + \frac{{\partial (\rho {{U}^{k}}{{h}_{0}})}}{{\partial {{x}^{k}}}} = 0.$$
(8)

Here, \(i,k = 1,2\); \({{U}^{1}} = U\) and \({{U}^{2}} = V\) are the velocity components; \({{h}_{0}} = ({{U}^{2}} + {{V}^{2}})/2 + h\) is the total enthalpy, \(h = \frac{\gamma }{{\gamma - 1}}\frac{P}{\rho } = \gamma e\) is the static enthalpy, \(e = \frac{{RT}}{{\gamma - 1}}\) is the specific internal energy, and \(E = e + \frac{1}{2}({{U}^{2}} + {{V}^{2}})\) is the total specific energy; \(P = \rho RT\) is the equation of state; and \(\gamma = {{C}_{p}}/{{C}_{{v}}}\).
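For reference, the conserved variables and flux vectors of system (6)–(8) in this notation can be assembled as follows (a sketch; the value γ = 1.4 and the function name are our assumptions):

```python
import numpy as np

GAMMA = 1.4  # Cp/Cv; an assumed value, not specified in the text

def conserved_and_fluxes(rho, U, V, P):
    """Conserved vector and x-, y-flux vectors of the 2D Euler system (6)-(8).

    Uses the notation of the text: e is the specific internal energy,
    E = e + (U^2 + V^2)/2 the total energy, h0 = gamma*e + (U^2 + V^2)/2
    the total enthalpy.
    """
    e = P / (rho * (GAMMA - 1.0))                 # from P = rho R T and e = R T / (gamma - 1)
    E = e + 0.5 * (U ** 2 + V ** 2)
    h0 = GAMMA * e + 0.5 * (U ** 2 + V ** 2)
    q  = np.array([rho, rho * U, rho * V, rho * E])
    Fx = np.array([rho * U, rho * U ** 2 + P, rho * U * V, rho * U * h0])
    Fy = np.array([rho * V, rho * U * V, rho * V ** 2 + P, rho * V * h0])
    return q, Fx, Fy
```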

As test examples, we considered the interaction of shock waves of type I according to the Edney classification [11] and the supersonic flow around a thin plate.

It is of particular interest to estimate the error of valuable functionals used in practice, such as the lift and drag. As a rule, the range of reasonable and unreasonable error values is clear for these quantities. In several cases, this makes it possible to judge whether the value of the error norm is reasonable, which we use below to calibrate the error estimates obtained in terms of norms. A comparison of the errors of the lift coefficient \(\varepsilon = {{C}_{L}} = {{F}_{y}}/(Sq)\) and the drag coefficient \(\varepsilon = {{C}_{D}} = {{F}_{x}}/(Sq)\) (where \(q = {{\rho }_{\infty }}(U_{\infty }^{2} + V_{\infty }^{2})/2\)) is performed in a series of computations of the flow around a thin plate at an angle of attack (M = 3 and \(\alpha = 5^\circ \)). The following values from the linear theory [12] are taken as the exact values of the functionals:

$${{\tilde {C}}_{L}} = {{4\alpha } \mathord{\left/ {\vphantom {{4\alpha } {\sqrt {M_{\infty }^{2} - 1} }}} \right. \kern-0em} {\sqrt {M_{\infty }^{2} - 1} }},\,\,\,\,{{\tilde {C}}_{D}} = {{4{{\alpha }^{2}}} \mathord{\left/ {\vphantom {{4{{\alpha }^{2}}} {\sqrt {M_{\infty }^{2} - 1} }}} \right. \kern-0em} {\sqrt {M_{\infty }^{2} - 1} }}.$$
(9)
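For the regime considered below (\({{M}_{\infty }} = 3\), \(\alpha = 5^\circ \approx 0.0873\) rad), formulas (9) give approximately

$${{\tilde {C}}_{L}} = \frac{{4 \times 0.0873}}{{\sqrt {{{3}^{2}} - 1} }} \approx 0.123,\,\,\,\,{{\tilde {C}}_{D}} = \frac{{4 \times {{{(0.0873)}}^{2}}}}{{\sqrt {{{3}^{2}} - 1} }} \approx 0.011.$$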

We compare the error of the valuable functionals and the error norm using the Cauchy–Bunyakovsky inequality. For this, we need estimates for the error norm and for the norm of the valuable functional gradient. The estimation of the norm of the solution error is of special interest here, since we do not use the reference solution.

4 ERROR ESTIMATION ON AN ENSEMBLE OF SOLUTIONS IN THE ABSENCE OF THE REFERENCE SOLUTION

If no reference solution is available, we use an ensemble of numerical solutions obtained by schemes of different approximation orders on the same grid.

As a measure for the proximity of solutions, we use both the norm of the error \({{\left\| {\Delta {{u}^{{(k)}}}} \right\|}_{{{{L}_{2}}}}} = {{\left\| {{{u}^{{(k)}}} - \tilde {u}} \right\|}_{{{{L}_{2}}}}}\) and the distance between the solutions \(d({{u}^{{(k)}}},\tilde {u})\). To unify the notation, we assume that the norm of the error defines the distance \(d({{u}^{{(k)}}},\tilde {u}) = {{\left\| {{{u}^{{(k)}}} - \tilde {u}} \right\|}_{{{{L}_{2}}}}}\) from the exact solution to the numerical solution and denote this distance as \(d({{u}^{{(k)}}},\tilde {u}) = {{\delta }_{{0,k}}}\).

4.1 Triangle Inequality

The triangle inequality makes it easy to obtain the following relation [7, 8]: for two numerical solutions \({{u}^{{(1)}}}\) and \({{u}^{{(2)}}}\) for which the relation \({{\delta }_{{0,1}}} \geqslant 2{{\delta }_{{0,2}}}\) between the error magnitudes (the distances from the exact solution to the numerical solutions) is a priori known, the exact solution is located inside the hypersphere of radius \({{\delta }_{{1,2}}}\) centered at the more accurate solution \({{u}^{{(2)}}}\):

$${{\delta }_{{0,2}}} = d({{u}^{{(2)}}},\tilde {u}) \leqslant {{\delta }_{{1,2}}}.$$
(10)

An obvious weakness of (10) is the assumption that we know how the solutions are ranked according to their error magnitudes; however, an analysis of the distances between the numerical solutions in certain cases allows us to reveal this ordering. Consider the case when the solution \({{u}^{{(1)}}}\) is significantly less accurate than any of the others (\({{\delta }_{{0,1}}} \gg {{\delta }_{{0,i}}}\), \(i \ne 1\)). It is easy to see that the set of distances \({{\delta }_{{i,j}}}\) then splits into

— a cluster of large distances \({{\delta }_{{1,j}}}\) (from accurate solutions to the rough one) and

— a cluster of small distances \({{\delta }_{{i,j}}}(i \ne 1)\) (between more accurate solutions).

Let \(\delta _{{0,i}}^{{\max }}\) denote the maximum error in the subset of more accurate solutions. We take the maximum of \({{\delta }_{{i,j}}}\;(i \ne 1)\) as the upper bound \({{d}_{1}}\) of the cluster of small distances (the distances between the more accurate solutions). The minimum of \({{\delta }_{{1,j}}}\) is taken as the lower bound \({{d}_{2}}\) of the cluster of large distances (the distances between the accurate solutions and the most inaccurate one).

In this case, instead of \({{\delta }_{{0,1}}} \geqslant 2{{\delta }_{{0,2}}}\), the following heuristic criterion stated in [7, 8] can be used.

If \({{d}_{2}} - {{d}_{1}} > {{d}_{1}}\), then the true solution lies in the hypersphere of radius \({{\delta }_{{1,i}}}\) centered at \({{u}^{{(i)}}}\): \({{\delta }_{{0,i}}} \leqslant {{\delta }_{{1,i}}}\), where \({{u}^{{(i)}}}\) belongs to the cluster of more accurate solutions and \({{u}^{{(1)}}}\) is the most inaccurate solution.
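The following sketch shows how this criterion might be applied to an ensemble of solutions. The function name and the rule for identifying the presumed least accurate solution (the one with the largest total distance to the others) are our assumptions and are not prescribed by [7, 8]; at least three solutions are required.

```python
import numpy as np
from itertools import combinations

def ensemble_error_bounds(solutions):
    """Heuristic error bounds on an ensemble of numerical solutions (Section 4.1).

    solutions: list of flattened grid fields u^(k) (at least three of them).
    Returns the index of the presumed least accurate solution and, if the
    criterion d2 - d1 > d1 holds, a dict of radii delta_{rough,i} bounding
    the errors of the remaining solutions; otherwise None.
    """
    norm = lambda v: float(np.sqrt(np.mean(v ** 2)))          # discrete L2 as in (1)
    m = len(solutions)
    d = np.zeros((m, m))
    for i, j in combinations(range(m), 2):
        d[i, j] = d[j, i] = norm(solutions[i] - solutions[j])
    rough = int(np.argmax(d.sum(axis=1)))                     # presumed least accurate
    others = [k for k in range(m) if k != rough]
    d1 = max(d[i, j] for i, j in combinations(others, 2))     # cluster of small distances
    d2 = min(d[rough, k] for k in others)                     # cluster of large distances
    if d2 - d1 > d1:
        return rough, {k: d[rough, k] for k in others}        # delta_{0,k} <= delta_{rough,k}
    return rough, None
```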

4.2 Measure’s Concentration

As another option for error estimation, the maximum distance between solutions obtained by different computational algorithms can be used. The grid functions used in discretizing multidimensional partial differential equations belong to high-dimensional spaces (N ~ 106 and higher). Whether the so-called concentration of measure effect [13, 14] manifests itself on the set of solutions depends on the choice of the compared algorithms. The numerical solutions of the aerogasdynamic equations obtained by different methods approximate the same solution and are therefore highly correlated. Denote the system of equations by \(A\tilde {u} = f\); the numerical solution obtained by the kth algorithm is determined by the discrete operator \({{A}_{h}}{{u}^{{(k)}}} = {{f}_{h}}\). The truncation error \(\delta {{u}^{{(k)}}} = \sum\nolimits_{i \geqslant k}^\infty {{{C}_{i}}{{h}^{i}}{{\partial }^{{i + 1}}}u/\partial {{x}^{{i + 1}}}} \) is obtained by expanding the numerical solution u(k) in a Taylor series and substituting the result into the original system of equations. The transformation \({{u}^{{(k)}}} \to \delta {{u}^{{(k)}}}\) requires an algorithm of infinite length, which can be interpreted as the algorithmic independence [15] of δu(k). At first sight, this also pertains to the discretization error \(\Delta {{u}^{{(k)}}} = ({{u}^{{(k)}}} - \tilde {u}) = A_h^{{ - 1}}\delta {{u}^{{(k)}}}\).

Consider an ensemble of M numerical solutions produced on the same grid by independent difference schemes (in the simplest case, schemes of different orders). If we assume that the algorithmic independence of the discretization errors \(\Delta {{u}^{{(k)}}}\) allows them to be treated as arbitrarily chosen vectors, then the fact that these errors belong to spaces of very high dimension provides nonstandard possibilities for determining the error norm and the position of the true solution. It is known that, in spaces of sufficiently large dimension N, the distance \({{d}_{{1,2}}}\) between two arbitrarily chosen vectors \({{{v}}^{{(1)}}} \in {{R}^{N}}\) and \({{{v}}^{{(2)}}} \in {{R}^{N}}\) is, with a probability close to one, greater than the length of these vectors, \(\left\| {{{{v}}^{{(i)}}}} \right\| \leqslant {{d}_{{1,2}}}\). This is because in such spaces the chord is almost always longer than the radius [14] and two arbitrarily chosen unit vectors are nearly orthogonal with high probability [13], namely,

$$P\{ ({{{v}}^{{(1)}}},{{{v}}^{{(2)}}}) > \delta \} < \sqrt {{\pi \mathord{\left/ {\vphantom {\pi 2}} \right. \kern-0em} 2}} \,{{e}^{{{{ - {{\delta }^{2}}N} \mathord{\left/ {\vphantom {{ - {{\delta }^{2}}N} 2}} \right. \kern-0em} 2}}}}.$$
(11)

In our case, we take into account that the difference between two numerical solutions is equal to the difference between their discretization errors: \({{u}^{{(1)}}} - {{u}^{{(2)}}} = {{u}^{{(1)}}} - \tilde {u} - {{u}^{{(2)}}} + \tilde {u} = \) \(\Delta {{u}^{{(1)}}} - \Delta {{u}^{{(2)}}}\). We also assume that the error norm is bounded, \(\left\| {\Delta {{u}^{{(i)}}}} \right\| \leqslant R\) (the errors belong to a hypersphere of radius \(R\) centered at zero). In this case, if the discretization errors behave as arbitrarily chosen vectors, the distance \({{d}_{{1,2}}} = \left\| {{{u}^{{(1)}}} - {{u}^{{(2)}}}} \right\| = \left\| {\Delta {{u}^{{(1)}}} - \Delta {{u}^{{(2)}}}} \right\|\) between the two numerical solutions \({{u}^{{(1)}}} \in {{R}^{N}}\) and \({{u}^{{(2)}}} \in {{R}^{N}}\) is, with a probability close to one, greater than the distance between the exact and numerical solutions

$$d({{u}^{{(i)}}},\tilde {u}) = \left\| {\tilde {u} - {{u}^{{(i)}}}} \right\| \leqslant {{d}_{{1,2}}} = \left\| {\Delta {{u}^{{(1)}}} - \Delta {{u}^{{(2)}}}} \right\| = \left\| {{{u}^{{(1)}}} - {{u}^{{(2)}}}} \right\|.$$
(12)

Despite the different initial premises, expression (12) is quite close to expression (10), while being somewhat stronger. Note, however, that expression (10) is deterministic, whereas expression (12) holds only with a probability close to one.

The calculation data show that, for an ensemble of numerical solutions obtained by independent methods, the maximum distance between the solutions \(\left\| {d{{u}_{{\max }}}} \right\|\) can serve as an upper estimate of the discretization error, with the efficiency index [16] \({{I}_{{{\text{eff}}}}} = {{\left\| {d{{u}_{{\max }}}} \right\|} \mathord{\left/ {\vphantom {{\left\| {d{{u}_{{\max }}}} \right\|} {\left\| {\Delta u} \right\|}}} \right. \kern-0em} {\left\| {\Delta u} \right\|}}\sim 0.6{\kern 1pt} - {\kern 1pt} 3\) (its acceptable range according to [16] is \({{I}_{{{\text{eff}}}}}\sim 1{\kern 1pt} - {\kern 1pt} 3\)). The lower bound falls below unity because the discretization error of solutions obtained by modern algorithms is determined not only by the truncation error but also by artificial monotonizers, which limits the degree of independence of the errors. As the ensemble expands, the lower bound approaches unity.
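In code, the upper estimate (12) and the efficiency index are straightforward to evaluate (a sketch; the reference solution is needed only when \({{I}_{{{\text{eff}}}}}\) is computed for calibration, as in Section 5):

```python
import numpy as np
from itertools import combinations

def l2_norm(v):
    """Discrete L2 norm with the 1/N normalization of (1)."""
    return float(np.sqrt(np.mean(np.asarray(v) ** 2)))

def max_pairwise_distance(solutions):
    """||du_max||: the largest distance within the ensemble, the upper estimate (12)."""
    return max(l2_norm(u - v) for u, v in combinations(solutions, 2))

def efficiency_index(solutions, u_exact, k=0):
    """I_eff = ||du_max|| / ||Delta u^(k)||, computable when a reference solution exists."""
    return max_pairwise_distance(solutions) / l2_norm(solutions[k] - u_exact)
```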

In addition to the purely technical goal related to the use of the Cauchy–Bunyakovsky inequality for calibrating the errors, expressions (10) and (12) can be used for verification in the absence of the reference solution.

5 NUMERICAL TESTS: ESTIMATES OF THE COMPUTATIONAL ERROR

As the test example, the interaction of shock waves of type I according to the Edney classification [11] (the regular intersection of oblique shock waves) is considered. For this problem, it is quite easy to construct analytical solutions, whose projections onto the computational grid are treated as the true solutions.

The numerical solution (grid function) is regarded as the vector \({{u}^{{(i)}}} \in {{R}^{N}}\) (\(i\) is the ordinal number of the scheme and \(N\) is the number of discrete variables, as in Section 2). The values of the exact solution \(\tilde {u} \in {{R}^{N}}\) at the grid nodes form a vector of the same size. We compare these vectors using the proximity measures generated by the \({{L}_{1}}\) and \({{L}_{2}}\) norms, as well as by the REM metric (5) and the IMED metric (3) from [10].

We considered the deviation of the computed solution from the exact one as the distance \(d({{u}^{{(i)}}},\tilde {u})\) from the true solution to the numerical one for a set of numerical schemes of different orders, grids of different sizes, and different flow patterns.

The computations were performed on uniform meshes containing 100 × 100 and 400 × 400 spatial nodes using schemes of accuracy orders 1–4:

— the first-order accurate scheme of the Courant–Isaacson–Rees (CIR) type from [17] in the variant described in [18], denoted as \(S1\);

— the second-order accurate scheme based on the MUSCL method [19] (using AUFS [20] at the boundaries), denoted as S2;

— the third-order accurate modified Chakravarthy–Osher scheme S3 [21];

— the fourth-order accurate scheme [22] denoted as S4.

Table 1 presents the solution error for the Edney type I flow (M = 3, flow deflection angles \({{\alpha }_{1}} = 20^\circ \) and \({{\alpha }_{2}} = 15^\circ \), \(100 \times 100\) grid).

Table 1

| \(d({{u}^{{(i)}}},\tilde {u})\) | S1 | S2 | S3 | S4 | \({{\alpha }_{2}} + 1^\circ \) | \({{\alpha }_{2}} + 2^\circ \) | \({{\alpha }_{2}} = 0^\circ \) |
|---|---|---|---|---|---|---|---|
| \({{L}_{1}}\) | 0.110 | 0.047 | 0.057 | 0.041 | 0.105 | 0.19 | 0.86 |
| \({{L}_{2}}\) | 0.251 | 0.156 | 0.186 | 0.171 | 0.256 | 0.41 | 1.155 |
| REM–\({{L}_{2}}\) | 0.171 | 0.101 | 0.114 | 0.105 | 0.155 | 0.24 | 0.904 |
| IMED | 0.172 | 0.077 | 0.092 | 0.087 | 0.195 | 0.35 | 1.18 |

Table 1 presents the comparison with the exact solution of the solutions computed by the four schemes, of the flows computed by the second-order scheme with small changes of \({{\alpha }_{2}}\), and of a flow with a different structure (\({{\alpha }_{2}} = 0^\circ \)). Table 2 shows the same data for a \(400 \times 400\) grid.

Table 2

| \(d({{u}^{{(i)}}},\tilde {u})\) | S1 | S2 | S3 | S4 | \({{\alpha }_{2}} + 1^\circ \) | \({{\alpha }_{2}} + 2^\circ \) | \({{\alpha }_{2}} = 0^\circ \) |
|---|---|---|---|---|---|---|---|
| \({{L}_{1}}\) | 0.033 | 0.0139 | 0.016 | 0.0125 | 0.068 | 0.138 | 0.56 |
| \({{L}_{2}}\) | 0.11 | 0.077 | 0.094 | 0.090 | 0.223 | 0.35 | 0.83 |
| REM–\({{L}_{2}}\) | 0.099 | 0.060 | 0.071 | 0.068 | 0.162 | 0.258 | 0.75 |
| IMED | 0.081 | 0.0457 | 0.058 | 0.056 | 0.205 | 0.342 | 0.845 |

It can be seen from the analysis of these data that the \({{L}_{1}}\) norm provides the best estimate of the error. Unfortunately, this norm does not allow the use of the Cauchy–Bunyakovsky inequality; accordingly, its value cannot be connected with the errors of valuable functionals, and the obtained magnitudes cannot be given a physical meaning. Therefore, in what follows we analyze only the proximity measures (norms and metrics) generated by a scalar product.

The data in Tables 1 and 2 show that all the proximity measures used make it possible to distinguish the following changes and errors:

(1) the approximation errors of different schemes with a proximity measure of the order of \(\sim {\kern 1pt} 0.01\) (only a few percent for REM–\({{L}_{2}}\)),

(2) insignificant deviations of the flow parameters \(\sim {\kern 1pt} 0.1\) (tens of percent for REM–\({{L}_{2}}\)), and

(3) strong changes in the flow structure \(\sim {\kern 1pt} 1\) (hundreds of percent).

The percentage error can be interpreted only for REM–\({{L}_{2}}\).

The closeness, surprising at first glance, of the results obtained with REM–\({{L}_{2}}\), \({{L}_{2}}\), and IMED is explained by the fact that \({{L}_{2}}\) and IMED are applied to the dimensionless flow parameters obtained in the calculation.

In general, for small deviations of the parameters (i.e., for estimating the approximation error), the IMED metric performs slightly better than the other tested metrics.

5.1 Numerical Tests. Estimates for the Errors of Valuable Functionals

Let us consider the relationship between the error of valuable functionals and the approximation error in the \({{L}_{2}}\) norm via the Cauchy–Bunyakovsky inequality. For this, we use the drag coefficient \(\varepsilon = {{C}_{D}} = {{F}_{x}}/(Sq)\) and the lift coefficient \(\varepsilon = {{C}_{L}} = {{F}_{y}}/(Sq)\) of a thin plate at an angle of attack as the valuable functionals (\(q = {{\rho }_{\infty }}(U_{\infty }^{2} + V_{\infty }^{2})/2\)). These functionals are determined by integrating the pressure over the plate surface. Since the plate length is bounded, \(\left\| {{{\nabla }_{p}}\varepsilon } \right\| < 1\), and the expression \(\left| {\Delta \varepsilon (p)} \right| \leqslant \left\| {{{\nabla }_{p}}\varepsilon } \right\| \cdot \left\| {\Delta p} \right\|\) implies that the error norm of the solution \(\left\| {\Delta p} \right\|\) (respectively, \(\left\| {\Delta u} \right\|\)) gives an upper estimate for the error of the valuable functionals.
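As an illustration of how such functionals and the gradient \({{\nabla }_{p}}\varepsilon \) can be evaluated from the surface pressure, the following sketch computes \({{C}_{L}}\), \({{C}_{D}}\), and the discrete gradient of \({{C}_{L}}\) for a thin flat plate. The quadrature weights and the normalization are our assumptions and need not reproduce the values in Tables 3 and 4 exactly.

```python
import numpy as np

def plate_coefficients(p_lower, p_upper, x, alpha, q_inf, chord=1.0):
    """Lift/drag coefficients of a thin flat plate and the gradient of C_L
    with respect to the surface pressure values; a minimal sketch.

    Assumes the inviscid thin-plate model of the text: the force comes only
    from the pressure difference across the plate, S = chord * 1, alpha is
    in radians, and the node spacings w_k ~ Delta x_k serve as quadrature weights.
    """
    w = np.gradient(x)                              # quadrature weights ~ Delta x_k
    N = np.sum((p_lower - p_upper) * w)             # normal force per unit span
    C_L = N * np.cos(alpha) / (q_inf * chord)       # normal force projected onto
    C_D = N * np.sin(alpha) / (q_inf * chord)       # the wind axes
    # gradient of C_L with respect to the (p_lower, p_upper) node values
    grad_C_L = np.concatenate([w, -w]) * np.cos(alpha) / (q_inf * chord)
    return C_L, C_D, grad_C_L
```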

The resulting estimates are presented in Tables 3 and 4 for M = 3, \(\alpha = 5^\circ \), and the grid \(100 \times 100\). Here, \(\Delta {{\tilde {C}}_{D}} = \left| {{{C}_{D}} - {{{\tilde {C}}}_{D}}} \right|\) and \(\Delta {{\tilde {C}}_{L}} = \left| {{{C}_{L}} - {{{\tilde {C}}}_{L}}} \right|\).

Table 3

| \(\Delta {{\tilde {C}}_{L}}\) | \(\Delta {{\tilde {C}}_{L}}/{{\tilde {C}}_{L}}\) | \({{\left| {d{{C}_{L}}} \right|}_{{{{L}_{2}}}}}\) | \({{\left\| {\Delta p} \right\|}_{{{{L}_{2}}}}}\) | \({{\left\| {\nabla {{C}_{L}}} \right\|}_{{{{L}_{2}}}}}\) |
|---|---|---|---|---|
| \(6.4 \times {{10}^{{ - 4}}}\) | \(5.2 \times {{10}^{{ - 3}}}\) | \(2.46 \times {{10}^{{ - 2}}}\) | \(2.77 \times {{10}^{{ - 2}}}\) | \(8.91 \times {{10}^{{ - 1}}}\) |

Table 4

| \(\Delta {{\tilde {C}}_{D}}\) | \(\Delta {{\tilde {C}}_{D}}/{{\tilde {C}}_{D}}\) | \({{\left| {d{{C}_{D}}} \right|}_{{{{L}_{2}}}}}\) | \({{\left\| {\Delta p} \right\|}_{{{{L}_{2}}}}}\) | \({{\left\| {\nabla {{C}_{D}}} \right\|}_{{{{L}_{2}}}}}\) |
|---|---|---|---|---|
| \(8.3 \times {{10}^{{ - 5}}}\) | \(7.8 \times {{10}^{{ - 3}}}\) | \(2.15 \times {{10}^{{ - 3}}}\) | \(2.77 \times {{10}^{{ - 2}}}\) | \(7.79 \times {{10}^{{ - 2}}}\) |
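As a consistency check, the right-hand side of the Cauchy–Bunyakovsky bound can be evaluated directly from the entries of Table 3:

$${{\left| {d{{C}_{L}}} \right|}_{{{{L}_{2}}}}} = {{\left\| {\nabla {{C}_{L}}} \right\|}_{{{{L}_{2}}}}} \cdot {{\left\| {\Delta p} \right\|}_{{{{L}_{2}}}}} \approx 8.91 \times {{10}^{{ - 1}}} \times 2.77 \times {{10}^{{ - 2}}} \approx 2.5 \times {{10}^{{ - 2}}},$$

which reproduces the tabulated value and, as expected, dominates the actual error \(\Delta {{\tilde {C}}_{L}} = 6.4 \times {{10}^{{ - 4}}}\) by roughly a factor of forty; the analogous check for Table 4 gives \(7.79 \times {{10}^{{ - 2}}} \times 2.77 \times {{10}^{{ - 2}}} \approx 2.2 \times {{10}^{{ - 3}}}\).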

If there is no analytical solution for the flow field under consideration, the maximum norm of the difference of numerical solutions over the ensemble of computations performed by independent methods [19, 20] and [19, 23] is taken as the error norm. The expressions \(\left| {\Delta {{{\tilde {C}}}_{D}}} \right| \leqslant \left| {d{{C}_{D}}} \right| = {{\left\| {\nabla {{C}_{D}}} \right\|}_{{{{L}_{2}}}}} \cdot {{\left\| {\Delta u} \right\|}_{{{{L}_{2}}}}}\) and \(\left| {\Delta {{{\tilde {C}}}_{L}}} \right| \leqslant \left| {d{{C}_{L}}} \right| = {{\left\| {\nabla {{C}_{L}}} \right\|}_{{{{L}_{2}}}}} \cdot {{\left\| {\Delta u} \right\|}_{{{{L}_{2}}}}}\) are satisfied. This allows us to impart a practical meaning to the estimates of \(\left\| {\Delta u} \right\|\), at least for the simplest functionals of the integral type.

6 DISCUSSION

The error of the lift and drag coefficients is significantly smaller than the estimate obtained using the Cauchy–Bunyakovsky inequality; this is quite natural and is explained by the averaging and mutual compensation of the local errors.

The Cauchy–Bunyakovsky inequality allows us to reliably find an upper bound for the errors of the considered valuable functionals using \({{\left\| {\Delta p} \right\|}_{{{{L}_{2}}}}}\) obtained in the way indicated above. For functionals of the considered form (pressure integrals), \(\left\| {\nabla \varepsilon } \right\|\) is less than unity.

The relative simplicity of calculating the gradient of the valuable functional over the available flow field makes it easy to apply these estimates to any set of functionals.

The verification on an ensemble of solutions obtained by independent algorithms [7, 8] can be implemented if we are a priori sure of the existence of an exact unique solution somewhere in the vicinity of this ensemble.

7 CONCLUSIONS

To compare solutions, it is most promising to use proximity measures (distances between solutions) determined by \({{L}_{2}}\) or IMED [10]. These distances are generated by scalar products and can therefore be related to the errors of the valuable functionals used in practice.

If the deviation from the reference solution in IMED or \({{L}_{2}}\) is of the order of \(\sim {\kern 1pt} {{10}^{{ - 2}}}\) (several percent in REM–\({{L}_{2}}\)), the verification problem can be considered solved.

If no reference solution is available, verification can be performed on a set of solutions obtained by different methods [7, 8]. This concerns both software verification, when the set of test problems is insufficiently dense, and the verification of an individual calculation. The verification can be considered successful if the distances between the solutions within the considered set of variants are of the order of \(\sim {\kern 1pt} {{10}^{{ - 2}}}\).

The Cauchy–Bunyakovsky inequality makes it possible to deduce reliable estimates for the errors of the valuable flow functionals from the known norm of the solution error.