1 Introduction

Over the past two decades, the computational fluid dynamics (CFD) research community has invested significant effort in the development of high-order methods. High-order methods offer greater accuracy and efficiency for scale-resolving simulations, such as large eddy simulation and direct numerical simulation, and can reach engineering error tolerance levels with less computational effort [22]. However, for high-order methods to be effective, they must be paired with high-order meshes that have high-order curved boundaries [1, 16, 17, 25]. The codependency of high-order methods and high-order meshes was recognized by researchers at the 1st International Workshop on High-Order CFD Methods, who prioritized high-order mesh generation as a pacing item for future research [23].

The oldest general-purpose tool capable of producing high-order curved meshes is probably the open-source Gmsh (footnote 1), which began to gain high-order mesh generation capability circa 2008, prior to the 1st International Workshop on High-Order CFD Methods [5]. Although able to produce high-order meshes, its limited ability to process CAD files has been a hindrance to complex geometric modeling [4]. In the commercial market, the Centaur grid generator has been used for high-order meshing [7]. Also, Pointwise and GridPro are expected to gain high-order capability in coming releases [13].

Clearly, the best way to obtain a high-order mesh is to produce it directly from the modeled geometry, but frequently the geometry files are unavailable. Legacy meshes in research labs are often exchanged and stored without the original geometry, precluding the use of new simulation algorithms. Therefore, algorithms and software are needed to convert low-order linear meshes to high-order curved meshes. Our lab set out to address this problem.

In this work, we present a low-order to high-order conversion algorithm that operates without a CAD geometry. Requiring minimal human input, it estimates geometry directly from the mesh itself: identifying feature curves, approximating surface curvature, and interpolating through high-aspect ratio boundary layers. Although this algorithm was developed for the purpose of supporting ongoing work in high-order CFD, it is not CFD specific. When CAD is unavailable, the algorithm may be used to produce a curved high-order mesh from an existing unstructured linear mesh. We have named our cross-platform software implementation meshCurve and have made it freely available for download (footnote 2).

Recently, a research software program with similarities to meshCurve was produced at the Institute of Aerodynamics and Gas Dynamics at the University of Stuttgart. Like meshCurve, HOPR (footnote 3) is designed to convert low-order meshes to high-order and reportedly can approximate the mesh surface if CAD is completely unavailable [8, 9]. An outwardly visible difference between meshCurve and HOPR is that meshCurve has a graphical user interface, which allows users to easily apply adjustments to the computation process. Internally, the two programs use different mathematics: while HOPR utilizes Bézier curves and Lagrange interpolation, meshCurve utilizes least squares reconstruction.

Besides the spline interpolation approach of HOPR, other notable ideas for surface reconstruction include interpolation via radial basis functions (RBF) [2, 3] and Poisson reconstruction [14], where the surface points are considered to be from a 3D shell, separating an OUTSIDE from an INSIDE. These two ideas come from the world of 3D scanning, where the task is to infer 3D geometry from a point-cloud distribution. The calculations that underlie both RBF and Poisson reconstruction involve the totality of surface points, an approach that is very expensive for meshes of practical size. In contrast, the surface reconstruction approach of meshCurve, i.e., polynomial approximation via least squares, requires only a small subset of the surface points for each step of the calculation [12].

An overview of this paper is as follows: definitions in Sect. 2, three stages of the algorithm in Sect. 3, application examples and tests of performance in Sect. 4, and conclusions and future work in Sect. 5.

2 Definitions

This paper describes an algorithm for processing meshes. A mesh is a virtual tessellation of space, either 2D or 3D, for the purpose of discretizing a partial differential equation (PDE) for computer simulations. The mesh divides space into discrete elements: 3D cells, 2D faces. Cells are bordered by faces; faces are bordered by edges; edges are bordered by nodes. Figure 1 (upper) illustrates the components of a 2D mesh.

Meshes may be structured, with cells/faces arrayed in an orderly progression of Cartesian rows and columns, or meshes may be unstructured, with cells/faces arrayed haphazardly. Both structured and unstructured meshes have their benefits and disadvantages. A high-order mesh, illustrated by Fig. 1 (lower), has additional nodes embedded within its cells, faces, and edges. These extra nodes may serve as additional degrees of freedom both for the PDE solver and for the geometric representation. They allow the mesh to represent curved geometries via quadratic or higher degree polynomial interpolations.

Fig. 1
figure 1

A low-order 2D mesh versus a curved high-order 2D mesh

The outermost face or edge elements are often grouped into patches for the purpose of applying boundary conditions for the PDE solver. Very large meshes may also be subdivided into zones to specify regions of distinct material property for the PDE solver.

To describe our algorithm, we define a feature curve as a line or point of discontinuity on a mesh surface. Feature curves are assumed to correspond to curves or corners on the underlying geometry. Examples include the edge of a box or the rim of a cockpit.

3 Algorithms

3.1 Overview

The algorithm for low-order to high-order conversion, implemented by meshCurve, operates on 3D unstructured linear meshes of one or more zones. The algorithm adds both high-order nodes and surface curvature, while preserving feature curves. In a simulation, accurate curvature may not be needed on surfaces such as far-field boundaries, so our algorithm applies curvature on a per-patch basis, controllable by the user. The surface modification may cause intersections in clustered boundary layers, so as an optional step, surface changes may be propagated into the mesh, resulting in curved interior cells.

The upgrade process consists of three steps, each performed by a separate algorithm:

  1. Programmatic identification of feature curves.

  2. Least squares reconstruction between feature curves, with curvature inferred from the surrounding mesh.

  3. Propagation of the surface modification inward, bending interior cells to remove boundary layer intersections.

3.2 Critical feature detection

Autonomous feature curve detection is based on an algorithm by Jiao and Bayyana [11] for characterizing surface discontinuities on coarse meshes. Their algorithm takes a probabilistic approach to feature curve identification. It employs seven angle-based metrics, inspired by differential geometry, combined with an interconnected twenty-four level classification scheme, which first ranks edges and vertices by likelihood and then assembles candidate curves. Empirical rules reject curves that are likely spurious, leaving behind chains of edges and vertices that closely approximate the true features of the underlying geometry. The interested reader is referred to [11] for the detailed algorithm.

We have modified the Jiao and Bayyana algorithm in two ways. (1) Our low-order to high-order procedure incorporates only the \(C^{1}\) discontinuity portion of Jiao and Bayyana’s algorithm since the curves we seek already register as \(C^{1}\) discontinuities. (2) We augment autonomous detection with a manual override, which may be activated either after or in lieu of auto-detection. This modification is needed because autonomous feature detection is probability based, so there is always the chance that a curve will be misidentified, either not flagged or flagged incorrectly. Manual mode allows the user to build arbitrary feature curves by selecting individual edges along a path or to remove edges from auto-detected curves. In addition to manual mode, the user is given access to all of the threshold parameters for the auto-detection algorithm.

For later processing of the mesh, it is important to flag kinks along any manually selected feature curve to prevent an interpolation from smoothing an intentionally sharp angle. Thus, as the user builds a curve, edge by edge, the algorithm calculates turning angles between neighbors. See Fig. 2 for an illustration. If the turning angle exceeds a user-determined threshold, the connecting node is flagged as an intersection between distinct feature curves. Intersections are also identified as the cross points between manual curves and autonomous curves.
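As a minimal sketch of this check (pure Python; the function names and the radian threshold convention are ours, not meshCurve's), the turning angle at a node can be computed from the two incident edge directions:

```python
import math

def turning_angle(a, b, c):
    """Angle (radians) by which the path a->b must turn to continue along
    b->c; 0 means the path is straight, pi means a full reversal."""
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def dot(p, q): return sum(pi * qi for pi, qi in zip(p, q))
    def norm(p): return math.sqrt(dot(p, p))
    u, v = sub(b, a), sub(c, b)
    cos_t = max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v))))
    return math.acos(cos_t)

def flag_kinks(path, threshold_rad):
    """Return indices of interior path nodes whose turning angle exceeds
    the threshold; such nodes mark intersections of distinct feature
    curves and must not be smoothed by the spline interpolation."""
    return [i for i in range(1, len(path) - 1)
            if turning_angle(path[i - 1], path[i], path[i + 1]) > threshold_rad]
```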

Fig. 2
figure 2

Illustration of turning angle

Figure 3 illustrates the capabilities of the algorithm by showing the autonomously detected feature curves on a fighter jet mesh. Figures 4 and 5 show closeup views of the fighter jet mesh, illustrating the types of adjustments possible via changes in the configuration parameters and via manual override. In Fig. 4, the bold feature curve is shallow and diminishes as it crosses the surface, making the curve hard to detect and open to judgment. The feature curve detection algorithm allows flexibility according to the user’s needs. In Fig. 5a, sharp angles within the coarse mesh around the airplane’s nose have confused the autonomous feature curve detection engine. Manual override allows easy correction of the problem (Fig. 5b).

Fig. 3
figure 3

A fighter jet mesh, processed with meshCurve’s autonomous feature detection engine. The engine has correctly identified most feature curves on this complex mesh

Fig. 4
figure 4

A close view of the fighter jet wing demonstrates meshCurve’s configurable sensitivity. At higher sensitivity, the detected curve will trace the path marked by the dotted line

Fig. 5
figure 5

A closeup view of the fighter jet nose demonstrates the value of manual override

3.3 Surface reconstruction

To give curvature to the linear mesh surface, we apply separate treatment to the feature curves and to the surfaces between the curves. Feature curves longer than one edge are traced by fixed-order cubic splines parameterized by arc length. The ending conditions are interpolated or periodic as necessary. The surfaces plus any 1-edge feature curves are reconstructed with least squares polynomial fits, an algorithm based on Jiao and Wang’s Weighted Averaging of Local Fittings (WALF) procedure [12].

3.3.1 Polynomial representation

Fig. 6
figure 6

To reconstruct the curvature of the surface, a least squares polynomial fit is applied to each face and its surrounding nodes

Like WALF, our reconstruction algorithm approximates surface geometry with locally fitted 2D polynomials. Each polynomial represents a height function \(\phi (u,v)\), expressing distance above a local tangent plane, parameterized via a local uv coordinate system. The formula for this polynomial (1) comes from the truncated 2D Taylor series (2):

$$\begin{aligned} \phi (u,v) = \sum _{p=0}^d \sum _{j,k\ge 0}^{j+k=p} \left( \frac{u^jv^k}{j! k!} \right) c_{j k}, \end{aligned}$$
(1)
$$\begin{aligned} \phi (u, v)&= \, \phi (0,0) + u \left. \frac{\partial \phi }{\partial u}\right| _{(0,0)} + v \left. \frac{\partial \phi }{\partial v}\right| _{(0,0)} \nonumber \\&\quad + \frac{1}{2} \left( u^2 \left. \frac{\partial ^{2} \phi}{\partial u^{2} }\right| _{(0,0)} + 2 u v \left. \frac{\partial ^{2} \phi }{\partial u \partial v}\right| _{(0,0)} + v^{2} \left. \frac{\partial ^{2} \phi}{\partial v^2}\right| _{(0,0)} \right) \nonumber \\&\quad + \cdots \end{aligned}$$
(2)

The constants \(c_{jk}\) represent the unknown derivatives of \(\phi\), to be calculated by a least squares fit to the heights of the surrounding surface nodes. Indices j, k, and p are dummy variables, helping to collapse the infinite Taylor series (2) into a compact form. Variable d sets the degree of the polynomial representation.
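For concreteness, the basis terms of Eq. (1) can be enumerated as follows (a Python sketch under our own ordering convention; meshCurve's internal ordering is not specified in the paper):

```python
from math import factorial

def basis_terms(u, v, d):
    """Monomial terms u^j v^k / (j! k!) of Eq. (1), grouped by total
    degree p = j + k, for a degree-d height-function polynomial."""
    terms = []
    for p in range(d + 1):
        for k in range(p + 1):
            j = p - k
            terms.append(u**j * v**k / (factorial(j) * factorial(k)))
    return terms

def phi(u, v, coeffs, d):
    """Evaluate the height function for coefficients c_jk listed in the
    same order as basis_terms."""
    return sum(t * c for t, c in zip(basis_terms(u, v, d), coeffs))
```

A degree-2 polynomial thus carries six coefficients: the constant, two first derivatives, and three second derivatives of \(\phi\).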

The original WALF procedure calls for one least squares fit at every low-order surface node. These least squares fits are combined via a weighted average to predict the height, i.e., the value of \(\phi (u,v)\), for each newly added high-order node. In contrast, our algorithm performs least squares on a per-face basis, as depicted in Fig. 6. To summarize, we fit a polynomial of the form (1) to each face on the mesh surface, placing the local coordinate system at the average position of each face’s corners, oriented to align with the face’s normal vector. We use each fitted polynomial to position high-order nodes within the face. Along the edges between faces, we position high-order nodes according to the average of the polynomial with its neighbor.

3.3.2 Reconstruction stencil

Each least squares support stencil is assembled in a three-step process. First, mini-stencils are formed around each face corner, incorporating nodes from the local neighborhood. Second, the mini-stencils are pooled. Third, and optionally, the pooled stencil is augmented with additional surrounding nodes, selected via a different process. The size of the mini-stencils and the inclusion of step #3 are both controlled by the value of d from Eq. (1), i.e., by the degree of the polynomial interpolant.

Mini-stencil assembly takes its inspiration from the k-ring neighborhood concept of the WALF algorithm [12]. The mini-stencils grow outward in a series of iterations. At the first iteration, the founding node reaches out to its neighbors, incorporating them into the mini-stencil. In following iterations, each node added in the last iteration becomes active, incorporating its own neighbors.

The least squares fits must be prevented from combining portions of the surface with intentionally different slopes, so the support stencils are disallowed from crossing the feature curves detected earlier. This restriction is enforced during mini-stencil assembly and is accomplished by constraining the definition of neighbor on a per-node basis. In the explanation of the neighbor rules, we refer to nodes and edges within feature curves as critical. We refer to nodes at the intersection of two or more feature curves as sharp. Nodes at isolated corners, such as the tip of a cone, are also designated as sharp. All sharp nodes are critical, but critical nodes are not necessarily sharp.

  • For non-critical nodes, neighbors constitute all adjacent nodes.

  • For critical non-sharp nodes, neighbors constitute only the adjacent nodes that are both critical and connected by a critical edge.

  • Sharp nodes have no neighbors.

The three rules are illustrated by Fig. 7. With the rules in place, mini-stencils do not grow across feature curves but do grow along them.
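The three rules might be sketched as a single neighbor query (all names are ours; the mesh connectivity is assumed available as adjacency sets):

```python
def stencil_neighbors(node, adjacency, critical, sharp, critical_edges):
    """Neighbor rules of Sect. 3.3.2: stencils may grow along feature
    curves but never across them. adjacency maps node -> set of adjacent
    nodes; critical_edges is a set of frozenset node pairs."""
    if node in sharp:                        # rule 3: no neighbors at all
        return set()
    if node in critical:                     # rule 2: stay on the curve
        return {n for n in adjacency[node]
                if n in critical and frozenset((node, n)) in critical_edges}
    return set(adjacency[node])              # rule 1: all adjacent nodes
```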

Fig. 7
figure 7

Neighbor definitions for assembly of the stencils for least squares

Stencil size is determined dynamically from the degree of the polynomial interpolant (1) via Algorithm 1. The procedure determines both the number of mini-stencil growth iterations and whether the pooled stencil will be augmented with additional nodes. The algorithm is expressed with C++ notation, so “int(k)” means “truncate k to an integer.” Likewise, “bool(...)” means “cast the result to a TRUE/FALSE value (zero representing FALSE).”

figure a

Note that since d is an integer, \(add\_nodes\) is always TRUE, so the algorithm could be simplified. We have not simplified because in principle, alternative formulas could be used for k. In fact, the original WALF algorithm uses both \(k = (d+1)/2\) and \(k = d/2 + 1\), depending on the quality of the mesh [12]. Based on testing, we chose \(k=d+1/2\) because it results in more aggressive growth of the support stencil. To prevent the stencils from growing too large, the nodes are weighted by proximity to the central face. Weighting will be discussed in Sect. 3.3.4.

Fig. 8
figure 8

The contribution of mini-stencils (ac) to a full least squares stencil (d) for \(d=2\). The shadings within d differentiate: (1) the central face, (2) the mini-stencil faces, and (3) the half-ring faces

When \(add\_nodes\) is TRUE, the pooled stencil is expanded by including the corners of faces that border the stencil. In terms of the WALF procedure, these additional nodes correspond to the half-ring neighbors [12]. Feature curves are again taken into account to prevent growth across the curves.

Figure 8 illustrates a set of mini-stencils for \(d=2\) and the final resulting stencil, including half-ring neighbors. In each illustration, the central face is shaded and in the illustration of the final stencil (Fig. 8d), all faces are shaded with different styles to distinguish their relationship to the stencil. The blue dotted faces are within the union of the mini-stencils; the red-striped faces are within the half-ring. Algorithm 2 summarizes the full stencil assembly process.

figure b
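The sizing and assembly described above might be sketched as follows. This is our reconstruction from the text, not the published algorithm: we take \(k = d + 1/2\) literally, so int(k) = d growth iterations, and we guess that add_nodes tests the fractional remainder of k, which is consistent with the remark that add_nodes is always TRUE for integer d:

```python
def stencil_size(d):
    """Our reading of Algorithm 1: k = d + 1/2, truncated to give the
    number of growth iterations; the (always nonzero) fractional part of
    k requests the half-ring augmentation."""
    k = d + 1.0 / 2.0
    iterations = int(k)                 # "int(k)": truncate to an integer
    add_nodes = bool(k - iterations)    # TRUE whenever d is an integer
    return iterations, add_nodes

def grow_mini_stencil(seed, neighbors, iterations):
    """k-ring growth: nodes added in the last iteration become active
    and incorporate their own neighbors."""
    stencil, frontier = {seed}, {seed}
    for _ in range(iterations):
        frontier = {n for f in frontier for n in neighbors(f)} - stencil
        stencil |= frontier
    return stencil

def assemble_stencil(corners, neighbors, d, half_ring=None):
    """Algorithm 2 in outline: grow one mini-stencil per face corner,
    pool them, and optionally add the half-ring nodes."""
    iterations, add_nodes = stencil_size(d)
    pooled = set()
    for c in corners:
        pooled |= grow_mini_stencil(c, neighbors, iterations)
    if add_nodes and half_ring is not None:
        pooled |= half_ring(pooled)
    return pooled
```

The `neighbors` callable would apply the feature-curve rules of the preceding subsection; `half_ring` stands in for the separate half-ring node selection.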

3.3.3 The local coordinate system

To determine the local coordinate system for each polynomial fit, we start with the face’s normal vector, calculated from the corners via Newell’s method [20]. Newell’s method is applicable to both triangular and non-triangular faces, incorporating every corner of the face as opposed to the three corners that would be used in a cross-product calculation. The method, originating from computer graphics, is used for its robustness against perturbed coordinates, non-planar faces, collinear corners, and concave faces.

Face orientation, i.e., whether the corners are ordered counter-clockwise or clockwise, determines whether the computed vector is facing outward or inward. The direction itself does not matter to the least squares fit because the resulting tangent plane is unaffected, but the direction does need to be consistent across the mesh. If some normal vectors face outward while others face inward, problems arise when per-node weighting is applied during the least squares operation, as discussed in Sect. 3.3.4.

The \(\mathbf {u}\) and \(\mathbf {v}\) vectors are selected to form a triad with the normal vector. The calculation is accomplished by finding an arbitrary vector \(\mathbf {u}_{\text {temp}}\) that is linearly independent from the normal and then a second vector that is perpendicular to both \(\mathbf {u}_{\text {temp}}\) and the normal. The three vectors are orthonormalized via the Gram–Schmidt process with the direction of the normal held constant [21].
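The normal and triad computations can be sketched as below (NumPy; Newell's method in its standard form, with the choice of seed vector for \(\mathbf {u}_{\text {temp}}\) being our own):

```python
import numpy as np

def newell_normal(corners):
    """Face normal via Newell's method: robust for non-planar or concave
    polygons and collinear corners (corners: (n, 3) array, ordered)."""
    c = np.asarray(corners, dtype=float)
    nxt = np.roll(c, -1, axis=0)
    n = np.array([np.sum((c[:, 1] - nxt[:, 1]) * (c[:, 2] + nxt[:, 2])),
                  np.sum((c[:, 2] - nxt[:, 2]) * (c[:, 0] + nxt[:, 0])),
                  np.sum((c[:, 0] - nxt[:, 0]) * (c[:, 1] + nxt[:, 1]))])
    return n / np.linalg.norm(n)

def local_triad(normal):
    """Orthonormal (u, v, n): pick a seed vector linearly independent of
    n, Gram-Schmidt it against n (n held fixed), then complete the triad."""
    n = np.asarray(normal, dtype=float)
    t = np.array([1.0, 0.0, 0.0])
    if abs(n[0]) > 0.9:                  # avoid a nearly parallel seed
        t = np.array([0.0, 1.0, 0.0])
    u = t - np.dot(t, n) * n
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return u, v, n
```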

The weighting process, described in Sect. 3.3.4, requires point-normal vectors. These are computed from the face normals by averaging the face normals surrounding each node, weighting each face normal by the angle subtended by its parent face.

3.3.4 Oscillation mitigation

In our implementation of the algorithm, the user is provided access to parameter d, which controls the degree of the interpolating polynomial. While it may seem intuitive that a high-degree polynomial will be of higher quality than a low-degree polynomial, in practice that might not be the case. High-degree interpolations are generally more oscillatory than their low-degree counterparts, so while a high-degree interpolant may potentially produce a better match to the heights in the least squares stencil, the interpolant may also oscillate spuriously between the nodes, especially if there are flaws in the mesh such as misaligned normal vectors. The tendency to oscillate is exhibited by all types of high-degree polynomial interpolants, not just those in meshCurve. The phenomenon is well documented in the mathematics literature [6, 19] and sometimes described as “Runge’s phenomenon,” after particularly infamous examples discovered by mathematician Carl Runge [18]. In terms of the surface reconstruction, there is an upper bound on d, which is mesh dependent. By default, our algorithm sets d to two, but the user is free to increase the value to as high as six. In practice, \(d=2\), \(d=4\), and \(d=3\) usually produce the most acceptable meshes, in that order. \(d=3\) can at times be problematic because the resulting polynomial is neither convex nor concave, so the polynomial may oscillate simply due to an internal sign change.

Our algorithm includes several components to mitigate oscillation. These include (1) statistical weighting within the stencil, (2) a biased iterative matrix solver for least squares, and (3) programmatic checks for numerical instability.

The weighting process bears similarity to the WALF procedure in that both procedures account for distance from the interpolation center and for surface slope alignment. We use a modified weighting formula (3), and we also give additional weight to the corners of the central face to encourage flatness rather than oscillation.

Weighting is a two-step process. First, all sample nodes i are weighted according to

$$\begin{aligned} W_i = {\left\{ \begin{array}{ll} \dfrac{\mathbf {n}_\text {face} \cdot \mathbf {n}_\text {point} }{\lVert \mathbf {v} \rVert^{d+1} } &{}\quad \text {if} \quad \mathbf {n}_\text {face} \cdot \mathbf {n}_\text {point}> \epsilon \\ 0 &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(3)

Then the originating corner nodes receive additional weight in the amount of \(W_i = \left( 2 \, \lVert \mathbf {v} \rVert^{d+1} \right) ^{-1}\). In these formulae, \(\mathbf {n}_\text {face}\) and \(\mathbf {n}_\text {point}\) denote the face-centered and point-centered normal vectors, respectively. \(\mathbf {v}\) denotes the distance from the face centroid, where the polynomial is centered. \(\epsilon\) denotes a tolerance parameter, which we set to \(1 \times 10^{-6}\).
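The weighting formulae translate directly (NumPy sketch; function names are ours):

```python
import numpy as np

def sample_weight(n_face, n_point, v, d, eps=1e-6):
    """Eq. (3): weight a stencil node by the alignment of its point
    normal with the face normal, damped by its distance ||v|| from the
    face centroid; misaligned nodes are dropped entirely."""
    align = float(np.dot(n_face, n_point))
    if align <= eps:
        return 0.0
    return align / np.linalg.norm(v) ** (d + 1)

def corner_bonus(v, d):
    """Additional weight (2 ||v||^{d+1})^{-1} for the central face's own
    corners, encouraging flatness rather than oscillation."""
    return 1.0 / (2.0 * np.linalg.norm(v) ** (d + 1))
```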

To further reduce oscillations, our algorithm employs the iterative conjugate residual (CR) method to solve for the coefficients \(c_{jk}\). The CR iteration is initialized to zero and operated with a moderately high \(10^{-8}\) error bound to stop iteration before we reach an exact solution. The iterative procedure reduces oscillation at locations where the surface is only a few faces wide, as illustrated by Fig. 9, which shows a closeup of an airplane wingtip. Figure 9a shows the resulting surface when least squares is solved exactly, while Fig. 9b shows the result of the inexact solve. Clearly, the inexact solve preserves the flat surface at the wingtip, while the exact solve produces oscillations.

To understand why the CR iteration is effective, consider the geometry of the surface. A thin strip of surface lacks resolution in the thin direction, so the algebraic system representing the polynomial is singular. Because it is unconstrained in the thin direction, an exact solve will conform to floating-point roundoff error instead of to the intended flat surface.

The iterative solver, in effect, guides the unconstrained math to a solution that aligns, in the thin direction, with the original linear surface. In other words, when the mesh lacks resolution for a high-order surface, the solver falls back to the original low-order approximation. Another way to conceptualize this is to view the least squares process in the context of data science, as a regression problem applied to sparse data. In data science, early termination is a regularization technique that prevents overfitting.
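The regularizing effect of a loosely converged iterative solve can be demonstrated with a small conjugate residual solver (a generic CR implementation, not meshCurve's): on a nearly singular system, iterating from zero with a \(10^{-8}\) stopping tolerance leaves the unconstrained direction near zero, where an exact solve would amplify it enormously.

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-8, max_iter=200):
    """Conjugate residual iterations from x = 0, stopped at a fairly
    loose residual tolerance. On (near-)singular symmetric systems the
    unconstrained directions simply stay near zero."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    Ar, Ap = A @ r, A @ p
    rAr = r @ Ar
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:       # early termination = regularization
            break
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap
        rAr = rAr_new
    return x
```

On A = diag(1, 1e-14) with right-hand side (1, 1e-10), an exact solve returns 1e4 in the second component, pure noise amplification, while the truncated CR solve leaves it at roughly 1e-10.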

Fig. 9
figure 9

An inexact, iterative solve of least squares suppresses oscillations

The last algorithmic defense against oscillation is a series of programmatic checks for obvious errors such as (1) \(c_{jk} = \infty\), (2) \(c_{jk} = \text{NaN}\), or (3) node displacements hundreds of times greater than the mesh size. These errors can arise from numerical instabilities or suboptimal spatial distributions of the input stencil nodes, situations beyond programmatic control. When the error checks are triggered, recovery is attempted by downgrading the polynomial degree.

3.3.5 Least squares formulation

Like the original WALF procedure, we use a scaling matrix to improve the numerical properties of the least squares computation, but we correct an apparent error with the scaling matrix [12]. Our full least squares normal equation is

$$\begin{aligned} \left( {\mathsf {S}}^{T} {\mathsf {U}}^{T} {\mathsf {\Omega }} {\mathsf {U}} {\mathsf {S}} \right) \left( {\mathsf {S}}^{-1} {\mathsf {C}} \right) = {\mathsf {S}}^{T} {\mathsf {U}}^{T} {\mathsf {\Omega }} {\mathsf {\Phi }}. \end{aligned}$$
(4)

Each row of matrices \({\mathsf {U}}\), \({\mathsf {\Phi }}\), and \({\mathsf {\Omega }}\) corresponds to a single node within the least squares stencil.

  • \({\mathsf {U}}\) contains the terms \(\left( {u^j v^k}\big/{j! k!} \right)\) from polynomial (1).

  • \({\mathsf {\Phi }}\) contains the height of each stencil node, i.e., the \(\phi\) values from the polynomial (1).

  • \({\mathsf {\Omega }}\) is a diagonal weighting matrix, containing the weights from Eq. (3).

  • \({\mathsf {C}}\) is a column vector of the \(c_{jk}\) coefficients from the polynomial (1).

  • \({\mathsf {S}}\) is a scaling matrix, intended to lower the condition number of matrix \({\mathsf {U}}^{T} {\mathsf {\Omega }} {\mathsf {U}}\).

For the calculation of \({\mathsf {S}}\), let \({\mathsf {u}}_i\) represent the ith column of \({\mathsf {\Omega }}^{{1}/{2}} \, {\mathsf {U}}\). We define matrix \({\mathsf {S}}\) to be diagonal with its ith term equal to \(1 \big/ \left\lVert {\mathsf {u}}_i \right\rVert_2\).
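The scaled system can be assembled and solved as sketched below. For brevity this sketch uses a direct least squares solve of the equivalent scaled problem \(\min \lVert {\mathsf {\Omega }}^{1/2} {\mathsf {U}} {\mathsf {S}} \, y - {\mathsf {\Omega }}^{1/2} {\mathsf {\Phi }} \rVert\) where meshCurve uses the truncated CR iteration of Sect. 3.3.4 (the function name is ours):

```python
import numpy as np

def scaled_weighted_lstsq(U, phi_heights, w):
    """Solve the normal equation (4): column-scale Omega^{1/2} U so each
    column has unit 2-norm, solve for y = S^{-1} C, then unscale to C."""
    sqw = np.sqrt(w)[:, None]
    Uw = sqw * U                           # Omega^{1/2} U
    s = np.linalg.norm(Uw, axis=0)         # S is diagonal with 1/||u_i||
    s[s == 0.0] = 1.0                      # guard against empty columns
    Us = Uw / s                            # Omega^{1/2} U S
    y, *_ = np.linalg.lstsq(Us, sqw[:, 0] * phi_heights, rcond=None)
    return y / s                           # C = S y
```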

3.4 Interior deformation

Fig. 10
figure 10

Illustration of a distorted, self-intersecting cell

When the mesh surface is altered to give it curvature, the cells below the surface may distort and intersect, especially if the boundary layer mesh is clustered, as shown in Fig. 10. To remedy the problem, a mesh deforming procedure is applied, which can alter the interior of the mesh to match the surface change. The procedure was originally developed for moving mesh CFD by Luke, Collins, and Blades [15]. The algorithm treats the mesh surface as if it were a moving boundary, pushing and pulling the interior nodes. Motion is transferred directly to the interior nodes via a 3D interpolation with the following formula:

$$\begin{aligned} \mathbf {s}\left( \mathbf {r} \right) = \frac{ \sum w_i\left( \mathbf {r} \right) \mathbf {s}_i }{ \sum w_i\left( \mathbf {r} \right) }. \end{aligned}$$
(5)

In this formula, \(\mathbf {r}\) represents position, and \(\mathbf {s}\) represents displacement, so a node at position \(\mathbf {r}\) moves \(\mathbf {s}\) units, achieving a final position of \(\mathbf {r} + \mathbf {s}\). The \(\mathbf {s}_i\) vectors represent the displacements of the high-order surface nodes from the linear to the curved surface mesh, and the \(w_i\left( \mathbf {r}\right)\) coefficients represent weights, dependent on the relative distance between surface and interior. The farther surface node i is from position \(\mathbf {r}\), the smaller its weight coefficient will be. The radially decreasing weight function is defined as

$$\begin{aligned} w_i\left( \mathbf {r}\right) = \left( \frac{L_\text {def}}{\Vert \mathbf {r}-\mathbf {r}_i\Vert } \right) ^a + \left( \frac{ \alpha L_\text {def} }{ \Vert \mathbf {r}-\mathbf {r}_i\Vert } \right) ^b, \end{aligned}$$
(6)

with \(\mathbf {r}_i\) representing the position of surface node i. \(L_\text {def}\) represents a scaling length, controlling the propagation distance of the motion. \(\alpha\) is a weighting constant, balancing near-field and far-field contributions. Variables a and b are tuning parameters, controlling the overall behavior of the formula. They are set to \(a = 3\) and \(b = 5\), as recommended by [15]. The two remaining variables have been chosen on the basis of numerical experiments: \(\alpha\) is set to 0.9 and \(L_\text {def}\) to the average edge length. To save computational time, the average edge length is estimated via a uniformly distributed sample of edges within the active surface.
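Equations (5) and (6) translate directly into code (NumPy sketch, our names):

```python
import numpy as np

def deform_weights(r, surf_pts, L_def, alpha=0.9, a=3, b=5):
    """Radially decreasing weights of Eq. (6) for one interior node r."""
    dist = np.linalg.norm(surf_pts - r, axis=1)
    return (L_def / dist) ** a + (alpha * L_def / dist) ** b

def interior_displacement(r, surf_pts, surf_disp, L_def, alpha=0.9):
    """Eq. (5): the displacement of an interior node is the weighted
    average of the surface-node displacements s_i."""
    w = deform_weights(np.asarray(r, dtype=float), surf_pts, L_def, alpha)
    return (w[:, None] * surf_disp).sum(axis=0) / w.sum()
```

Midway between two equally distant surface nodes the weights balance, so the interior node receives the average of the two displacements, the thin-wall behavior illustrated in Fig. 11.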

Fig. 11
figure 11

2D illustration of formula (5), the displacement function for interior deformation

In evaluating the algorithm, it is illustrative to consider how it performs near thin curved walls, where the surface nodes pull in opposite directions. Figure 11 plots Eq. (5), illustrating the thin-wall situation in two dimensions. In the figure, black circles represent surface nodes that have been displaced from \(y=0\). The plotted lines mark the positions that interior nodes would assume after processing by the algorithm. Two different lines are plotted, illustrating the effect of different surface node positions.

As illustrated by the graph, each surface node has a global influence on the displacements, but its primary influence is on the immediate surrounding area. At the location halfway between competing surface nodes, the displacement assumes an average value.

The moving boundary algorithm cannot guarantee the removal of cell intersections, but it is effective in practice, as will be demonstrated in Sect. 4. The effectiveness is due to the fact that clustered boundary layers, where intersections are most prone to occur, have closely spaced nodes. As a result, nodes just beneath the surface get displaced almost as much as the surface itself.

4 Results

Our low-order to high-order conversion algorithm has been implemented in an application called meshCurve. It features an easy-to-use graphical user interface with 3D interactive graphics. MeshCurve runs on each of the major operating systems: Windows, Mac OS X, and Linux. It provides full support for quadratic 3D unstructured CGNS meshes, both multi-patch and multi-zone. As a demonstration of performance, three tests are presented. The first is an error analysis on a set of meshes with spherically shaped boundaries. The second is an application example from a recent laboratory CFD project. The third is a complete CFD simulation, incorporating an upgraded mesh produced by meshCurve.

4.1 Error analysis

Fig. 12
figure 12

The surface curvature gained through the high-order conversion is evident in 12b

Fig. 13
figure 13

\(L_{\infty }\) error plotted against average edge length for polynomial degrees up to four. See Table 1 for the orders of convergence

To measure performance as a function of both mesh fineness and interpolation order, meshCurve version 1.0 was used to convert a set of meshes with spherically shaped boundaries, of various cell densities, using increasingly higher degree polynomials for the surface reconstruction. The meshes were of mixed type, some composed of tetrahedral cells and others of hexahedral cells. Figure 12 shows a closeup view of one of the coarsest hexahedral meshes before and after conversion, illustrating the surface-curving effect of the procedure. For the ensuing analysis, error was defined as the straight-line distance between the newly curved surface and the true geometry.

It would be expected that with surface reconstruction based on the Taylor polynomial (2), error E would roughly equate to the Taylor remainder term, which means it would follow the proportion \(E \propto \Vert \text {edge length}\Vert ^{d+1}\), [11, 12, 19]. If so, a logarithmically scaled plot of Error versus Edge Length would manifest as a linearly decreasing sequence of points, with a slope of approximately \((d+1)\). Figure 13 shows error plots for the experiment. Axes are scaled logarithmically. Maximum absolute error (\(L_{\infty }\)) is plotted on the vertical, and average edge length is plotted on the horizontal. Lines of unique color connect the set of points for each interpolation degree. As predicted, the data visually exhibits a linear decrease.

The orders of convergence, listed in Table 1, largely follow the predicted order \((d+1)\). These numbers are calculated via formula (7), following a similar error analysis of the WALF algorithm by Jiao and Wang [12]. For the convergence order calculation, only the two densest tetrahedral meshes and the two densest hexahedral meshes were used,

$$\begin{aligned} \text {Order of convergence} = \frac{\log \left( {E_i}/{E_\text {ref}}\right) }{\log \left( {\overline{h_i}}/{\overline{h_\text {ref}}}\right) }. \end{aligned}$$
(7)

Variable \(\overline{h_i}\) represents the average edge length for mesh i, and variable \(\overline{h_{\text {ref}}}\) represents the average edge length for the coarser reference mesh. Likewise, variable \(E_i\) represents the interpolation error on mesh i, and variable \(E_{\text {ref}}\) represents the interpolation error on the reference mesh.
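Formula (7) is straightforward to evaluate between a mesh and its coarser reference. A minimal sketch, using hypothetical error and edge-length values (not the experiment's data):

```python
import math

def order_of_convergence(E_i, E_ref, h_i, h_ref):
    """Observed order per formula (7): log(E_i / E_ref) / log(h_i / h_ref)."""
    return math.log(E_i / E_ref) / math.log(h_i / h_ref)

# Hypothetical data: halving the average edge length cuts the error by 2**3,
# so a degree-2 (d = 2) reconstruction should show order d + 1 = 3.
p = order_of_convergence(E_i=1.0e-6, E_ref=8.0e-6, h_i=0.05, h_ref=0.10)
```

Because the formula uses only two meshes, the result is sensitive to which pair is chosen; this is why the calculation in the paper restricts itself to the densest meshes, where the asymptotic behavior dominates.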

Table 1 Order of convergence for the error plots of Fig. 13

Table 1 indicates superconvergence of the \(L_\infty\) error for polynomial degrees 2 and 4, evident also in Fig. 13 by comparison with the plotted theoretical lines. The superconvergence arises because the Taylor expansion of the geometric description of a sphere contains no odd-degree terms. Therefore, polynomials of degrees 2 and 4 approximate the sphere with error convergence of orders 4 and 6, respectively.
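The absence of odd-degree terms can be checked symbolically. Writing the sphere locally as a height function above the tangent plane at its apex (in one variable for brevity, \(z = \sqrt{1 - x^2}\) for a unit sphere), every odd-degree coefficient of its Taylor expansion vanishes:

```python
import sympy as sp

x = sp.symbols('x')
# Height of a unit sphere above the tangent plane at its apex.
z = sp.sqrt(1 - x**2)
# Taylor expansion about x = 0: 1 - x**2/2 - x**4/8 + O(x**6).
series = sp.series(z, x, 0, 6).removeO()
odd_coeffs = [series.coeff(x, k) for k in (1, 3, 5)]  # all zero
```

A degree-2 fit therefore captures the (zero) degree-3 term for free, and its remainder starts at degree 4; likewise a degree-4 fit has a remainder starting at degree 6, matching the observed orders.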

4.2 Application: turbulent airflow over blade

Fig. 14

Application example: this mesh, modeling a turbine blade, was converted from low-order to high-order for use in a CFD turbulence investigation

Fig. 15

Jacobian histograms from the turbine blade example. After surface curving, there are negative Jacobians, indicating the presence of self-intersecting cells. After interior deformation, there are no negative Jacobians, indicating successful removal of the self-intersecting cells

Fig. 16

Interior deformation is illustrated by the clustered boundary layer around the cylinders. Curving the surface results in self-intersecting cells, shown in Fig. 10. Deforming the interior results in the final mesh, shown in Fig. 16b

Figure 14 depicts a meshCurve-upgraded (version 1.0) mesh from a recent CFD investigation [24]. The mesh models a turbine blade preceded by a row of cylindrical bars that generate turbulence. The mesh contains a clustered boundary layer, making high-order conversion a challenge because the addition of curvature to the surface causes self-intersecting cells, invalidating the mesh. A picture of the boundary layer is shown in Fig. 16a, and one of the self-intersecting cells is shown by Fig. 10.

The self-intersecting cells are detected by meshCurve, registering as negative Jacobians, seen in the upper Jacobian histogram of Fig. 15. After the application of interior deformation, the negative Jacobians disappear, indicating the successful removal of the self-intersecting cells. A direct examination of the boundary layer (Fig. 16b) confirms that the surface curvature is successfully applied to the mesh interior, resulting in a fully high-order mesh.
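Detecting self-intersection this way amounts to sampling the sign of the determinant of the mapping Jacobian inside each cell: a nonpositive value means the map from the reference element folds over itself. meshCurve performs this check on 3D high-order cells; the following is our own minimal 2D bilinear-quad analogue for illustration, not meshCurve's code:

```python
import numpy as np

def min_jacobian_det(nodes):
    """Sample det(J) of the bilinear map at the 2x2 Gauss points.

    nodes: (4, 2) array of corner coordinates, expected counterclockwise.
    A nonpositive return value flags a tangled (self-intersecting) cell.
    """
    g = 1.0 / np.sqrt(3.0)
    dets = []
    for xi in (-g, g):
        for eta in (-g, g):
            # Derivatives of the four bilinear shape functions on [-1, 1]^2.
            dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
            dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
            J = np.vstack([dN_dxi @ nodes, dN_deta @ nodes])
            dets.append(np.linalg.det(J))
    return min(dets)

valid = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
bowtie = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # crossed edges
```

For the valid unit square, det(J) is 0.25 everywhere; for the "bowtie" ordering, the determinant changes sign across the element, which is exactly what the histogram in Fig. 15 visualizes cell by cell.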

4.3 Application: CFD simulation

Fig. 17

Mesh for the Onera M6 wing simulation

Fig. 18

Closeup view of the boundary layer of the Onera M6 mesh from Fig. 17

The goal of this test case was to verify meshCurve’s performance under a challenging, real-life condition. The mesh depicted in Fig. 17 was used in an inviscid CFD simulation of an ideal gas at Mach 0.3, using a flow solver based on the FR/CPR method. The mesh features another clustered boundary layer, visible in Fig. 18, plus a line of collapsed cells along the trailing edge of the wing, an artifact of the geometry and the way it was meshed. For the inviscid flow problem, the clustered boundary layer is clearly unnecessary, but it provides a test of the robustness of the flow solver. MeshCurve version 1.1 successfully curved the mesh.

Fig. 19

The effect of mesh curvature on the Onera M6 CFD simulation. The linear simulation diverged. The curved simulation ran to completion

Two simulations were run, one on the original linear mesh and one on the curved mesh produced by meshCurve. The simulation on the linear mesh diverged while its curved counterpart ran to completion. Results are shown in Fig. 19. Figure 19a depicts the flow solution on the linear mesh shortly before divergence. Figure 19b depicts the final flow solution on the curved mesh. Close examination of the simulation outputs indicated that within the simulation on the linear mesh, a line of high pressure developed near the leading edge of the wing, presumably due to the sharp corners of the linear mesh. The sharp cell corners also generated numerical instabilities, which caused the simulation to diverge. In contrast, the curved mesh was problem free, producing a smooth final pressure distribution. The only difference between the simulations was the mesh; all other parameter settings were identical.

5 Conclusions

5.1 Summary

We have described an algorithm for the conversion of low-order linear meshes to curved high-order meshes. By design, the algorithm does not utilize CAD geometry files. Instead, it infers the shape of the mesh from an analysis of the surface. This design requires minimal human input. Internally, the algorithm contains logic for autonomous detection of feature curves, for reconstruction of surface curvature, and for curving the interior of a mesh. This algorithm has become the basis for a cross-platform application called meshCurve, developed to address a recognized and prioritized need for high-order mesh generation. The software has been used daily in the University of Kansas CFD lab since the summer of 2015, when it was previewed at the AIAA Aviation conference [10]. Although motivated by the needs of CFD research, meshCurve has the potential for broader use as a general purpose low-order to high-order unstructured mesh converter.

MeshCurve is freely available from its website https://sites.google.com/site/meshcurvesoftware/. It has been used in universities, industry, and government organizations for a wide variety of purposes, including water modeling, biophysics research, medical research, file conversion, visualization, and as a pre-processing tool for commercial mesh generators.

5.2 Future work

The primary issue for future work is the determination of the optimal least squares stencil and of the optimal polynomial degree, given the constraints of the mesh. Best performance is achieved when (1) surface curvature is small within the stencil width, and (2) the stencil provides \(360^\circ\) coverage with similar spatial extent and node density in all directions. But these conditions are not always met. For example, an anisotropic mesh will produce long and thin stencils, limiting resolution in the short direction. The lack-of-resolution situation can also arise with very coarse meshes that have high curvature. In this case, the curvature is high and the face density low, so stencils may wrap around tight bends, incorporating faces from the reverse side of the mesh. The backward faces will have normal vectors that point opposite to the local coordinate system, so the weighting function (3) will exclude them. Although the computation proceeds, the resolution is lacking in the direction of the bend. One possible solution to the lack-of-resolution problem could be to augment the least squares stencil with low-weighted points from inside the faces. This procedure might work because the faces themselves are an approximation of the mesh surface.
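The backward-face exclusion described above reduces to a sign test on normals. The paper's weighting function (3) appears earlier and is not reproduced here; as a purely illustrative stand-in (a hypothetical weight, not formula (3)), a weight that vanishes for faces whose normals oppose the local coordinate system's normal axis could look like:

```python
import numpy as np

def stencil_weight(face_normal, local_normal):
    """Illustrative stand-in for a normal-alignment weight (NOT the paper's
    formula (3)): faces whose unit normal opposes the local normal axis
    (nonpositive dot product) receive zero weight and drop out of the fit."""
    alignment = float(np.dot(face_normal, local_normal))
    return max(alignment, 0.0)

n_local = np.array([0.0, 0.0, 1.0])
w_front = stencil_weight(np.array([0.1, 0.0, 0.99]), n_local)   # kept in stencil
w_back = stencil_weight(np.array([0.0, 0.1, -0.99]), n_local)   # excluded (weight 0)
```

The exclusion keeps the least squares system well posed when a stencil wraps around a tight bend, at the cost of the directional resolution noted above.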

Another area of possible improvement and investigation is the least squares solution process. We have observed that the solved-for polynomial is sensitive to the origin of the local coordinate system. That is, if the local coordinate system is positioned at a face edge rather than the face center, the computed polynomial will differ although it will still pass through the nodes in the stencil. Because more than one polynomial can be matched to the surface, it would seem that the algebraic system is not as constrained as it could be. It may be worthwhile to incorporate additional information from the mesh, perhaps relating to the gradient of the surface or the direction of the normal vectors, assuming the normal vectors can be determined accurately. It may also be worthwhile to reformulate the polynomial in terms of orthogonal functions. With orthogonal functions, it should be possible to obtain values of the coefficients that will not vary as the polynomial degree is increased. Constant coefficients would likely produce a more stable calculation.
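The stability claim for orthogonal bases can be demonstrated in one dimension: if the monomial columns of the design matrix are orthonormalized sequentially (Gram–Schmidt) at fixed sample points, the least squares coefficient of each low-degree basis function is a fixed projection of the data and does not change when the maximum degree is raised. A minimal sketch under these assumptions (our own illustration, not meshCurve's solver):

```python
import numpy as np

def orthonormal_ls_coeffs(x, y, degree):
    """Least squares coefficients in a sequentially orthonormalized
    polynomial basis (classical Gram-Schmidt on the monomial columns)."""
    V = np.vander(x, degree + 1, increasing=True)  # columns 1, x, x**2, ...
    Q = np.zeros_like(V)
    for j in range(V.shape[1]):
        q = V[:, j].copy()
        for k in range(j):
            # Remove the component along each previously built basis vector.
            q -= (Q[:, k] @ V[:, j]) * Q[:, k]
        Q[:, j] = q / np.linalg.norm(q)
    # With orthonormal columns, the normal equations are diagonal,
    # so each coefficient is an independent projection of the data.
    return Q.T @ y

x = np.linspace(-1.0, 1.0, 20)
y = np.exp(x)
c3 = orthonormal_ls_coeffs(x, y, 3)
c5 = orthonormal_ls_coeffs(x, y, 5)
# The first four coefficients of c5 coincide with c3: raising the
# degree leaves the low-degree coefficients unchanged.
```

Because Gram–Schmidt processes columns left to right, appending higher-degree columns cannot alter the earlier basis vectors, which is the stability property suggested above.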