1 Introduction

Many tilings of d-dimensional space have been introduced; see for example the thorough summary of results on tilings by congruent simplices in [7]. It has also been shown that the unit d-dimensional cube can be decomposed into d! simplices defined by

$$\begin{aligned} S_\pi = \left\{ x\in {\mathbb {R}}^d; 0 \le x_{\pi (1)} \le \cdots \le x_{\pi (d)} \le 1 \right\} , \quad \pi \in \Pi _d, \end{aligned}$$
(1.1)

where \(\Pi _d\) is the set of all permutations of numbers \(1, \dots , d\). Moreover, these simplices have the same volume, \({\mathrm {meas}}_d S_\pi = (d!)^{-1}\). See Kuhn’s original paper [16] or the papers of Brandts et al. [2, 4].
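Though not needed in what follows, this statement is easy to check by machine. A small Python sketch of ours (the dimension d = 4 is an arbitrary choice) builds the vertices of every Kuhn simplex as partial sums of unit vectors and confirms the common volume \((d!)^{-1}\):

```python
from itertools import permutations
from math import factorial, isclose
import numpy as np

d = 4                                          # arbitrary dimension for the check
for pi in permutations(range(d)):              # pi is 0-indexed here
    # vertices of S_pi: v_0 = 0 and v_k = e_{pi(d)} + ... + e_{pi(d-k+1)}
    verts = np.zeros((d + 1, d))
    for k in range(1, d + 1):
        verts[k] = verts[k - 1]
        verts[k, pi[d - k]] = 1.0
    vol = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(d)
    assert isclose(vol, 1.0 / factorial(d))    # all d! simplices have volume 1/d!
```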

But not all partitions of the space need to use congruent simplices. When a simplicial partition of a general polyhedral domain satisfies the so-called face-to-face property, it can be used effectively as a computational mesh for various numerical methods. A technique for generating such meshes can be found in [15].

A majority of today’s computations take place in two or three spatial dimensions, while those in higher dimensions still occur rather rarely. However, some elliptic problems are treated in higher dimensions; see e.g. [22] for such an example emanating from stochastic analysis. In addition, for problems represented by evolutionary partial differential equations of hyperbolic type in three spatial dimensions, one can treat time as a fourth variable and use a mesh in four-dimensional space, see e.g. the practical examples [11, 17].

In this paper we introduce a method for creating a d-parametric family of tilings. Despite the available set of parameters, subsets of these tilings create only very rigid meshes. However, some theoretical results suggest that for numerical methods to be convergent, the numerical domain and the target domain do not necessarily have to coincide, and that is where our meshes might find their use. Two different approaches can be found in the works of Feireisl et al. [8, 9] and of Angot et al. [1, 14].

Our result is strongly based on the (almost 100-year-old) construction developed by Sommerville, which uses an equilateral triangle as a base for building a one-parametric family of tetrahedral elements that tile three-dimensional space, see [10, 12] or Sommerville’s original article [24]. Our tilings do not consist of congruent simplices and they do not cover d-dimensional cubes, thus they are clearly distinct from those introduced in [7, 16].

We start the construction from one-dimensional simplices, i.e. segments, and increase the dimension repeatedly to build a d-parametric family of simplicial tessellations of d-dimensional space. Its existence is stated in Theorem 2.1, whose proof covers Sect. 2. Then, in Sect. 3 we determine the shape-optimizing vector of parameters, with the result summarized in Theorem 3.2. Section 4 presents some concluding remarks and open questions.

2 Construction of the Tessellation

We start by stating the existence result in the first of the central theorems of this article.

Theorem 2.1

For any d-dimensional space there exists a d-parametric family of simplicial tessellations \(\mathcal {T}_d(\mathbf {p})\), \(\mathbf {p} = (p_1, p_2, \dots , p_d), p_i \ne 0, i = 1, \dots , d\). For \(\mathbf {p}\) fixed, all elements \(K \in \mathcal {T}_d({\mathbf {p}})\) have the same d-dimensional measure equal to

$$\begin{aligned} {\mathrm {meas}}_d K = \prod \limits _{i=1}^d |p_i|. \end{aligned}$$
(2.1)

Moreover, every connected compact subset of the tessellation builds a face-to-face mesh.

Fig. 1

Illustration of Sommerville’s original construction creating a three-dimensional face-to-face mesh above an equilateral triangular mesh. For the sake of clarity, only the elements \(K^{0,0}_2\) and \(L^{0,0,0}_3, L^{0,0,1}_3, L^{0,0,2}_3\) are shown

We introduce Sommerville’s original construction (see [10] or [24]), which creates a tessellation of an infinite triangular prism over an equilateral triangle (which tessellates the two-dimensional space). In the construction, new vertices are created above (and below) the three vertices of the triangle at the heights \(\dots ,0, 3p, 6p, \dots \); \(\dots , p, 4p, 7p, \dots \); and \(\dots , 2p, 5p, \dots \), respectively, with a positive parameter p. Ordering these vertices with respect to their height (i.e. third component), tetrahedra are defined as convex hulls of four consecutive vertices. A sketch of this construction is given in Fig. 1, with the notation given in the upcoming Lemma 2.2, which is the key ingredient of Theorem 2.1.
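The construction is easy to reproduce in a few lines of code. The following Python sketch of ours (the value of p is chosen arbitrarily) lifts the vertices of an equilateral triangle to the heights jp and checks that every tetrahedron of the tower has volume \(p\) times the area of the base triangle, in agreement with the computation (2.8) below:

```python
from math import factorial, isclose, sqrt
import numpy as np

p = 0.7                                        # the free positive parameter
A = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.5, sqrt(3) / 2]])             # an equilateral triangle
area = sqrt(3) / 4                             # its 2-measure

def B(j):
    """Vertex above A_{j mod 3} at height j*p."""
    return np.array([A[j % 3][0], A[j % 3][1], j * p])

for z in range(6):                             # a few tetrahedra of the tower
    verts = np.array([B(z + i) for i in range(4)])
    vol = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(3)
    assert isclose(vol, p * area)              # meas_3 = |p| * meas_2, cf. (2.8)
```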

Lemma 2.2

(Induction step) Let \(d\ge 2\) and \(\mathcal {T}_{d-1} = \{ K^k_{d-1}\}_{k \in {\mathbb {Z}}^{d-1}}\) be a simplicial tessellation of \((d-1)\)-dimensional space such that the graph constructed from vertices and edges of \(\mathcal {T}_{d-1}\) is a d-vertex-colorable graph. Then

  • there exists a simplicial tessellation \(\mathcal {T}_d = \{ L_d^l\}_{l \in {\mathbb {Z}}^d}\) of d-dimensional space with an additional shape parameter \(p_d\),

  • any connected compact subset of \(\mathcal {T}_d\) is a face-to-face mesh,

  • the graph constructed from vertices and edges of \(\mathcal {T}_d\) is \((d+1)\)-vertex-colorable.

Proof

Take an element \(K^k_{d-1} \in \mathcal {T}_{d-1}\), \(K^k_{d-1} = \mathrm {co}\,\{A_0, A_1, \dots, A_{d-1}\}\). Thanks to the d-vertex-colorability we can assume that the labels of the vertices represent their colors. Let \(A_i = [A_{i,1}, A_{i,2}, \dots, A_{i,d-1}]\) be the coordinates of \(A_i\) in \((d-1)\)-dimensional space.

We define the following points in d-dimensional space:

$$\begin{aligned} B_{j} = [A_{i(j),1}, A_{i(j),2}, \dots , A_{i(j),d-1}, jp_d], \quad j \in {\mathbb {Z}}, \end{aligned}$$

where \(i(j) \equiv j \mod d\) and \(p_d \ne 0\) is a parameter. Denote

$$\begin{aligned} L^{k,z}_d = \mathrm {co}\,\{B_{z}, B_{z+1}, \dots , B_{z+d}\}, \end{aligned}$$
(2.2)

the d-simplex as a convex hull of \(d+1\) consecutive vertices. Then \(\{L^{k,z}_d\}_{z \in {\mathbb {Z}}}\) is a tessellation of an infinite d-dimensional prism with the cross-section \(K^k_{d-1}\), see Figs. 1 and 2 for illustration. As \(\mathcal {T}_{d-1} = \{K^k_{d-1}\}_{k \in {\mathbb {Z}}^{d-1}}\) is a tessellation of \((d-1)\)-dimensional space, the set \(\mathcal {T}_d := \{L^{k,z}_d\}_{(k,z) \in {\mathbb {Z}}^{d-1}\times {\mathbb {Z}}}\) forms a tessellation of d-dimensional space.

The construction uses the colors from the previous tessellation. Thus, it is ensured that from any vertex \(A_j\) that is shared by several simplices in \(\mathcal {T}_{d-1}\), we create new vertices \(V_{z}\) of only one type, with the last coordinate satisfying

$$\begin{aligned} \frac{1}{p_d}\, V_{z,d} \equiv c_{d-1}(A_j) = j \mod d. \end{aligned}$$
(2.3)

This implies the face-to-face property, i.e. the facet of a simplex in tessellation \(\mathcal {T}_d\) is a facet of another simplex.

Finally, we define the new coloring with

$$\begin{aligned} c_d(B_j) \equiv j \mod (d+1) \quad \text{ for }~ B_j = [A_{i(j)},jp_d]. \end{aligned}$$
(2.4)

Such a mapping is a vertex coloring: edges of the graph occur only inside simplices, the vertices of any simplex \(L^{k,z}_d\) have pairwise different last components, and the “height” difference of two vertices connected by an edge does not exceed \(d|p_d|\), so the indices of two adjacent vertices differ by at most d and their colors modulo \(d+1\) cannot coincide. \(\square \)
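The induction step admits a direct implementation. The following Python sketch is our own illustration (the function name lift_tower is ours): it lifts a colored \((d-1)\)-simplex to the tower of d-simplices (2.2) and assigns the new colors by (2.4).

```python
import numpy as np

def lift_tower(simplex, colors, p_d, z_range):
    """simplex: (d, d-1) array of vertex coordinates of a (d-1)-simplex,
    colors: its vertex colors (a permutation of 0, ..., d-1).
    Returns pairs (vertices, colors) for the d-simplices L^z of (2.2)."""
    d = simplex.shape[0]                     # the new spatial dimension
    by_color = {c: simplex[i] for i, c in enumerate(colors)}

    def B(j):                                # lifted vertex and its new color (2.4)
        return np.append(by_color[j % d], j * p_d), j % (d + 1)

    towers = []
    for z in z_range:                        # L^z = co{B_z, ..., B_{z+d}}
        pts, cols = zip(*(B(z + i) for i in range(d + 1)))
        towers.append((np.array(pts), cols))
    return towers
```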

Fig. 2

Illustration of the creation of a simplicial face-to-face mesh of two-dimensional space from the one-dimensional one, with the parameters \(p_1 = 1, p_2 = {1}/{2}\). The simplices \(K^{2}_1\) and \(L^{2,1}_2\) are marked in bold to clarify the notation defined by (2.2). For general values of the parameters there are two candidates for the diameter of \(L^{k,z}_2\), equal to \(\sqrt{p_1^2 + p_2^2}\) and \(2|p_2|\). Notice also the vertex coloring, assigned through (2.4)

The argument deriving the face-to-face property from a vertex coloring of a graph was already used in [12]. Lemma 2.2 supplies the induction step; to complete the proof of Theorem 2.1, we supply the initial step.

Proof of Theorem 2.1

A one-dimensional Euclidean space (a line) can be divided into intervals of length \(|p_1|\). The color of a border point \(A_z \in \{zp_1\}_{z \in {\mathbb {Z}}}\) is given by

$$\begin{aligned} c_1(A_z) \equiv z \mod 2. \end{aligned}$$

The assumptions of Lemma 2.2 are satisfied, hence we have both the initial step and the induction step. In every application of Lemma 2.2 we use the coloring generated by its previous application, which finishes the proof. The equivolumetric property is proved in Proposition 2.4. \(\square \)
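Continuing the sketch above, the whole recursive construction can be traced numerically: starting from the interval \([0, p_1]\) with the colors z mod 2 and applying the induction step once per dimension, the measure of the resulting element matches (2.1). The parameter values below are chosen arbitrarily; lift_tower is the function from the previous sketch.

```python
import numpy as np
from math import factorial, isclose

p = [1.0, 0.5, 0.8, 1.3]                    # an arbitrary parameter vector
simplex = np.array([[0.0], [p[0]]])         # K^0_1 = [0, p_1]
colors = (0, 1)                             # c_1(A_z) = z mod 2
for p_d in p[1:]:                           # one use of Lemma 2.2 per dimension
    simplex, colors = lift_tower(simplex, colors, p_d, range(1))[0]

vol = abs(np.linalg.det(simplex[1:] - simplex[0])) / factorial(len(p))
assert isclose(vol, np.prod(np.abs(p)))     # the equivolumetric property (2.1)
```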

Here a thoughtful reader might wonder why we stressed that each step of the construction uses the coloring produced by the previous application of the induction lemma. Clearly, at every step the original coloring \(c_j\) can be modified by any \(\pi _{j+1}\), a permutation of the numbers \(\{0,\dots , j\}\). As a consequence, we may state the following.

Theorem 2.3

For any \({\mathbf {p}}= (p_1, \dots , p_d)\), \(p_i \ne 0\), \(i = 1, \dots , d\), and any vector \({\varvec{\pi }}= (\pi _2, \dots , \pi _d)\), where \(\pi _i \in \Pi _i\) is a permutation of the numbers \(\{0, \dots , i-1\}\), there exists a tessellation \(\mathcal {T}_d({\mathbf {p}}, {\varvec{\pi }})\) of d-dimensional Euclidean space. For \({\mathbf {p}}\) fixed, all elements \(K \in \mathcal {T}_d({\mathbf {p}}, {\varvec{\pi }})\) have the same d-dimensional measure equal to

$$\begin{aligned} {\mathrm {meas}}_d K = \prod \limits _{i=1}^d |p_i|. \end{aligned}$$
(2.5)

Moreover, every connected compact subset of the tessellation builds a face-to-face mesh.

Clearly, for a vector of identical permutations we get the original tessellation from Theorem 2.1, i.e. \(\mathcal {T}_d({\mathbf {p}}, (\mathrm{Id}, \dots , \mathrm{Id})) = \mathcal {T}_d({\mathbf {p}})\).

In general, the created simplices are not identical. However, the following proposition shows that all elements of the tessellation \(\mathcal {T}_d({\mathbf {p}})\) have the same volume, i.e. the d-dimensional measure.

Proposition 2.4

(Equal volume of the elements) Let \(\mathcal {T}_d({\mathbf {p}},{\varvec{\pi }})\) be a tessellation constructed by the procedure introduced in the proof of Lemma 2.2, with parameter vector \(\mathbf {p} = (p_1,p_2, \dots , p_d)\) and vector of permutations \({\varvec{\pi }}= (\pi _2, \dots , \pi _d)\). Then for every simplex \(L \in \mathcal {T}_d({\mathbf {p}},{\varvec{\pi }})\) we have

$$\begin{aligned} {\mathrm {meas}}_d L = \prod \limits _{i=1}^d |p_i|. \end{aligned}$$
(2.6)

Proof

In one-dimensional space the situation is obvious; the points \(zp_1\), \(z\in {\mathbb {Z}}\), divide the line into segments of the same length \(|p_1 |\). We prove the induction step. Let us assume that there exists \(M_{d-1}>0\) such that \({\mathrm {meas}}_{d-1} K = M_{d-1}\) for every \(K \in \mathcal {T}_{d-1}\).

According to the construction, an element \(L \in \mathcal {T}_d\) is determined by the points

$$\begin{aligned} B_z = [A_0, zp_d], \quad B_{z+1} = [A_1, (z+1)p_d], \quad \dots, \quad B_{z+d-1} = [A_{d-1}, (z+d-1)p_d], \quad B_{z+d} = [A_0, (z+d)p_d], \end{aligned}$$
(2.7)

where \(\mathrm {co}\,\{A_0, A_1 ,\ldots , A_{d-1}\} = K \in \mathcal {T}_{d-1}\).

The d-dimensional measure of a simplex is determined by the determinant of the matrix composed of the vectors that span the simplex, more precisely by the \((d!)^{-1}\) multiple of its absolute value. Using (2.7) and performing operations that do not affect the value of the determinant, we obtain

$$\begin{aligned} {\mathrm {meas}}_d L = \frac{1}{d!} \left| \det \begin{pmatrix} A_1-A_0 & p_d \\ A_2-A_0 & 2p_d \\ \vdots & \vdots \\ A_{d-1}-A_0 & (d-1)p_d \\ 0 & dp_d \end{pmatrix} \right| = \frac{d|p_d| }{d!} \left| \det \begin{pmatrix} A_1-A_0 \\ A_2-A_0 \\ \vdots \\ A_{d-1}-A_0 \end{pmatrix}\right| = |p_d| \cdot {\mathrm {meas}}_{d-1} K. \end{aligned}$$
(2.8)

The proof is concluded by repeated use of (2.8) up to \(d=1\), which yields (2.6). \(\square \)

Remark 2.5

In what follows, we consider only positive values of \(p_i\); shortly, we write \({\mathbf {p}}\in {\mathbb {R}}^d_+\), where \({\mathbb {R}}^d_+ = \{\mathbf {v}=(v_1, \dots , v_d) \in {\mathbb {R}}^d; v_i > 0 \text { for all } i \in \{ 1,\dots , d\} \}\). This is rather a technical constraint; in fact one could allow \(p_i \in {\mathbb {R}}\setminus \{0\}\). However, negative parameters affect only the orientation of the elements, not their shape characteristics. Therefore, for the regularity optimization we can restrict ourselves to \({\mathbf {p}}\in {\mathbb {R}}^d_+\), which also simplifies the process. One should bear in mind that if \({\mathbf {p}}^\star = (p_1^\star , \dots , p_d^\star )\) is a vector of shape-optimal parameters, then \((\delta _1 p_1^\star , \dots , \delta _d p_d^\star )\) is also shape optimal for any choice of signs \(\delta _j = \pm 1\).

3 Regularity Optimization

We have constructed a d-parametric family of tessellations of d-dimensional space, where the values of the parameters \(p_i\), \(i=1, \dots , d\), influence their shape. We look for a vector of parameters \({\mathbf {p}}^\star = (p^\star _{1}, \dots , p^\star _{d})\) for which the simplicial elements are shape optimal. There are several regularity ratios with respect to which we might optimize. Some of them have been shown to be equivalent in the sense of strong regularity even in general dimension, see [3], but not in the sense of their maximization.

For convenient calculation we use the following ratio:

$$\begin{aligned} \vartheta (K) = \frac{\mathrm {meas}_d K}{(\mathrm {diam~} K)^d},\qquad d \ge 2, \end{aligned}$$
(3.1)

where \(\mathrm {meas}_d\) is the d-dimensional Lebesgue measure and \({{\mathrm {diam}}~}K\) is the maximal distance between two points of K. The ratio \(\vartheta (K)\) can be interpreted as a measure of similarity of K to the equilateral simplex. In other words, we seek \({\mathbf {p}}^\star \) and \(K^\star \) which realize

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{K \in \mathcal {T}_d({\mathbf {p}})} \vartheta (K). \end{aligned}$$
(3.2)

As the simplices in \(\mathcal {T}_d({\mathbf {p}})\) are not identical, the optimization focuses on the worst simplex only. Since we proved in Proposition 2.4 that all elements of \(\mathcal {T}_d({\mathbf {p}})\) have the same d-measure, this worst case in the sense of (3.1) occurs for the element with the maximal diameter.
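For later experiments it is handy to have the ratio (3.1) available as code. The following Python sketch of ours computes \(\vartheta (K)\) from the vertices of a simplex, using the fact that the diameter of a simplex is realized by its longest edge:

```python
from itertools import combinations
from math import factorial
import numpy as np

def theta(verts):
    """Regularity ratio (3.1); verts is a (d+1, d) array of simplex vertices."""
    verts = np.asarray(verts, dtype=float)
    d = verts.shape[1]
    meas = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(d)
    diam = max(np.linalg.norm(u - v) for u, v in combinations(verts, 2))
    return meas / diam ** d
```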

3.1 Difficulties with the Optimization

One can check that Sommerville’s construction enables us to rewrite (3.2) using (2.6) as

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{\mathbf{w }\in \widetilde{W_d}} \frac{\prod _{i=1}^d p_i}{\bigl (\sum _{i=1}^d w_i^2 p_i^2\bigr )^{{d}/{2}}}, \end{aligned}$$
(3.3)

where \(\widetilde{W_d}\) is a subset of the set \(\widehat{W_d}\) defined by

$$\begin{aligned} \widehat{W_d} := \left\{ \mathbf{w}\in ({\mathbb {N}}\cup \{0\})^d \,\middle |\, \exists \, k \in \{1,\dots ,d\}: \begin{array}{ll} w_k = k, & \\ w_i = 0 &\quad \text{ for }~ 1 \le i< k, \\ w_j \in \{1, \dots , j-1\} &\quad \text{ for }~ k < j \le d \end{array}\right\} . \end{aligned}$$
(3.4)

For example, \(\widetilde{W_3} = \widehat{W_3} = \{(0,0,3), (0,2,1), (0,2,2), (1,1,2), (1,1,1) \}\) and \(\widetilde{W_2} = \widehat{W_2} = \{(0,2),(1,1)\}\). However, in general \(\widetilde{W_d} \ne \widehat{W_d}\), as the following lemma shows.

Lemma 3.1

For \(d=4\), the vector \((1,1,2,3) \in \widehat{W_4} \setminus \widetilde{W_4}\). In other words, there is no element \(K \in \mathcal {T}_{4}({\mathbf {p}})\) such that \(\pm p_1 {\mathbf {e}}_1 \pm p_2 {\mathbf {e}}_2 \pm 2 p_3 {\mathbf {e}}_3 \pm 3 p_4 {\mathbf {e}}_4 \) in any combination of the signs is an edge of K.

Proof

Let \({\mathbf {p}}' = (p_1, p_2, p_3)\) and \({\mathbf {p}}= ({\mathbf {p}}', p_4)\). Clearly there exists \(L^{z_1, z_2, z_3}_3 \in \mathcal {T}_3({\mathbf {p}}')\) such that \(\overrightarrow{UV} = \pm p_1 {\mathbf {e}}_1 \pm p_2 {\mathbf {e}}_2 \pm 2 p_3 {\mathbf {e}}_3\) (in some combination of the signs) is an edge of \(L^{z_1,z_2,z_3}_3\). (One such element is \(L^{0,0,0}_3\), see Fig. 1, for which \(U = B_3\), \(V = B_1\).)

Then the vertices U, V necessarily have height difference \(2 p_3\), i.e. \(c_3(U) - c_3(V) \equiv 2 \mod 4\), and the vertices created above U and V that belong to any 4-simplex \(K_4^{z_1, z_2, z_3, z_4} \in \mathcal {T}_4({\mathbf {p}}',p_4)\) have difference vector \(\pm p_1 {\mathbf {e}}_1 \pm p_2 {\mathbf {e}}_2 \pm 2p_3 {\mathbf {e}}_3 \pm 2p_4{\mathbf {e}}_4\). As each step of the construction affects only the last component of the vector \(\mathbf{w }\), we conclude that \(\mathbf{w }\not \in \widetilde{W_4}\). \(\square \)

The fact that \(\widetilde{W_d} \ne \widehat{W_d}\), together with the problematic determination of their difference, makes the optimization severely difficult. However, as the proof of the above lemma suggests, the difficulties are caused by inheriting the coloring from the preceding step of the construction. Removing this constraint by allowing recoloring before every step of the construction (as in Theorem 2.3), we find that

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+ } \min _{ \begin{array}{c} {\varvec{\pi }}\in \Pi _2 \times \cdots \times \Pi _d \\ K \in \mathcal {T}_d({\mathbf {p}},{\varvec{\pi }}) \end{array}} \vartheta (K) \end{aligned}$$
(3.5)

is equivalent to

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{\mathbf{w }\in \widehat{W_d}} \frac{\prod _{i=1}^d p_i}{\bigl (\sum _{i=1}^d w_i^2 p_i^2\bigr )^{{d}/{2}}}, \end{aligned}$$
(3.6)

and also to

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{\mathbf{w }\in W_d} \frac{\prod _{i=1}^d p_i}{\bigl (\sum _{i=1}^d w_i^2 p_i^2\bigr )^{{d}/{2}}}, \end{aligned}$$
(3.7)

where \(W_d\) is defined by

$$\begin{aligned} W_d := \left\{ \mathbf{w}\in ({\mathbb {N}}\cup \{0\})^d \,\middle |\, \exists \, k \in \{1,\dots ,d\}: \begin{array}{ll} w_k = k, & \\ w_i = 0 &\quad \text{ for }~ 1 \le i< k, \\ w_j = j-1 &\quad \text{ for }~ k < j \le d \end{array}\right\} . \end{aligned}$$
(3.8)

The equivalence of the optimization problems (3.6) and (3.7) is based on the fact that \((w_1, \dots , w_d) \mapsto \sum _{i=1}^d w_i^2 p_i^2\) is increasing in each component and that every element of \(\widehat{W_d}\) is dominated componentwise by an element of \(W_d \subseteq \widehat{W_d}\).

For example, for \(d=3\) we have \(W_3 = \{ (1,1,2),(0,2,2),(0,0,3)\}\).
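Both sets are straightforward to enumerate. The following Python sketch of ours generates \(\widehat{W_d}\) from (3.4) and \(W_d\) from (3.8) and confirms the examples above:

```python
from itertools import product

def W_hat(d):
    """All weight vectors admitted by (3.4)."""
    vecs = set()
    for k in range(1, d + 1):
        for tail in product(*(range(1, j) for j in range(k + 1, d + 1))):
            vecs.add((0,) * (k - 1) + (k,) + tail)
    return vecs

def W(d):
    """The d dominating weight vectors of (3.8)."""
    return {(0,) * (k - 1) + (k,) + tuple(j - 1 for j in range(k + 1, d + 1))
            for k in range(1, d + 1)}

assert W_hat(3) == {(0, 0, 3), (0, 2, 1), (0, 2, 2), (1, 1, 2), (1, 1, 1)}
assert W(3) == {(1, 1, 2), (0, 2, 2), (0, 0, 3)}
```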

Since \(|W_d| =d\), we can label its elements as \(\mathbf{w }_j = (w_{j,1}, w_{j,2}, \dots , w_{j,d})\), where j is the position of the first nonzero coordinate. We also define

$$\begin{aligned} D_j({\mathbf {p}}) = \sqrt{\sum _{i=1}^d w_{j,i}^2 p_i^2} \qquad \text{ and } \qquad D({\mathbf {p}}) = \max _{j \in \{1, \dots ,d\}} D_j({\mathbf {p}}), \end{aligned}$$
(3.9)

so that (3.7) can be rewritten as

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{k \in \{1,\dots , d\}} \frac{\prod _{i=1}^d p_i}{D_k({\mathbf {p}})^d}. \end{aligned}$$
(3.10)

For illustration, we write out the “worst diameter candidates” \(D_j\) explicitly:

$$\begin{aligned} D_1({\mathbf {p}}) &= \sqrt{p_1^2 + p_2^2 + 4p_3^2 + 9p_4^2 + \cdots + (d-1)^2 p_d^2}, \\ D_2({\mathbf {p}}) &= \sqrt{4p_2^2 + 4p_3^2 + 9p_4^2 + \cdots + (d-1)^2 p_d^2}, \\ &\ \ \vdots \\ D_j({\mathbf {p}}) &= \sqrt{j^2 p_j^2 + j^2 p_{j+1}^2 + (j+1)^2 p_{j+2}^2 + \cdots + (d-1)^2 p_d^2}, \\ &\ \ \vdots \\ D_d({\mathbf {p}}) &= d\, p_d. \end{aligned}$$
(3.11)
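These candidates translate directly into code. The following Python sketch of ours evaluates \(D_j({\mathbf {p}})\) from (3.9) and the worst-case objective of (3.10):

```python
import numpy as np

def D(j, p):
    """Diameter candidate D_j(p) of (3.9); p is indexed from 1 in the text."""
    d = len(p)
    w = [0] * (j - 1) + [j] + [i - 1 for i in range(j + 1, d + 1)]
    return np.sqrt(sum((wi * pi) ** 2 for wi, pi in zip(w, p)))

def objective(p):
    """The inner minimum of (3.10)."""
    d = len(p)
    return min(np.prod(p) / D(k, p) ** d for k in range(1, d + 1))
```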

3.2 Optimal Parameters

Now we can state the central theorem of this section.

Theorem 3.2

(Optimal parameters) Let \(d\ge 2\) and let \(\mathcal {T}_d({\mathbf {p}},{\varvec{\pi }})\) be a tessellation constructed through the procedure in the proof of Theorem 2.3. Then there exists a unique one-dimensional vector half-space

$$\begin{aligned} \begin{aligned} P^\star =&\Bigg \{{{\mathbf {p}}}_{\kappa }^{\star } \in {\mathbb {R}}_{+}^{d}\,\Bigg |\, {\mathbf {p}}_{\kappa }^{\star } = \kappa {\mathbf {p}}^{\star },\, \kappa > 0,\, {\mathbf {p}}^{\star } = (p_{1}^{\star }, \dots , p_{d}^{\star }), \\&\quad {p}_{1}^{\star } = 1, \,{p}_2^{\star } = \frac{1}{\sqrt{3}},\, p_{j}^{\star } = \frac{1}{j-1} \sqrt{\frac{2}{3}}, \,j \in \{3, \dots , d\} \Bigg \}, \end{aligned} \end{aligned}$$
(3.12)

of optimal parameters that realize

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{\begin{array}{c} {\varvec{\pi }}\in \Pi _2 \times \cdots \times \Pi _d \\ K \in \mathcal {T}_d({\mathbf {p}},{\varvec{\pi }}) \end{array}} \frac{{\mathrm {meas}}_d K}{({{\mathrm {diam}}~}K)^d}, \end{aligned}$$
(3.13)

for some \({\varvec{\pi }}\in \Pi _2 \times \cdots \times \Pi _d\).

Remark 3.3

Notice that we do not care much about the ideal vector of permutations \({\varvec{\pi }}\) or the element K. The above result can also be interpreted as a lower bound on the regularity ratio of the elements \(K({\mathbf {p}}^\star ,{\mathrm{Id}}) = K({\mathbf {p}}^\star )\) for the (in some sense) shape-optimal value \({\mathbf {p}}^\star \).

The rest of this section is devoted to the proof of Theorem 3.2, which consists of three main steps. First, we prove the existence of the maximizer \({\mathbf {p}}^\star \); then we identify the particular form of the largest possible diameter, which corresponds to the “most deformed” simplex in \(\mathcal {T}_d({\mathbf {p}}^\star ,{\varvec{\pi }}^\star )\); and we conclude the proof by determining the values of the components of \({\mathbf {p}}^\star \) through constrained optimization.

We recall that we have three equivalent formulations of the optimization problem: (3.7), (3.10) and (3.13).

Lemma 3.4

(Existence of the maximizer) Let \(d\ge 2\). Then there exists a one-dimensional vector half-space

$$\begin{aligned} {P}^\star = \big \{{\mathbf {p}}^\star _\kappa \in {\mathbb {R}}^d_+\,|\, {\mathbf {p}}^\star _\kappa = \kappa {\mathbf {p}}^\star , \,\kappa > 0 \big \}, \end{aligned}$$
(3.14)

of optimal parameters that realize

$$\begin{aligned} \sup _{{\mathbf {p}}\in {\mathbb {R}}^d_+} \min _{\mathbf {w} \in W_d} \frac{\prod _{i=1}^d p_i}{\bigl (\sum _{i=1}^d w_i^2 p_i^2\bigr )^{{d}/{2}}} . \end{aligned}$$
(3.15)

Proof

By the above discussion, (3.13) is equivalent to (3.15). We observe that the ratio in (3.15) is 0-homogeneous in \({\mathbf {p}}\), thus without loss of generality we fix \(p_1=1\). We keep denoting the parametric vector by \({\mathbf {p}}\in {\mathbb {R}}^{d}_+\), keeping in mind that, due to its first component being fixed, \({\mathbf {p}}\) may be considered as \((p_2,\dots , p_d) \in {\mathbb {R}}^{d-1}_+\). Defining

$$\begin{aligned} F({\mathbf {p}}):= \min _{\mathbf{w }\in W_d} \frac{\prod _{i=2}^d p_i}{\bigl (\sum _{i=1}^d w_i^2 p_i^2\bigr )^{{d}/{2}}}, \end{aligned}$$

we can rewrite (3.15) as \(\sup _{\mathbf {p} \in {\mathbb {R}}^{d-1}_+} F({\mathbf {p}})\), and for \(F({\mathbf {p}})\) we observe that

$$\begin{aligned} \lim _{p_j \rightarrow 0^+} F({\mathbf {p}}) = 0, \qquad \lim _{p_j \rightarrow \infty } F({\mathbf {p}}) = 0, \end{aligned}$$

for any \(j \in \{2, \dots , d\}\). Moreover, \(F \in C({\mathbb {R}}^{d-1}_+)\) and \(F>0\). Thus, we infer that for any sufficiently small \(\varepsilon > 0\) the set \(\Omega _\varepsilon := \{F({\mathbf {p}}) \ge \varepsilon \}\) is a non-empty, bounded and closed subset of \({\mathbb {R}}^{d-1}_+\), and due to the continuity of F, the maximum of F over \(\Omega _\varepsilon \) is attained and necessarily coincides with the maximum of F over \({\mathbb {R}}^{d-1}_+\). \(\square \)
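The existence result can be illustrated by a brute-force search. The following Python sketch of ours (for d = 3, on an arbitrarily chosen grid) maximizes F and recovers, up to grid resolution, the values \(p_2^\star \approx 0.577\) and \(p_3^\star \approx 0.408\) that Theorem 3.2 predicts, with the maximal ratio 1/12:

```python
import numpy as np

W3 = [(1, 1, 2), (0, 2, 2), (0, 0, 3)]      # W_3, see (3.8)

def F(p2, p3):
    """The objective of (3.15) with p_1 = 1."""
    p = np.array([1.0, p2, p3])
    return min(np.prod(p) / np.sqrt(np.sum((np.array(w) * p) ** 2)) ** 3
               for w in W3)

grid = np.linspace(0.05, 2.0, 300)
best = max((F(a, b), a, b) for a in grid for b in grid)
print(best)   # approximately (0.0833, 0.577, 0.408), i.e. 1/12 at (3.41)
```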

In the next step we show which element of \(W_d\) in (3.7), or equivalently which \(D_k\) in (3.10), realizes the maximal diameter.

Lemma 3.5

Let \({\mathbf {p}}^\star = (1,p_2^\star ,\dots , p_d^\star )\) be the maximizer of (3.10). Then it holds that

$$\begin{aligned} D({\mathbf {p}}^\star ) := \max _{k\in \{1,\dots , d\}} D_k({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ). \end{aligned}$$

Proof

We proceed via contradiction. Assume \(D_1({\mathbf {p}}^\star )< D_k({\mathbf {p}}^\star ) =D({\mathbf {p}}^\star )\) for some \(k \in \{2, \dots , d\}\). Then we define \({\mathbf {p}}' = (p_1',\dots , p_d')\) with

$$\begin{aligned} p_1' = 1, \qquad p_j' = p^\star _j \cdot \frac{1}{1+\delta }, \quad j \in \{2,\dots , d\}, \end{aligned}$$
(3.16)

where \(\delta >0\) is chosen small enough to ensure \(D_1({\mathbf {p}}') < D_k({\mathbf {p}}')= D({\mathbf {p}}')\). Then it holds that

$$\begin{aligned} D({\mathbf {p}}') = D_k({\mathbf {p}}') = D_k({\mathbf {p}}^\star )\, \frac{1}{1+\delta } = D({\mathbf {p}}^\star )\, \frac{1}{1+\delta }, \end{aligned}$$
(3.17)

as \(w_{k,j} = 0\) for \(j < k\); recall (3.9), the definition of \(D_k\). Substituting (3.16) and (3.17) into (3.7) yields

$$\begin{aligned} \frac{\prod _{i=1}^d p_i'}{D_k({\mathbf {p}}')^d} = \frac{\prod _{i=1}^d p_i^\star }{D_k({\mathbf {p}}^\star )^d}\cdot \frac{(1+\delta )^d}{(1+\delta )^{d-1}} = (1+\delta ) \, \frac{\prod _{i=1}^d p_i^\star }{D_k({\mathbf {p}}^\star )^d}, \end{aligned}$$

which contradicts the optimality of \({\mathbf {p}}^\star \). \(\square \)

By virtue of Lemma 3.5, the maximization problem (3.10), which is equivalent to (3.13), reduces to the optimization of a \(C^1\) function with inequality constraints,

$$\begin{aligned} \max \left\{ \left. \frac{\prod _{i=1}^d p_i}{D_1({\mathbf {p}})^d} \, \right| \, {\mathbf {p}}\in {\mathbb {R}}^d_{+},\, p_1 = 1,\, D_1({\mathbf {p}})^2 \ge D_j({\mathbf {p}})^2, \,\text{ for }~\text{ all }~ j \in \{2, \dots , d\} \right\} . \end{aligned}$$
(3.18)

To prove Theorem 3.2 it suffices to show that problem (3.18) has a unique solution, namely \({\mathbf {p}}^\star \) from (3.12). By virtue of Lemma 3.5 the optimization problem (3.18) is equivalent to (3.7), and further to the original problem (3.13); hence Lemma 3.4 guarantees that it has a solution.

The function

$$\begin{aligned} F_1({\mathbf {p}}) = F_1(p_2,\dots , p_d) = \frac{\prod _{i=2}^d p_i}{D_1(1,p_2, \dots , p_d)^d} \end{aligned}$$
(3.19)

is continuously differentiable in \({\mathbb {R}}^{d-1}_+\), hence its constrained maximizer \({\mathbf {p}}^\star \) satisfies the necessary Karush–Kuhn–Tucker conditions. They read as follows:

$$\begin{aligned}&\frac{\partial }{\partial p_j} F_1({\mathbf {p}}) = \sum _{i=2 }^{d} \mu _i \frac{\partial }{\partial p_j} \bigl ( D_i({\mathbf {p}})^2 - D_1({\mathbf {p}})^2\bigr ), \end{aligned}$$
(3.20)
$$\begin{aligned}&\mu _j\bigl (D_j({\mathbf {p}})^2 - D_1({\mathbf {p}})^2\bigr ) = 0, \end{aligned}$$
(3.21)
$$\begin{aligned}&\mu _j \ge 0, \qquad D_j({\mathbf {p}}) \le D_1({\mathbf {p}}), \end{aligned}$$
(3.22)

for \(j \in \{2, \dots , d\}\).

Let us focus on the right-hand side of (3.20). Recalling (3.11) with \(p_1= 1\), one can express

$$\begin{aligned} \frac{\partial }{\partial p_j} \big (D_i({\mathbf {p}})^2 - D_1({\mathbf {p}})^2\big ) = {\left\{ \begin{array}{ll} -2(j-1)^2 p_j \quad &{} \text{ for }~ j < i, \\ 2(2j-1)p_j &{} \text{ for }~ j=i, \\ 0 &{} \text{ for }~ j > i. \end{array}\right. } \end{aligned}$$
(3.23)

Then, by virtue of (3.19) and (3.8) with (3.9) and just derived (3.23), we can rewrite (3.20) as

$$\begin{aligned} \begin{aligned}&\frac{\prod _{i=2}^d p_i}{D_1({\mathbf {p}})^{2d}} \,\biggl ( \frac{1}{p_j}\, D_1({\mathbf {p}})^d - d (j-1)^2 D_1({\mathbf {p}})^{d-2}p_j \biggr ) \\&\quad - 2 \mu _j (2j-1)p_j + 2(j-1)^2p_j \sum _{i=j+1}^d \mu _i = 0, \qquad j \in \{2, \dots , d\}. \end{aligned} \end{aligned}$$
(3.24)

It is not obvious how to obtain a solution of (3.20)–(3.22), or of its equivalent (3.21), (3.22), (3.24), nor how to prove its uniqueness. In the end, we show that \(\mu _j = 0\) for \(j \in \{3, \dots , d\}\) and \(\mu _2 > 0\), which is enough to determine the solution uniquely. To get there, we proceed in three steps. We show that

  • there exists \(k \in \{2, \dots , d\}\) such that \(\mu _k > 0\),

  • this k is unique,

  • \(k=2\).

We introduce three lemmas, each corresponding to one of the items in the above list.

Lemma 3.6

(Existence of an active constraint) Let \(d \ge 2\) and let \({\mathbf {p}}^\star \) be the maximizer of (3.18). Then \({\mathbf {p}}^\star \) is a solution of (3.20)–(3.22) with \((\mu _2, \dots , \mu _d) \ne \mathbf {0}\), i.e. there exists \(k \in \{2,\dots ,d\}\) such that \(\mu _k > 0\).

Proof

We proceed via contradiction. Assume that \(\mu _j = 0\) for all \(j \in \{2, \dots , d\}\). In that case (3.24), which is a consequence of (3.20), implies

$$\begin{aligned} p_j^\star = \frac{D_1({\mathbf {p}}^\star )}{(j-1)\sqrt{d}}, \quad j \in \{2, \dots , d\}, \end{aligned}$$

which substituted into \(D_2({\mathbf {p}})^2\) yields

$$\begin{aligned} D_2({\mathbf {p}}^\star )^2 = \frac{4}{d}\, D_1({\mathbf {p}}^\star )^2 + \sum _{i=3}^d \frac{D_1({\mathbf {p}}^\star )^2}{d} = \frac{d+2}{d}\, D_1({\mathbf {p}}^\star )^2 > D_1({\mathbf {p}}^\star )^2, \end{aligned}$$

which contradicts (3.22). Thus, there is some \(k \in \{2, \dots , d\}\) for which \(\mu _k > 0\). \(\square \)
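The contradiction can also be checked numerically: with all multipliers zero and \(p_1 = 1\), the displayed formula solves to \(p_j^\star = 1/(j-1)\) with \(D_1({\mathbf {p}}^\star )^2 = d\), whence \(D_2({\mathbf {p}}^\star )^2 = d+2\). A short Python check of ours:

```python
from math import isclose

for d in range(2, 10):
    p = [1.0] + [1 / (j - 1) for j in range(2, d + 1)]
    D1sq = p[0] ** 2 + sum(((i - 1) * p[i - 1]) ** 2 for i in range(2, d + 1))
    D2sq = 4 * p[1] ** 2 + sum(((i - 1) * p[i - 1]) ** 2 for i in range(3, d + 1))
    assert isclose(D1sq, d) and isclose(D2sq, d + 2)   # so D_2 > D_1, as claimed
```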

For \(d=2\) Lemma 3.6 implies directly that \(k=2\). For \(d \ge 3\) we supply the following lemma.

Lemma 3.7

(One active constraint) Let \(d \ge 3\) and let \({\mathbf {p}}^\star \) be a maximizer in (3.18) which satisfies (3.20)–(3.22) with \(\mu _k > 0\) for some \(k \in \{3, \dots , d\}\). Then \(\mu _j = 0\) for all \(j \in \{2, \dots , k-1, k+1, \dots , d\}\) and \({\mathbf {p}}^\star = (1,p_2^\star , \dots , p_d^\star )\) fulfills

$$\begin{aligned} p_j^\star = {\left\{ \begin{array}{ll} \displaystyle \frac{D_k({\mathbf {p}}^\star )}{(j-1)\sqrt{dk}} \,\sqrt{\frac{2k-1}{k-1}} &{}\quad \text{ for }~ j \in \{2, \dots , k-1\}, \\ ~\\ \displaystyle \frac{D_k({\mathbf {p}}^\star )}{\sqrt{dk}} &{}\quad \text{ for }~ j = k, \\ ~\\ \displaystyle \frac{D_k({\mathbf {p}}^\star )}{(j-1)\sqrt{d}} &{}\quad \text{ for }~ j \in \{k+1, \dots , d\}. \end{array}\right. } \end{aligned}$$
(3.25)

Proof

Let us take the largest \(k \in \{3, \dots , d\}\) for which \(\mu _k > 0\). Then for \(j \in \{k+1, \dots , d\}\) we have \(\mu _j = 0\). This enables us to deduce directly from (3.24) that

$$\begin{aligned} p_j^\star = \frac{D_1({\mathbf {p}}^\star )}{(j-1)\sqrt{d}}, \quad j \in \{k+1, \dots , d\}. \end{aligned}$$
(3.26)

As \(D_1 = D_k\) (this follows from the assumption \(\mu _k>0\) and (3.21)), we can use (3.26) to compute \(p_k^\star \) from the definition (3.9) of \(D_k\). The computation

$$\begin{aligned} D_k({\mathbf {p}}^\star )^2 = k^2 (p_k^\star )^2 + \sum _{j = k+1}^d (j-1)^2(p_j^\star )^2 = k^2(p_k^\star )^2 + \frac{d-k}{d}\, D_k({\mathbf {p}}^\star )^2 \end{aligned}$$
(3.27)

yields

$$\begin{aligned} p_k^\star = \frac{D_k({\mathbf {p}}^\star )}{\sqrt{dk}}. \end{aligned}$$
(3.28)

Notice that (3.28) holds even if \(k = d\) and the summation in (3.27) is void.

Since \(D({{\mathbf {p}}}^\star )=D_1({{\mathbf {p}}}^\star ) = D_k({{\mathbf {p}}}^\star )\), the constrained maximization problem (3.18) is equivalent to the constrained optimization in which \(D_k\) is taken as the diameter, i.e.

$$\begin{aligned} \max \left\{ \left. \frac{\prod _{i=1}^d p_i}{D_k({\mathbf {p}})^d} \, \right| \, {\mathbf {p}}\in {\mathbb {R}}^d_{+},\, p_1 = 1,\, D_k({\mathbf {p}})^2 \ge D_j({\mathbf {p}})^2,\, \text{ for }~\text{ all }~ j \in \{1, \dots , d\} \right\} . \end{aligned}$$
(3.29)

Arguing as before, the maximizer in (3.29) exists and fulfills the following necessary Karush–Kuhn–Tucker conditions:

$$\begin{aligned} \frac{\partial }{\partial p_j} \,\frac{\prod _{i=2}^d p_i}{D_k({\mathbf {p}})^d} = \sum _{\begin{array}{c} i=1 \\ i \ne k \end{array}}^d \nu _i \frac{\partial }{\partial p_j}\, \big (D_i({\mathbf {p}})^2 - D_k({\mathbf {p}})^2\big ) \end{aligned}$$
(3.30)

for \(j \in \{2, \dots , d\}\) and

$$\begin{aligned}&\nu _i\big (D_i({\mathbf {p}})^2 - D_k({\mathbf {p}})^2\big ) = 0, \end{aligned}$$
(3.31)
$$\begin{aligned}&\nu _i \ge 0, \qquad D_i({\mathbf {p}}) \le D_k({\mathbf {p}}), \end{aligned}$$
(3.32)

for \(i \in \{1,\dots ,k-1, k+1, \dots , d\}\); moreover, we know that \(D_1({\mathbf {p}}) = D_k({\mathbf {p}})\). As we have already settled \(j \in \{k+1, \dots , d\}\), we need to focus on \(j \in \{2, \dots , k-1\}\) only.

We know that

$$\begin{aligned} \frac{\partial }{\partial p_j}\, \frac{\prod _{i=2}^d p_i }{D_k({\mathbf {p}})^d} = \frac{\prod _{i=2}^d p_i}{D_k({\mathbf {p}})^d}\,\frac{1}{p_j}, \qquad j \in \{2, \dots , k-1\}, \end{aligned}$$
(3.33)

and using (3.11) we compute the right-hand side of (3.30) for \(j \in \{2, \dots , k-1\}\) as

$$\begin{aligned} \frac{\partial }{\partial p_j} \,\big (D_i({\mathbf {p}})^2 - D_k({\mathbf {p}})^2\big ) = {\left\{ \begin{array}{ll} 2(j-1)^2p_j \quad &{}\text{ for }~ i < j, \\ 2j^2p_j &{} \text{ for }~ i = j, \\ 0 &{} \text{ for }~ i > j. \end{array}\right. } \end{aligned}$$
(3.34)

Collecting (3.33)–(3.34) together with \(\nu _i = 0\) for \(i > k\) (as \(D_i({\mathbf {p}}) < D_k({\mathbf {p}})\) by assumption), we can rewrite (3.30) in the form

$$\begin{aligned} \frac{\prod _{i=2}^d p_i}{D_k({\mathbf {p}})^d} \,\frac{1}{p_j} = 2\nu _j j^2 p_j + 2 (j-1)^2 p_j \sum _{i=1}^{j-1} \nu _i, \quad j \in \{2, \dots , k-1\}. \end{aligned}$$
(3.35)

Take any \(j \in \{2, \dots , k-1\}\); we have either \(\nu _j = 0\) or \(\nu _j > 0\).

First assume \(\nu _j = 0\). Then, from (3.35) we deduce

$$\begin{aligned} p_j^2 = p_{j,u}^2 = \frac{\prod _{i=2}^d p_i}{2 D_k({\mathbf {p}})^d} \,\frac{1}{(j-1)^2 \sum _{i=1}^{j-1} \nu _i}. \end{aligned}$$
(3.36)

If \(\nu _j > 0\), then

$$\begin{aligned} p_j^2 = p_{j,c}^2 = \frac{\prod _{i=2}^d p_i}{2 D_k({\mathbf {p}})^d}\, \frac{1}{j^2\nu _j+ (j-1)^2 \sum _{i=1}^{j-1} \nu _i}. \end{aligned}$$

We observe that \(p_{j,c} < p_{j,u}\), while \({\mathbf {p}}^\star \) is supposed to maximize \(\prod _{i=2}^d p_i \cdot (D_k({\mathbf {p}}))^{-d}\), where \(D_k({\mathbf {p}})\) is independent of \(p_j\) for \(j \in \{1,\dots , k-1\}\). Thus \(p_j^\star \) needs to maximize only \(\prod _{i=2}^d p_i\), i.e. only its own value. Since \(p_{j,u} > p_{j,c}\), we choose the unconstrained version \(p^\star _j = p_{j,u}\) from (3.36), i.e. \(\nu _j = 0\) for every \(j \in \{2, \dots , k-1\}\). This enables us to rewrite (3.36) as

$$\begin{aligned} p_j^2 = p_{j,u}^2 = \frac{\prod _{i=2}^d p_i}{2 \nu _1 D_k({\mathbf {p}})^d}\, \frac{1}{(j-1)^2}. \end{aligned}$$
(3.37)

Computing (3.30) also for \(j = k\), one gets

$$\begin{aligned} \frac{1}{D_k({\mathbf {p}})^{2d}} \left( \frac{\prod _{i=2}^d p_i\, D_k({\mathbf {p}})^d}{p_k} - d\, k^2 p_k \prod _{i=2}^d p_i\, D_k({\mathbf {p}})^{d-2} \right) = 2\nu _1 (1-2k)\, p_k, \end{aligned}$$
(3.38)

and after substituting \(p_k^\star \) from (3.28) into (3.38) we can express \(\nu _1\) as

$$\begin{aligned} \nu _1 = \frac{dk \prod _{i=2}^d p_i}{2 D_k({\mathbf {p}})^{d+2}}\, \frac{k-1}{2k-1}. \end{aligned}$$
(3.39)

Collecting (3.26) and (3.28), and substituting (3.39) into (3.37), we get (3.25), which concludes the proof. \(\square \)

Lemmas 3.6 and 3.7 give rise to the following corollary.

Corollary 3.8

Let \(d \ge 2\) and let \({\mathbf {p}}^\star \) be a maximizer in (3.18). Then there exists a unique \(k \in \{2, \dots , d\}\) such that \(D_k({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ) = D({\mathbf {p}}^\star )\) and (3.25) holds.

Proof

Lemma 3.6 together with (3.21) gives the existence of \(k \in \{2, \dots , d\}\) such that \(D_k({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ) = D({\mathbf {p}}^\star )\). For \(d=2\) we directly get \(k=2\). For \(k \ge 3\), Lemma 3.7 gives the uniqueness of such k and also (3.25). Using the procedure from the beginning of the proof of Lemma 3.7, one recovers (3.25) also for \(k=2\). \(\square \)

Finally, we show that the k from the previous corollary equals 2, which enables us to determine also the values of \(p_i^\star \).

Lemma 3.9

Let \(d \ge 2\) and \({\mathbf {p}}^\star \) be a maximizer in (3.18). Then it holds that

$$\begin{aligned} D({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ) = D_2({\mathbf {p}}^\star ), \end{aligned}$$
(3.40)

and

$$\begin{aligned} p_2^\star = \sqrt{\frac{1}{3}}, \qquad p_j^\star =\sqrt{\frac{2}{3}} \frac{1}{j-1}, \quad j \in \{3, \dots , d\}. \end{aligned}$$
(3.41)

Proof

Let \(d=2\). Then Lemma 3.6 implies (3.40), which can be written explicitly as \(1 +(p_2^\star )^2 = 4(p_2^\star )^2\). Thus, we infer \(p_2^\star = 3^{-1/2}\).

Let further \(d\ge 3\). Then Corollary 3.8 gives the existence of a unique \(k \in \{2,\dots , d\}\) for which \(D({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ) = D_k({\mathbf {p}}^\star )\), together with the relation (3.25) for \({\mathbf {p}}^\star \).

We prove \(k=2\) via contradiction. Let us assume that \(k \ge 3\). Then, \(D({\mathbf {p}}^\star ) = D_1({\mathbf {p}}^\star ) = D_k({\mathbf {p}}^\star ) > D_2({\mathbf {p}}^\star )\). Writing out \(D_2({\mathbf {p}}^\star )\) explicitly using (3.25), we get

$$\begin{aligned} D^2 > D_2^2 = \frac{D^2}{d} \,\biggl (\frac{4(2k-1)}{k(k-1)} + \frac{(k-3)(2k-1)}{k(k-1)} + \frac{(k-1)^2}{k} + (d-k) \biggr ), \end{aligned}$$
(3.42)

where we skipped the argument \({\mathbf {p}}^\star \) for the sake of brevity. Direct computation simplifies inequality (3.42) into

$$\begin{aligned} \frac{2(2k-1)}{k(k-1)} < 0, \end{aligned}$$

which is false for every \(k \ge 3\), a contradiction. Thus \(k=2\), and from (3.25) we get

$$\begin{aligned} p_2^\star = \frac{D({\mathbf {p}}^\star )}{\sqrt{2d}}, \qquad p_j^\star = \frac{D({\mathbf {p}}^\star )}{(j-1)\sqrt{d}}, \quad j \in \{3,\dots , d\}, \end{aligned}$$
(3.43)

which we substitute into \(D_1({\mathbf {p}})^2\) to get

$$\begin{aligned} D({\mathbf {p}}^\star )^2 = D_1({\mathbf {p}}^\star )^2 = 1 + \frac{D({\mathbf {p}}^\star )^2}{2d} + (d-2)\,\frac{D({\mathbf {p}}^\star )^2}{d}. \end{aligned}$$
(3.44)

From (3.44) we deduce \(D({\mathbf {p}}^\star )^2 = \frac{2}{3}d\), which, substituted into (3.43), yields (3.41). \(\square \)
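A numerical cross-check of ours: for \({\mathbf {p}}^\star \) from (3.41), the constraint \(D_1 = D_2\) is active, \(D_1\) realizes the diameter, and \(D({\mathbf {p}}^\star )^2 = 2d/3\).

```python
import numpy as np

def D(j, p):                                # as in the sketch following (3.11)
    d = len(p)
    w = [0] * (j - 1) + [j] + [i - 1 for i in range(j + 1, d + 1)]
    return np.sqrt(sum((wi * pi) ** 2 for wi, pi in zip(w, p)))

for d in range(2, 9):
    p = [1.0, 1 / np.sqrt(3)] + [np.sqrt(2 / 3) / (j - 1) for j in range(3, d + 1)]
    Ds = [D(j, p) for j in range(1, d + 1)]
    assert np.isclose(Ds[0], Ds[1])         # the active constraint D_1 = D_2
    assert np.isclose(max(Ds), Ds[0])       # D_1 realizes the diameter D(p*)
    assert np.isclose(Ds[0] ** 2, 2 * d / 3)
```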

Proof of Theorem 3.2

The optimization problem (3.13) can be equivalently rewritten as (3.7) and also as (3.10). Lemma 3.4 yields the existence of the half-space of maximizers of (3.7). Then, factoring the problem by fixing \(p_1 = 1\), Lemma 3.5 reduces (3.10) to the constrained optimization problem (3.18). This problem is shown to have exactly one active constraint (Lemmas 3.6, 3.7 and Corollary 3.8). Further, the active constraint is identified and the maximizer of (3.18) is determined in Lemma 3.9. The equivalence of the optimization problems concludes the proof. \(\square \)

4 Concluding Remarks

We conclude with four remarks on various topics.

4.1 Optimization at Each Step

Notice that the optimal values of the parameters (3.41) are independent of the dimension d. This can be interpreted as follows: the most regular partition of d-dimensional space is constructed above the most regular partition of \((d-1)\)-dimensional space. As a consequence, the shape optimization we performed is equivalent to a shape optimization at every dimension, which gives a sequence of one-dimensional optimization problems that is technically much less demanding.
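This interpretation can be tested numerically. The following Python sketch of ours freezes \(p_1, \dots , p_{d-1}\) at their optimal values and performs a one-dimensional grid search over \(p_d\) alone; up to grid resolution it recovers \(p_d^\star = \sqrt{2/3}/(d-1)\) from (3.41):

```python
import numpy as np

def worst_ratio(p):
    """The inner minimum of (3.10)."""
    d = len(p)
    best = np.inf
    for k in range(1, d + 1):
        w = [0] * (k - 1) + [k] + [i - 1 for i in range(k + 1, d + 1)]
        Dk = np.sqrt(sum((wi * pi) ** 2 for wi, pi in zip(w, p)))
        best = min(best, np.prod(p) / Dk ** d)
    return best

p_opt = [1.0, 1 / np.sqrt(3)]
for d in range(3, 7):
    grid = np.linspace(0.01, 1.0, 2000)
    j_best = int(np.argmax([worst_ratio(p_opt + [t]) for t in grid]))
    assert abs(grid[j_best] - np.sqrt(2 / 3) / (d - 1)) < 1e-2
    p_opt.append(np.sqrt(2 / 3) / (d - 1))  # freeze the optimum and go one dimension up
```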

4.2 Integer Sequence for OEIS

One can easily see that for suitable \(\kappa \) it is possible to express the squares of the components of \({\mathbf {p}}^\star _\kappa \) from (3.12) as fractions with unit numerator and integer denominator. The largest such \(\kappa \), yielding the smallest possible integers in those fractions, is \(\kappa = 2^{-1/2}\). For this value, the denominators read 2, 6, 12, 27, 48, 75, 108, 147, 192, 243, \(300, \dots \), with the j-th term given by \(a_j = 3(j-1)^2\) for \(j \ge 3\). Upon the suggestion of the author, this sequence has been indexed in Sloane’s database of integer sequences [23] as sequence A289443.
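A minimal script of ours reproducing the denominators:

```python
# Denominators of (kappa * p_j*)^2 for kappa = 2**(-1/2): 1/2, 1/6, 1/12, 1/27, ...
def denominator(j):
    if j == 1:
        return 2                  # (kappa * 1)^2               = 1/2
    if j == 2:
        return 6                  # (kappa / sqrt(3))^2         = 1/6
    return 3 * (j - 1) ** 2       # (kappa * sqrt(2/3)/(j-1))^2 = 1/(3(j-1)^2)

print([denominator(j) for j in range(1, 12)])
# [2, 6, 12, 27, 48, 75, 108, 147, 192, 243, 300]
```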

4.3 Shape Optimality of the Partition

It is not obvious whether there exists a better simplicial tiling that cannot be constructed by our method. However, in 2D there is no triangle with a better ratio \(\vartheta \) than the equilateral one. Similarly, in 3D, our method gives the standard Sommerville tetrahedron (see [13, Fig. 2]), which, according to Naylor [21], is the best one among space-filling tetrahedra with respect to the regularity ratio \(\vartheta \).

Moreover, we have computed that the regularity ratio of the worst element in \(\mathcal {T}_d({\mathbf {p}}^\star )\) is greater than or equal to that of \(\mathcal {T}_d({\mathbf {p}}^\star , {\varvec{\pi }}^\star )\), which is, see (3.12),

$$\begin{aligned} \frac{\prod _{i=1}^d p_i^\star }{D_1({\mathbf {p}}^\star )^d} = \frac{\frac{\sqrt{3}}{2}\left( \frac{2}{3}\right) ^{{d}/{2}} \frac{1}{(d-1)!} }{\left( \frac{2d}{3}\right) ^{{d}/{2}}} = \frac{\sqrt{3}}{2}\, d \cdot \frac{1}{d^{{d}/{2}} d!}, \end{aligned}$$
(4.1)

while for Kuhn’s partition (1.1) we have

$$\begin{aligned} \vartheta (S_\pi ) = \bigl (d^{{d}/{2}} d!\bigr )^{-1}. \end{aligned}$$
(4.2)

From this we conclude that the elements of \(\mathcal {T}_d({\mathbf {p}}^\star )\) are at least \(\frac{\sqrt{3}}{2} d\) times more regular than those of Kuhn’s partition.
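The identity (4.1) can be verified numerically; the following Python sketch of ours evaluates the left-hand side directly from the optimal parameters (3.41):

```python
from math import factorial, isclose, prod, sqrt

for d in range(2, 9):
    p = [1.0, 1 / sqrt(3)] + [sqrt(2 / 3) / (j - 1) for j in range(3, d + 1)]
    # D_1(p*)^2 = p_1^2 + sum_{i>=2} (i-1)^2 p_i^2, cf. (3.11)
    D1 = sqrt(p[0] ** 2 + sum(((i - 1) * p[i - 1]) ** 2 for i in range(2, d + 1)))
    assert isclose(prod(p) / D1 ** d,
                   sqrt(3) / 2 * d / (d ** (d / 2) * factorial(d)))
```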

4.4 Non-Euclidean Geometries

We devote the last remark to the fact that the construction is independent of the underlying geometry and thus might be used also for computations in non-Euclidean spaces. However, we cannot apply the optimization result directly, as it relies on the equivolumetricity property, which is based on translation invariance and therefore does not hold in non-Euclidean geometries. More on tessellations of hyperbolic spaces can be found in the works of Coxeter [5, 6] and Margenstern [18,19,20]. As Margenstern points out, these works might find a use in computational problems of the theory of relativity or in cosmological research, but such results had not been published before 2003 and, to the best of the author’s knowledge, not since then either.