1 Introduction

Latin hypercube designs (LHDs) (McKay et al., 1979) have become a popular and often preferred choice for designing experiments, especially computer experiments, due to their uniform coverage of each individual factor (Fang et al., 2006; Santner et al., 2003). A LHD is an \(n\times s\) matrix, denoted as LHD(n,s), whose columns are permutations of \(\left\{ -\frac{n-1}{2},-\frac{n-3}{2},...,\frac{n-3}{2},\frac{n-1}{2}\right\} .\) From the perspective of experimental design, the columns are called factors, the rows are called runs and the values are called levels. If each level appears exactly \(\frac{n}{q}\) times, the design is called a U-type design with n runs, s factors and q levels, denoted as \(U(n,q^s).\) Developing new efficient techniques for constructing optimal U-type designs from various optimality perspectives has recently been an active research topic (cf. Elsawah and Fang, 2020; Elsawah, 2022a). A LHD(n,s) is a U-type design with \(q=n\). Since a LHD scatters the representative points of each individual factor in an intelligent manner to cover the experimental region well, it has good projection properties in any single dimension. However, there is no guarantee that a randomly generated full-dimensional LHD (i.e., all the s factors considered together) has good space-filling properties. From a statistical perspective, LHDs guarantee uniform samples from the marginal distribution of each single input; however, there is no guarantee that they will have good multivariate properties.
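To illustrate the definition, a randomly generated LHD(n,s) on the centered levels can be sketched as follows (a minimal Python/NumPy sketch; the function name `random_lhd` is ours, not from the cited literature):

```python
import numpy as np

def random_lhd(n, s, seed=None):
    """Random LHD(n, s): each column is an independent random
    permutation of the centered levels -(n-1)/2, ..., (n-1)/2."""
    rng = np.random.default_rng(seed)
    levels = np.arange(n) - (n - 1) / 2
    return np.column_stack([rng.permutation(levels) for _ in range(s)])
```

Each column covers every level exactly once, so the one-dimensional projections are uniform (the \(q=n\) U-type property), but nothing constrains the joint, full-dimensional spread of the rows.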

A maximin distance design (MDD) (Johnson et al., 1990) maximizes the distances between the experimental points by maximizing the minimum \(L_2\)-distance (\(ML_2D\)) so that no two points are too close. For any LHD(n,s) with n rows \(r_i=(r_{i1},...,r_{is}),~1\le i\le n,\) its \(ML_2D\) is defined as follows

$$\begin{aligned} ML_2D(LHD(n,s))=\min \left\{ L_2D_{ij}=\sum _{k=1}^s|r_{ik}-r_{jk}|^2:~r_{i}\ne r_{j},~r_{i}, r_{j}\in LHD(n,s)\right\} . \end{aligned}$$

A MDD attains the maximum \(ML_2D\) value among all designs of the same size over the experimental domain. Therefore, a MDD achieves the space-filling property in the full-dimensional space that cannot be guaranteed by a randomly generated LHD. Moreover, a MDD is asymptotically D-optimal under the Gaussian process model (Johnson et al., 1990). The construction of maximin distance LHDs (MDLHDs) has received significant attention in the past few years and many techniques have been presented; interested readers can refer to Tang (1994); Van Dam et al. (2007); Zhou and Xu (2015); Joseph et al. (2015); Lin and Kang (2016); Wang et al. (2018). It is worth mentioning that the level-permutation technique can be employed to enhance the space-filling behavior of a constructed design (cf. Elsawah, 2022b). For first-order polynomial models, orthogonal LHDs (OLHDs) are useful because they ensure that the estimates of the linear effects are uncorrelated. However, a space-filling LHD in terms of distance (i.e., a MDLHD) may not be an OLHD and vice versa (Joseph & Hung, 2008). This fact is investigated in Sect. 5.
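The \(ML_2D\) of a given design can be computed directly from the definition above (a minimal NumPy sketch; the name `ml2d` is ours):

```python
import numpy as np

def ml2d(D):
    """Minimum squared L2-distance over all pairs of distinct rows of D,
    i.e., min over i < j of sum_k |r_ik - r_jk|^2."""
    D = np.asarray(D, dtype=float)
    # All pairwise differences via broadcasting, then squared distances.
    diff = D[:, None, :] - D[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    i, j = np.triu_indices(len(D), k=1)   # pairs with i < j only
    return d2[i, j].min()
```

A MDD is then any design of the given size whose `ml2d` value is maximal.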

Orthogonality is a significant criterion for evaluating designs, since it allows more effects to be estimated without confounding. A design is called orthogonal if the correlation coefficient between any two distinct columns of the design is zero. For any LHD(n,s) with s columns \(c_j=(c_{1j},...,c_{nj}),~1\le j\le s,\) its orthogonality can be measured as follows

$$\begin{aligned} \rho _{max}(LHD(n,s))=\max \left\{ \rho _{ij}(LHD(n,s))=\rho (c_i,c_j):~c_i\ne c_j,~c_i, c_j\in LHD(n,s)\right\} , \end{aligned}$$

where

$$\begin{aligned} \rho (c_i,c_j)=\left| \frac{\sum _{k=1}^n(c_{ki}-\overline{c_i})(c_{kj}- {\overline{c_j}})}{\sqrt{\sum _{k=1}^n(c_{ki}-\overline{c_i})^2\sum _{k=1}^n(c_{kj} -\overline{c_j})^2}}\right| ~\text{ and }~\overline{c_i}=\sum _{k=1}^{n}\frac{c_{ki}}{n}. \end{aligned}$$

A design is called an orthogonal design (OD) if \(\rho _{max}=0.\) If \(\rho _{max}\ne 0\) but is very close to zero, the design is called a nearly OD. The construction of OLHDs and ODs has received significant attention in the past few years and many techniques have been given; interested readers can refer to Ye (1998); Steinberg and Lin (2006); Cioppa and Lucas (2007); Bingham et al. (2009); Pang et al. (2009); Georgiou (2009); Lin et al. (2009, 2010); Sun et al. (2009, 2010, 2011); Yang and Liu (2012); Georgiou and Efthimiou (2014); Elsawah et al. (2021); Wang et al. (2022); Weng et al. (2023); Ke et al. (2023).
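The measure \(\rho_{max}\) above can be computed as follows (a minimal NumPy sketch; the name `rho_max` is ours):

```python
import numpy as np

def rho_max(D):
    """Maximum absolute pairwise correlation among the columns of D."""
    R = np.corrcoef(np.asarray(D, dtype=float), rowvar=False)
    np.fill_diagonal(R, 0.0)   # ignore the trivial rho_ii = 1
    return float(np.max(np.abs(R)))
```

A design is an OD when this value is exactly zero and a nearly OD when it is close to zero.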

Recently, Pang et al. (2022) presented a new non-iterative technique for constructing asymptotically MDLHDs with \(2^{\alpha +1}\) runs and \(3\times 2^{\alpha -1}\) factors, \(LHD(2^{\alpha +1},3\times 2^{\alpha -1}),~\alpha \ge 2.\) This paper improves the technique in Pang et al. (2022) and gives a new non-iterative deterministic algorithm for constructing nearly orthogonal MDLHDs (OMDLHDs) with \(2^{\alpha +1}\) runs and \(3\times 2^{\alpha -1}\) factors. The newly constructed LHDs are better than the existing LHDs that can be generated via the non-iterative algorithm in Pang et al. (2022) and the iterative algorithm using the SA2008 function in the R package LHD (cf. Joseph and Hung, 2008). The newly constructed LHDs are OMDLHDs for large numbers of runs and factors. The numerical results show that for \(\alpha \ge 10,\) the newly constructed LHDs are optimal in view of orthogonality and \(ML_2D\), i.e., they are OMDLHDs. Theoretical and numerical justifications for the optimality behavior of the newly constructed LHDs are given. For further improvement of the performance of the newly constructed LHDs, an iterative algorithm is given based on a mixture of orthogonality and \(ML_2D\) criteria.

The rest of this paper is organized as follows. Section 2 gives the new non-iterative deterministic construction algorithm for asymptotically OMDLHDs. Sections 3 and 4 investigate the behavior of the newly constructed LHDs in view of the orthogonality and the \(L_2\)-distance, respectively. An iterative algorithm for improving the performance of the newly constructed LHDs is discussed in Sect. 5. We close with the conclusion and future work in Sect. 6. For clarity and due to space limitations, we relegate all the figures (cf. Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17) and all the proofs to the Appendix.

2 A non-iterative algorithm for asymptotically OMDLHDs

Sun et al. (2009) presented a construction technique for OLHDs with \(2^{\alpha +1}\) runs and \(2^\alpha\) factors, \(LHD(2^{\alpha +1},2^\alpha ),~\alpha \ge 2,\) with a fold-over structure. Readers who are interested in fold-over techniques for symmetric and asymmetric designs can refer to Elsawah and Qin (2017) and Elsawah (2017), respectively, and the references therein. Recently, Wang et al. (2015) gave two techniques (cf. Algorithms 1 and 2 in Wang et al., 2015) for adding more columns to any LHD with a fold-over structure. This paper exploits the combination of the techniques in Sun et al. (2009) and Wang et al. (2015) to construct asymptotically OMDLHDs with \(2^{\alpha +1}\) runs and \(3\times 2^{\alpha -1}\) factors, \(LHD(2^{\alpha +1},3\times 2^{\alpha -1}),~\alpha \ge 2\), called \({{\L }}_{\alpha} ^{new}\): OLHDs are first constructed via the technique in Sun et al. (2009), and more columns are then added following Algorithm 2 in Wang et al. (2015), as follows.

Algorithm 1: Constructing asymptotically OMDLHDs

  • Step 1: For an integer \(\alpha \ge 2,\) generate the following two \(2^{\alpha } \times 2^\alpha\) matrices

    $$\begin{aligned} {\mathcal {A}}_{\alpha} =\begin{pmatrix}{\mathcal {A}}_{\alpha -1} &{} -{\mathcal {A}}^\star _{\alpha -1}\\ {\mathcal {A}}_{\alpha -1} &{} {\mathcal {A}}^\star _{\alpha -1}\end{pmatrix}~ \text{ and } ~{\mathcal {B}}_{\alpha} =\begin{pmatrix}{\mathcal {B}}_{\alpha -1} &{} -{\mathcal {B}}^\star _{\alpha -1}-2^{\alpha -1}{\mathcal {A}}^\star _{\alpha -1}\\ {\mathcal {B}}_{\alpha -1}+2^{\alpha -1}{\mathcal {A}}_{\alpha -1} &{} {\mathcal {B}}^\star _{\alpha -1}\end{pmatrix}, \end{aligned}$$

    where \({\mathcal {A}}_1=\begin{pmatrix}1 &{} 1\\ 1 &{} -1\end{pmatrix}, ~{\mathcal {B}}_1=\begin{pmatrix}1 &{} 2\\ 2 &{} -1\end{pmatrix}\) and \({\mathcal {M}}^ \star =\begin{pmatrix} -{\mathcal {M}}_1\\ {\mathcal {M}}_2\end{pmatrix}\) for any \(n\times m\) matrix \({\mathcal {M}}=\begin{pmatrix}{\mathcal {M}}_1\\ {\mathcal {M}}_2\end{pmatrix}\) with even n, where \({\mathcal {M}}_1\) and \({\mathcal {M}}_2\) are the top and bottom halves, respectively.

  • Step 2: Generate a \(2^{\alpha +1} \times 2^\alpha\) matrix with a fold-over structure by combining the matrices \({\mathcal {A}}_{\alpha} \) and \({\mathcal {B}}_{\alpha} \) as follows

    $$\begin{aligned} {\mathcal {C}}_{\alpha} =\begin{pmatrix}{\mathcal {B}}_{\alpha} -\frac{1}{2}{\mathcal {A}}_{\alpha} \\ \frac{1}{2}{\mathcal {A}}_{\alpha} -{\mathcal {B}}_{\alpha} \end{pmatrix}. \end{aligned}$$
  • Step 3: Generate the following \(2^{\alpha +1} \times 2^{\alpha -1}\) matrix

    $$\begin{aligned} {\mathcal {H}}^{new}_{\alpha} =\begin{pmatrix}{{\mathcal {E}}}_{\alpha -1} \\ {\mathcal {F}}_{\alpha -1}\end{pmatrix}, \end{aligned}$$

    where \({{\mathcal {E}}}_{\alpha -1}=({{\mathcal {E}}}_{ij}),\) \({\mathcal {F}}_{\alpha -1}=({\mathcal {F}}_{ij}),\) \({\mathcal {C}}_{\alpha -1}=({\mathcal {C}}_{ij}),\) \({\mathcal {S}}_{{\mathcal {C}}_{\alpha -1}}=({\mathcal {S}}_{ij}),\) \({\mathcal {S}}_{ij}={\left\{ \begin{array}{ll}+1,~\text{ if } ~{\mathcal {C}}_{ij}\ge 0;\\ -1,~\text{ if } ~{\mathcal {C}}_{ij}< 0 \end{array}\right. }\) and

    $$\begin{aligned} {\left\{ \begin{array}{ll}{{\mathcal {E}}}_{ij}={\mathcal {S}}_{ij}(2|{\mathcal {C}}_{ij}|-\frac{1}{2}),~{\mathcal {F}}_{ij}={\mathcal {S}}_{ij}(2|{\mathcal {C}}_{ij}|+\frac{1}{2}),~\text{ for } ~ 1 \le i \le 2^{\alpha -1};\\ {{\mathcal {E}}}_{ij}={\mathcal {S}}_{ij}(2|{\mathcal {C}}_{ij}|+\frac{1}{2}),~{\mathcal {F}}_{ij}= {\mathcal {S}}_{ij}(2|{\mathcal {C}}_{ij}|-\frac{1}{2}),~\text{ for } ~ 2^{\alpha -1}+ 1 \le i \le 2^{\alpha }. \end{array}\right. } \end{aligned}$$
  • Step 4: The newly constructed \(LHD(2^{\alpha +1},3\times 2^{\alpha -1})\) is given by column-combining the two matrices \({\mathcal {C}}_{\alpha} \) and \({\mathcal {H}}^{new}_{\alpha} \) as follows \({{\L }}_{\alpha} ^{new}=\left( {\mathcal {C}}_{\alpha} ~~{\mathcal {H}}^{new}_{\alpha} \right) .\)

Remark 1

Pang et al. (2022) constructed \(LHD(2^{\alpha +1},3\times 2^{\alpha -1}),~\alpha \ge 2,\) called \({{\L }}_{\alpha} ^{P22},\) by constructing OLHDs via the technique in Sun et al. (2009) and then following Algorithm 1 in Wang et al. (2015) as follows

$$\begin{aligned} {{\L }}_{\alpha} ^{P22}=\left( {\mathcal {C}}_{\alpha} ~~{\mathcal {H}}^{P22}_{\alpha} \right) ,~{\mathcal {H}}^{P22}_{\alpha} = \left(\begin{matrix}2{\mathcal {C}}_{\alpha -1}-\frac{1}{2}{} {\textbf {1}}_{{\mathcal {C}}_{\alpha -1}} \\ 2{\mathcal {C}}_{\alpha -1}+\frac{1}{2}{} {\textbf {1}}_{{\mathcal {C}}_{\alpha -1}}\end{matrix}\right), \end{aligned}$$

where \({\mathcal {C}}_{\alpha} \) is generated as mentioned above and \({\textbf {1}}_{{\mathcal {C}}_{\alpha -1}}\) is an all-ones matrix with the same size as \({{\mathcal {C}}_{\alpha -1}}.\)
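The column block \({\mathcal {H}}^{P22}_{\alpha}\) of Remark 1 is a direct translation of the displayed formula (a sketch assuming \({\mathcal {C}}_{\alpha -1}\) is supplied, e.g., from an implementation of Steps 1-2 of Algorithm 1; the name `h_p22` is ours):

```python
import numpy as np

def h_p22(C_prev):
    """H_alpha^P22 = [ 2*C_{alpha-1} - 1/2 ; 2*C_{alpha-1} + 1/2 ]."""
    C_prev = np.asarray(C_prev, dtype=float)
    return np.vstack([2 * C_prev - 0.5, 2 * C_prev + 0.5])
```

When \({\mathcal {C}}_{\alpha -1}\) is orthogonal, the shift by \(\pm \frac{1}{2}\) leaves a small residual column correlation, which is the source of the \(\frac{3}{4^{\alpha +1}-1}\) value in Theorem 1.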

An illustrative example. To understand the mechanism of the above-mentioned algorithms, Table 1 gives the \({\mathcal {A}}_{\alpha} ,~{\mathcal {B}}_{\alpha} ,~{\mathcal {C}}_{\alpha} ,~{\mathcal {H}}_{\alpha} ^{P22},~{{\L }}_{\alpha} ^{P22},~{\mathcal {H}}_{\alpha} ^{new}\) and \({{\L }}_{\alpha} ^{new}\) for \(\alpha = 2.\) From Table 1, we get: (i) \(ML_2D({{\L }}^{new}_2)=55>ML_2D({{\L }}^{P22}_2)=52\), \(ML_2D({\mathcal {H}}^{new}_2)=ML_2D({\mathcal {H}}^{P22}_2)=2\) and \(2^{nd}ML_2D({\mathcal {H}}^{new}_2)=13>2^{nd}ML_2D({\mathcal {H}}^{P22}_2)=10,\) where \(2^{nd}ML_2D\) is the second minimum \(L_2\)-distance. (ii) \(\rho _{max}({{\L }}^{new}_2)=\rho _{max}({{\L }}^{P22}_2)=0.1904\), \(\rho _{max}({\mathcal {H}}^{new}_2)=0<\rho _{max}({\mathcal {H}}^{P22}_2)=0.047\) and \(\rho _{mean}({{\L }}^{new}_2)=0.037<\rho _{mean}({{\L }}^{P22}_2)=0.039,\) where \(\rho _{mean}\) is the mean of all the absolute correlation values among the factors. That is, the newly constructed LHD \({{\L }}^{new}_2\) is better than the existing LHD \({{\L }}^{P22}_2\) in terms of the \(L_2\)-distances among their runs and the correlations among their factors. The forthcoming discussions provide more numerical and theoretical investigations to support this interesting finding.

Lemma 1

For any j-th column \(L_j=(L_{1,j},...,L_{2^{\alpha +1},j})^T,~\alpha \ge 2,~1\le j\le 3\times 2^{\alpha -1}\) of the newly constructed LHD \({{\L }}^{new}_{\alpha}\), we get the following interesting behavior: (a) \(L_j\) is a permutation of \(\left\{ -\frac{2^{\alpha +1}-1}{2},-\frac{2^{\alpha +1}-3}{2},..., \frac{2^{\alpha +1}-3}{2},\frac{2^{\alpha +1}-1}{2}\right\}\) for any \(\alpha \ge 2\) and \(1\le j\le 3\times 2^{\alpha -1}.\) (b) \(|L_{i,j}+L_{2^{\alpha }+i,j}| = 0\) for any \(1\le i\le 2^{\alpha }\) and \(1\le j\le 2^{\alpha }.\) (c) \(|L_{i,j}-L_{2^{\alpha }+i,j}| = 1\) for any \(1\le i\le 2^{\alpha }\) and \(2^{\alpha }+1\le j\le 3\times 2^{\alpha -1}.\) (d) \(|L_{i,j}-L_{k,j}| \ge 1\) for any \(1\le i\ne k\le 2^{\alpha }\) and \(1\le j\le 3\times 2^{\alpha -1}.\)

Table 1 The newly constructed LHDs and the existing LHDs for \(\alpha =2\)

3 Investigating the orthogonality of the newly constructed LHDs

This section investigates the correlations among the factors of the newly constructed LHDs. A comparison study between the existing LHDs by Pang et al. (2022) (cf. Remark 1) and the newly constructed LHDs via the newly proposed Algorithm 1 is given. All the theoretical proofs are given in the Appendix.

Lemma 2

From the above-mentioned Algorithm 1, it is obvious that: (i) The designs \({\mathcal {C}}_{\alpha}\) and \({\mathcal {H}}^{new}_{\alpha} \) are LHDs with a fold-over structure, i.e., they can be written as follows \(\left( d^T ~-d^T\right) ^T.\) (ii) The design \({\mathcal {C}}_{\alpha} \) is an orthogonal LHD, i.e., the correlation between any i-th column and j-th column is \(\rho _{max}({\mathcal {C}}_{\alpha} )=\rho _{ij}({\mathcal {C}}_{\alpha} )=0,~1\le i\ne j\le 2^{\alpha }.\)

Theorem 1

The newly constructed LHDs \({\mathcal {H}}^{new}_{\alpha} \) are orthogonal LHDs, whereas the existing LHDs \({\mathcal {H}}^{P22}_{\alpha} \) are only nearly orthogonal LHDs, where for any \(\alpha \ge 2\) we get

$$\begin{aligned} \rho _{max}({\mathcal {H}}^{new}_{\alpha} )=\rho _{ij}({\mathcal {H}}^{new}_{\alpha} )=0<\rho _{max}({\mathcal {H}}^{P22}_{\alpha} ) =\rho _{ij}({\mathcal {H}}^{P22}_{\alpha} )=\frac{3}{4^{\alpha +1}-1},~1\le i\ne j\le 2^{\alpha -1}. \end{aligned}$$

Theorem 2

The newly constructed LHDs \({{\L }}^{new}_{\alpha} =({\mathcal {C}}_{\alpha} ~{\mathcal {H}}^{new}_{\alpha} )\) are asymptotically OLHDs, where

$$\begin{aligned} \rho _{max}\left( {{\L }}^{new}_{\alpha} \right) \longrightarrow 0~ \text{ as } ~\alpha \longrightarrow \infty . \end{aligned}$$

To provide numerical justifications for the above-mentioned main theoretical findings, Table 2 gives the maximum \(\rho _{max}\) and the mean \(\rho _{mean}\) of the correlations \(|\rho _{ij}|,~1\le i\ne j\le 3\times 2^{\alpha -1}\) between each two of the \(3\times 2^{\alpha -1}\) factors of the existing LHDs \({{\L }}^{P22}_{\alpha}\) and the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) for \(2\le \alpha \le 10.\) For a closer look and greater understanding, all the \(\left( {\begin{array}{c}3\times 2^{\alpha -1}\\ 2\end{array}}\right)\) correlations \(|\rho _{ij}|,~1\le i\ne j\le 3\times 2^{\alpha -1}\) for \({{\L }}^{P22}_{\alpha}\) and \({{\L }}^{new}_{\alpha}\) are given in Figs. 1, 2, 3, 4 for \(2\le \alpha \le 5.\) From Table 2 and Figs. 1, 2, 3, 4, we can conclude that: (i) The correlations among the \(3\times 2^{\alpha -1}\) factors of the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) are smaller than those among the factors of the existing LHDs \({{\L }}^{P22}_{\alpha} ,\) where \(\rho _{mean}({{\L }}^{new}_{\alpha} )<\rho _{mean}({{\L }}^{P22}_{\alpha} )\) (cf. the in-depth justification in Remark A1 in the Appendix). (ii) The existing LHDs \({{\L }}^{P22}_{\alpha}\) and the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) are nearly orthogonal LHDs with the same maximum correlation values, where \(\rho _{max}({{\L }}^{P22}_{\alpha} )=\rho _{max}({{\L }}^{new}_{\alpha} ).\) (iii) The efficiency in terms of orthogonality increases as \(\alpha\) increases, where \(\rho _{max}({{\L }}^{new}_{\alpha} )<\rho _{max}({{\L }}^{new}_{\beta} )~ \text{ for } \text{ any } ~ \alpha >\beta .\) Moreover, it is worth mentioning that \(\rho _{max}({{\L }}_{\alpha} ^{new})\simeq 0~ \text{ for } \text{ any } ~ \alpha \ge 10.\)

Table 2 The orthogonality of the newly constructed LHDs \({{\L }}_{\alpha} ^{new}\) compared with the existing LHDs \({{\L }}_{\alpha} ^{P22}\) for \(2\le \alpha \le 10\)

4 Investigating the \(L_2\)-distance of the newly constructed LHDs

This section investigates the \(L_2\)-distances among the runs of the newly constructed LHDs. A comparison study between the existing LHDs in Pang et al. (2022) (cf. Remark 1) and the newly constructed LHDs via the newly proposed Algorithm 1 is given. All the theoretical proofs are given in the Appendix.

Theorem 3

The design \({\mathcal {C}}_{\alpha}\) is a MDLHD among all the LHDs with \(2^{\alpha +1}\) runs, \(2^\alpha\) factors and a fold-over structure, where

$$\begin{aligned} ML_{2}D({\mathcal {C}}_{\alpha} )=\frac{2^{\alpha -1}}{3}(4^{\alpha +1}-1). \end{aligned}$$

Remark 2

From Lemma 2 and Theorem 3, we conclude that the design \({\mathcal {C}}_{\alpha}\) is an OMDLHD among all the LHDs with \(2^{\alpha +1}\) runs, \(2^\alpha\) factors and a fold-over structure.

Lemma 3

It is worth mentioning that the LHD \({\mathcal {C}}_{\alpha}\) has only the following two different \(L_2\)-distance values

$$\begin{aligned} L_{2}D_{ij}({\mathcal {C}}_{\alpha} )=\left\{ \begin{array}{ll} \frac{2^{\alpha }(4^{\alpha +1}-1)}{3}, &{} \hbox { for any }j=i+2^{\alpha } \hbox { and }1\le i\le 2^{\alpha }; \\ \frac{2^{\alpha -1}(4^{\alpha +1}-1)}{3}, &{} \hbox {otherwise.} \end{array} \right. \end{aligned}$$

Theorem 4

The newly constructed LHD \({\mathcal {H}}^{new}_{\alpha}\) and the existing LHD \({\mathcal {H}}^{P22}_{\alpha}\) have the same \(ML_2D,\) which is the \(L_2\)-distance between the i-th run and the \(j(=i+2^{\alpha })\)-th run, where

$$\begin{aligned} ML_{2}D({\mathcal {H}}^{new}_{\alpha} )=ML_{2}D({\mathcal {H}}^{P22}_{\alpha} )=2^{\alpha -1}. \end{aligned}$$

Theorem 5

The \(ML_2D\) of the newly constructed LHD \({{\L }}^{new}_{\alpha}\) is larger than the \(ML_2D\) of the existing LHD \({{\L }}^{P22}_{\alpha} ,\) where

$$\begin{aligned} ML_{2}D({{\L }}^{new}_{\alpha} )=2^{3\alpha }-2^{2\alpha -1}-2^ {\alpha -2}>ML_{2}D({{\L }}^{P22}_{\alpha} )=2^{3\alpha }-\frac{3}{4}2^{2\alpha }. \end{aligned}$$

Moreover, the difference between them is given as follows

$$\begin{aligned} ML_{2}D({{\L }}^{new}_{\alpha} )-ML_{2}D({{\L }}^{P22}_{\alpha} )=4^{\alpha -1}-2^{\alpha -2}>0. \end{aligned}$$

Theorem 6

The newly constructed LHDs \({{\L }}^{new}_{\alpha}\) are asymptotically MDLHDs, where the efficiency is given as follows

$$\begin{aligned} eff\left( {{\L }}^{new}_{\alpha} \right) \longrightarrow 1~ \text{ as } ~\alpha \longrightarrow \infty . \end{aligned}$$

To provide numerical justifications for the above-mentioned theoretical main findings, Table 3 gives the \(ML_2D\) between any two runs of all the \(2^{\alpha +1}\) runs of the existing LHDs \({{\L }}^{P22}_{\alpha}\) and the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) for \(2\le \alpha \le 10\). Moreover, the upper bound of the \(ML_2D,\) \(UB=\left\lfloor {2^{3\alpha }+2^{2\alpha -1}}\right\rfloor ,\) where \(\lfloor \zeta \rfloor\) is the integral part of \(\zeta\) (cf. Zhou and Xu (2015)), and the corresponding efficiency of the newly constructed LHDs are calculated in Table 3. For a closer look and greater understanding, all the \(\left( {\begin{array}{c}2^{\alpha +1}\\ 2\end{array}}\right)\) \(L_2\)-distances \(L_2D_{ij}\) between each two runs of all the \(2^{\alpha +1}\) runs of the LHDs \({\mathcal {C}}_{\alpha}\) for \(2\le \alpha \le 4\) are given in Fig. 5; all the \(\left( {\begin{array}{c}2^{\alpha +1}\\ 2\end{array}}\right)\) \(L_2\)-distances between each two runs of all the \(2^{\alpha +1}\) runs of the existing LHDs \({\mathcal {H}}^{P22}_{\alpha}\) and the newly constructed LHDs \({\mathcal {H}}^{new}_{\alpha}\) for \(2\le \alpha \le 4\) are given in Figs. 6, 7, 8; and all the \(\left( {\begin{array}{c}2^{\alpha +1}\\ 2\end{array}}\right)\) \(L_2\)-distances between each two runs of all the \(2^{\alpha +1}\) runs of \({{\L }}^{P22}_{\alpha}\) and \({{\L }}^{new}_{\alpha}\) for \(2\le \alpha \le 4\) are given in Figs. 9, 10, 11. From Table 3 and Figs. 5, 6, 7, 8, 9, 10, 11, we can conclude that: (i) For any \(\alpha \ge 2,\) the LHD \({\mathcal {C}}_{\alpha}\) has only two different \(L_2\)-distance values. 
(ii) For any \(\alpha \ge 2,\) the newly constructed LHD \({\mathcal {H}}^{new}_{\alpha}\) and the existing LHD \({\mathcal {H}}^{P22}_{\alpha}\) have the same \(ML_2D\); however, the \(2^{nd}ML_{2}D\) of the newly constructed LHD \({\mathcal {H}}^{new}_{\alpha}\) is larger than that of the existing LHD \({\mathcal {H}}^{P22}_{\alpha} .\) (iii) The newly constructed LHDs \({{\L }}^{new}_{\alpha}\) are better than the existing LHDs \({{\L }}^{P22}_{\alpha}\), where \(ML_{2}D({{\L }}^{new}_{\alpha} )>ML_{2}D({{\L }}^{P22}_{\alpha} ).\) (iv) The efficiency in terms of the \(ML_2D\) increases as \(\alpha\) increases, where \(eff({{\L }}_{\alpha} ^{new})>eff({{\L }}_{\beta} ^{new})~ \text{ for } \text{ any } ~ \alpha >\beta .\) Moreover, it is worth mentioning that \(eff({{\L }}_{\alpha} ^{new})\simeq 1~ \text{ for } \text{ any } ~ \alpha \ge 10.\)

Table 3 The \(ML_2D\) of the newly constructed LHDs \({{\L }}_{\alpha} ^{new}\) compared with the existing LHDs \({{\L }}_{\alpha} ^{P22}\) for \(2\le \alpha \le 10\)

5 An iterative algorithm for improving the new LHDs

As we can see from the foregoing discussions, the maximum pairwise correlation between the factors and the minimum \(L_2\)-distance between the points are both good criteria for generating efficient LHDs. One might expect that minimizing the correlation should spread out the points and that maximizing the distance between the points should reduce the correlation. In practice, however, each criterion has its advantages and disadvantages from different perspectives; there is no one-to-one relationship between them, and designs obtained by them can be quite different (cf. Joseph and Hung, 2008). Figure 12 illustrates this fact for 100 randomly generated LHDs with 10 factors and 8 runs. Thus, it is still hard to classify LHDs based on these two criteria, and there is a need to present a new criterion or to investigate a new perspective on the use of these two efficient criteria. A bi-objective criterion (\(CorML_2D\)) for constructing nearly OMDLHDs can be defined for the newly constructed LHD \({{\L }}\) with \(2^{\alpha +1}\) runs and \(3\times 2^{\alpha -1}\) factors as follows

$$\begin{aligned} CorML_2D_{\omega }({{\L }})=\omega (1-\rho _{max}({{\L }}))+(1-\omega )\frac{ML_2D({{\L }})}{\left\lfloor {2^{3\alpha }+2^{2\alpha -1}}\right\rfloor },~0\le \omega \le 1. \end{aligned}$$

The optimal LHD \({{\L }}^{optimal}\) from the set of all the possible LHDs \({\mathcal {U}}\) with the same size maximizes \(CorML_2D_{\omega }\) for a given \(\omega\) over all the LHDs in \({\mathcal {U}},\) i.e.,

$$\begin{aligned} CorML_2D_{\omega }\left( {{\L }}^{optimal}\right) =\max _{{{\L }}\in {\mathcal {U}}}\left( CorML_2D_{\omega }({{\L }})\right) . \end{aligned}$$

Therefore, it is obvious that: (i) The optimal LHD \({{\L }}^{optimal}\) is an OLHD if \(\omega =1\) and a MDLHD if \(\omega =0.\) (ii) When \(\omega\) is close to 0, the new bi-objective criterion \(CorML_2D_{\omega }\) is more sensitive to the \(ML_2D\) criterion, to which it is directly proportional (cf. Fig. 13). (iii) When \(\omega\) is close to 1, the new bi-objective criterion \(CorML_2D_{\omega }\) is more sensitive to the orthogonality criterion, to which it is inversely proportional (cf. Fig. 14). (iv) The new bi-objective criterion \(CorML_2D_{\omega }\) is more sensitive to the \(ML_2D\) criterion than to the orthogonality criterion (cf. Fig. 15).
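The bi-objective criterion can be evaluated directly from its definition (a self-contained NumPy sketch; the name `cor_ml2d` is ours, and the design is assumed to have \(2^{\alpha +1}\) runs so that the Zhou-Xu bound applies):

```python
import numpy as np

def cor_ml2d(D, omega, alpha):
    """CorML2D_omega = omega*(1 - rho_max) + (1 - omega)*ML2D/UB,
    with UB = floor(2^(3*alpha) + 2^(2*alpha-1)) (cf. Zhou and Xu, 2015)."""
    D = np.asarray(D, dtype=float)
    # rho_max: maximum absolute pairwise column correlation.
    R = np.corrcoef(D, rowvar=False)
    np.fill_diagonal(R, 0.0)
    rho = np.max(np.abs(R))
    # ML2D: minimum squared L2-distance between distinct rows.
    diff = D[:, None, :] - D[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    i, j = np.triu_indices(len(D), k=1)
    ml2d = d2[i, j].min()
    ub = int(2 ** (3 * alpha) + 2 ** (2 * alpha - 1))
    return omega * (1 - rho) + (1 - omega) * ml2d / ub
```

At \(\omega =1\) the criterion reduces to \(1-\rho _{max}\) and at \(\omega =0\) to the \(ML_2D\) efficiency, so maximizing it interpolates between the two objectives.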

To provide numerical justifications of the power of the new criterion, Fig. 16 and Table 4 give the new bi-objective criterion \(CorML_2D_{\omega }\) for the existing LHDs \({{\L }}^{P22}_{\alpha}\) and the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) for \(2\le \alpha \le 10\) and \(0\le \omega \le 1.\) From Table 4 and Fig. 16, we get that the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) are better than the existing LHDs \({{\L }}^{P22}_{\alpha}\) in terms of the orthogonality (i.e., \(\omega\) close to 1); the \(ML_2D\) (i.e., \(\omega\) close to 0); and a mixture of orthogonality and \(ML_2D\) (i.e., \(\omega\) close to 0.5). It is worth mentioning that when \(\omega =1\), \(CorML_2D_{1}({{\L }}_{\alpha} ^{P22})=CorML_2D_{1}({{\L }}_{\alpha} ^{new}),\) which means that \({{\L }}^{P22}_{\alpha}\) and \({{\L }}^{new}_{\alpha}\) have the same maximum correlation, consistent with the result in Sect. 3. Moreover, this new criterion can be combined with an iterative algorithm for further improvement of the performance of the newly constructed LHDs.

The threshold accepting (TA) algorithm (cf. Fang et al., 2000) and the adjusted TA algorithm (cf. Fang et al., 2017) are widely used efficient iterative algorithms for generating a design that is as close to optimal as possible from a randomly generated initial design. When the initial design is good (as close to optimal as possible), a resulting design that is better than the initial one can be generated after a few iterations. However, when the initial design is poor (far from optimal), an optimal resulting design cannot be generated within the same number of iterations; more iterations (time) are needed and there is no guarantee that the resulting design will be optimal. Therefore, using designs that are as good as possible as initial designs is much better than using randomly generated designs. Since the newly constructed LHDs \({{\L }}^{new}_{\alpha}\) in this paper are nearly optimal in terms of orthogonality and \(ML_2D\), they can be used as initial LHDs for the TA algorithm or the adjusted TA algorithm to obtain much better LHDs from various perspectives. Figure 17 gives the flowchart of an updated version of the TA algorithm as an iterative algorithm for improving the performance of the newly constructed LHDs in terms of the new bi-objective criterion \(CorML_2D_{\omega }\) for a given \(\omega\). It is worth mentioning that, using the updated version of the TA algorithm in Fig. 17 with small numbers of iterations, we could not get LHDs better than our newly constructed LHDs, which indicates that the newly constructed LHDs are optimal, especially for large sizes.
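A generic threshold-accepting loop of the kind described above can be sketched as follows (a simplified scalar-threshold variant, not the exact flowchart of Fig. 17; the function name and the default thresholds are ours, and the neighbourhood move swaps two levels within a randomly chosen column, which preserves the LHD property):

```python
import numpy as np

def ta_improve(D, objective, n_iter=1000,
               thresholds=(0.05, 0.02, 0.01, 0.0), seed=None):
    """Threshold accepting for a criterion to be MAXIMIZED:
    accept a neighbour unless it worsens the objective by more
    than the current threshold; shrink the threshold in stages."""
    rng = np.random.default_rng(seed)
    cur = np.asarray(D, dtype=float).copy()
    best, best_val = cur, objective(cur)
    cur_val = best_val
    for t in thresholds:
        for _ in range(n_iter):
            cand = cur.copy()
            j = rng.integers(cand.shape[1])            # random column
            i1, i2 = rng.choice(cand.shape[0], 2, replace=False)
            cand[[i1, i2], j] = cand[[i2, i1], j]      # swap two levels
            val = objective(cand)
            if val >= cur_val - t:                     # threshold acceptance
                cur, cur_val = cand, val
                if val > best_val:
                    best, best_val = cand, val
    return best, best_val
```

The thresholds should be scaled to the range of the chosen criterion (e.g., \(CorML_2D_{\omega }\), which lies in \([0,1]\)); passing the newly constructed \({{\L }}^{new}_{\alpha}\) as the initial design `D` implements the warm-start strategy advocated above.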

Table 4 The new bi-objective criterion of the newly constructed LHDs \({{\L }}_{\alpha} ^{new}\) compared with the existing LHDs \({{\L }}_{\alpha} ^{P22}\) for \(2\le \alpha \le 10\)

6 Conclusion and future work

A new non-iterative construction algorithm for nearly orthogonal maximin distance LHDs is given in this paper. The newly constructed LHDs are compared with the existing LHDs in Pang et al. (2022) and appear to perform better. The main results show that the newly constructed LHDs are asymptotically orthogonal maximin distance LHDs. Moreover, an iterative algorithm for further improvement of the performance of the newly constructed LHDs is given using a new criterion. Furthermore, the iterative algorithm using the SA2008 function in the R package LHD (cf. Joseph and Hung, 2008) is used to generate LHDs with the same sizes as the newly constructed LHDs, and \(\rho _{max}\), \(\rho _{mean}\) and \(ML_2D\) are calculated in Table 5. The results show that, aside from the fact that the SA2008 function generates LHDs slowly and takes too long for large sizes, the LHDs generated by our new non-iterative algorithm are superior to those generated by the SA2008 function in view of orthogonality and \(L_2\)-distance.

Table 5 The orthogonality and \(ML_2D\) of the newly constructed LHDs \({{\L }}_{\alpha} ^{new}\) compared with the LHDs that are generated using the SA2008 function in the R package LHD

There are some other interesting directions related to this work for further study, such as: (i) This idea can be extended to generate large nearly orthogonal maximin distance LHDs by combining the technique in this paper with the multiple doubling technique in Elsawah (2021). The initial results show that this is a promising direction, and the results will be given in a future paper. (ii) The bi-objective criterion can be extended to a multi-objective criterion by combining more criteria. For example, Elsawah and Fang (2019) constructed uniform minimum aberration designs by minimizing the generalized word-length pattern (GWLP, cf. Ma and Fang, 2001) and then minimizing the used uniformity criterion. However, combining these two criteria as in the above-mentioned bi-objective criterion needs more attention, since the uniformity criterion is a scalar while the GWLP is a sequence. In such cases, the dictionary cross-entropy loss function from Weng et al. (2021) can be used to transfer (simplify) the GWLP from a sequence to a scalar. The power of combining many criteria from this point of view will be investigated in our future work. (iii) Suppose that an experimenter begins the experimentation using the newly constructed LHDs. After the experiment is over, or during the experimentation, some additional resources become available and the experimenter needs to add more runs to the design. How does the experimenter pick the additional runs and augment the original LHD so as to get an optimal extended LHD? A future paper will provide an answer to this question by combining the technique in this paper with the augmentation techniques in Elsawah et al. (2019) (cf. also Elsawah and Qin, 2016). (iv) Finally, the projection properties of the newly constructed LHDs need to be investigated, and the relationship between a newly constructed LHD in the full dimension and its low-dimensional projections will be studied in the light of the results in Elsawah et al. (2019).