1 Introduction

Over the past few decades, meta-models and meta-model-based search methods have attracted considerable attention. With meta-models, the majority of candidate designs can be evaluated without running the expensive analyses. A meta-model, also known as a surrogate model or approximation model, is commonly used in place of expensive evaluations, usually computer analyses and simulations. Many representative meta-model fitting techniques have been developed. Kriging (Cressie 1988; Krige 1953; Sacks et al. 1989a, b), quadratic functions (QF) (Myers and Montgomery 2002), and radial basis functions (RBF) (Dyn et al. 1986; Fang and Horstemeyer 2006; Hardy 1971) are commonly selected models. Of the three, kriging is more accurate for low-order nonlinear and large-scale problems; QF is less accurate than kriging but easy to use and is recommended for low-order nonlinear problems; RBF interpolates the sample points, is easy to construct, and is recommended for highly nonlinear problems (Fang et al. 2005; Jin et al. 2001; Wang and Shan 2007). Besides these three meta-models, multivariate adaptive regression splines (MARS) (Friedman 1991) and support vector regression (SVR) (Clarke et al. 2005) are also alternatives for fitting a given problem. However, MARS may perform poorly when small or scarce sample sets are used (Jin et al. 2001). SVR outperformed kriging, MARS, QF, and RBF in accuracy on a large number of test problems (Jin et al. 2001), but the fundamental reason why SVR performs best is not clear (Wang and Shan 2007).

Meta-model-based iterative algorithms have also been intensively studied. Jones et al. employed kriging in the search process and developed efficient global optimization (EGO), which adds a single point per cycle to improve on the present best sample using the expected improvement criterion (Jones et al. 1998). Wang et al. used RBF and QF in different search stages and developed the mode-pursuing sampling (MPS) method (Sharif et al. 2008; Wang et al. 2004). To obtain more accurate meta-models, dynamic meta-model methods have been studied. Zhao et al. used a genetic algorithm to obtain an optimal mean structure of kriging and developed the dynamic kriging method (Zhao et al. 2011). Volpi et al. extended standard RBF with stochastic kernel functions and developed dynamic radial basis functions (DRBF) (Volpi et al. 2015). To improve search accuracy and efficiency, space reduction strategies have also been intensively studied. Wang et al. used a given threshold to reduce the design space and developed the adaptive response surface method (ARSM) (Wang 2003; Wang et al. 2001). Shin et al. employed an interval method to reduce the design space (Shin and Grandhi 2001). Fadel et al. employed move-limit strategies (Fadel and Cimtalay 1993; Fadel et al. 1990; Wujek and Renaud 1998a, b). Celis et al. employed a trust region strategy to change the design space (Byrd et al. 1987; Celis et al. 1985; Rodriguez et al. 1998). However, once part of the space is deleted, the global optimum may be removed along with it, and a single meta-model may provide poor accuracy due to the opaque nature of practical problems.

In recent years, researchers have tried to use multiple meta-models together in the search process. An ensemble of meta-models is a relatively easy way to implement this idea. Acar et al. built an ensemble of meta-models with optimized weight factors (Acar and Rais-Rohani 2009). Lee et al. proposed an ensemble of meta-models whose weights vary with the prediction point of interest (Lee and Choi 2014). Gu et al. determined the weight factors of the meta-models using a heuristic method (Gu et al. 2015). Jie et al. adaptively selected the weight factors of a hybrid model during the optimization process (Jie et al. 2015). Shi et al. decided the weights of the radial basis functions by solving a quadratic programming subproblem (Shi et al. 2016). Ferreira et al. used a least squares approach to create an ensemble of meta-models and extended the strategy to efficient global optimization (Ferreira and Serpa 2016, 2018). Ye et al. used an ensemble of meta-models with optimized weight factors in a design space reduced by a fuzzy clustering technique (Ye and Pan 2017). Yin et al. divided the design space into multiple subdomains and constructed an ensemble of meta-models with a set of optimized weight factors for each subdomain (Yin et al. 2018). Other researchers have combined multiple meta-models in the optimization process without assigning explicit weight factors. Gu et al. used kriging, QF, and RBF together and developed the hybrid and adaptive meta-modeling (HAM) method (Gu et al. 2012, 2009). Cai et al. employed QF in the search of the important region and kriging in the whole design space (Cai et al. 2018). Viana et al. used multiple meta-models in the EGO cycle and proposed the multiple surrogate efficient global optimization (MSEGO) algorithm (Viana et al. 2013). However, the performance of these methods still needs to be improved for complex expensive problems in engineering.

In this work, a hybrid meta-model-based design space exploration (HMDSE) method is proposed. In this method, an important region is first constructed using a varied number of the current expensive points, which are evaluated by the expensive problems to be solved. Different from conventional space reduction methods, the search is carried out both in the important region and in the remaining region. To further verify the global optimum, the whole design space is also searched simultaneously using the meta-models. Intensive tests show that the proposed HMDSE method achieves excellent accuracy, efficiency, and robustness.

2 Hybrid meta-model-based design space exploration (HMDSE) method

Using multiple meta-models together in the search process offers insurance in solving a given problem and at least improves the robustness of the evaluations (Goel et al. 2007; Viana et al. 2010). In the proposed HMDSE method, an important region is constructed, and kriging, RBF, and QF are used together both in the important region and in the remaining region. In addition, the whole design space is searched again for the global optimum. The procedure of the proposed HMDSE method is shown in Fig. 1.

Fig. 1
figure 1

Procedures of the HMDSE method

2.1 Procedures of the HMDSE method

2.1.1 Step 1: Sample initial points

In the proposed HMDSE method, the widely used Latin hypercube design (LHD) is employed to generate points. The math form of LHD is shown in (1).

$$ {S}_{i,j}=\frac{1}{n}\left({F}_{i,j}-{P}_{i,j}\right) $$
(1)

where n is the number of sample points, Fi,j are randomly permuted integers from 1 to n, and Pi,j is a random number in [0,1]. A detailed description of LHD can be found in the literature (Fang et al. 2006; Mckay et al. 1979). In this step, 14 initial points, \( {x}_I^1,{x}_I^2,\cdots, {x}_I^{14} \), are generated; this number does not grow as the number of design variables increases. More initial points can also be defined by the user. The initial points are evaluated using the original expensive problems and are therefore called expensive points.
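The sampling rule in (1) can be sketched in a few lines of Python. This is a minimal illustration using NumPy, not the authors' implementation; the helper name `lhd` and the layout (rows as sample points, columns as variables) are assumptions:

```python
import numpy as np

def lhd(n_points, n_vars, seed=None):
    """Latin hypercube design per (1): S = (F - P) / n, where each
    column of F is a random permutation of 1..n and P is uniform
    in [0, 1). Rows are sample points, columns are variables."""
    rng = np.random.default_rng(seed)
    F = np.column_stack([rng.permutation(n_points) + 1
                         for _ in range(n_vars)])
    P = rng.random((n_points, n_vars))
    return (F - P) / n_points  # each sample falls in its own stratum

samples = lhd(14, 5, seed=0)  # 14 initial points in a 5-variable unit space
```

The permutation guarantees that, along every dimension, exactly one sample lies in each of the n equal-width strata.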

2.1.2 Step 2: Identify the important region

The important region is a relatively small subregion of the design space that may contain the global optimum; it is gradually reduced during the search. In the proposed HMDSE method, the important region is constructed using the subset of expensive points with the lowest function values. The number of expensive points used to construct the important region is defined as follows (Cai et al. 2018):

$$ {\displaystyle \begin{array}{l} ne=\left\{\begin{array}{ll}\operatorname{int}\left({w}_i\, me\right), & i=1,2,\ldots ,10\\ {}10, & i>10\end{array}\right.\\ {}{w}_i=1.0-0.1\left(i-1\right),\kern0.5em i=1,2,\ldots ,10\end{array}} $$
(2)

where me is the number of all expensive points evaluated so far and i is the iteration number. According to (2), if 13 new points are selected in each iteration, the number of points used to construct the important region increases from 14 to a maximum of about 40 and then decreases to 10 (Table 1).
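The schedule in (2) can be sketched as follows. This assumes int(·) truncates and that me grows by 13 expensive points per iteration, as in the text; with truncation the peak is 39 rather than 40, so the quoted maximum presumably reflects rounding:

```python
def ne(i, me):
    """Points used to build the important region at iteration i,
    per (2); me is the number of expensive points evaluated so far."""
    if i > 10:
        return 10
    w = 1.0 - 0.1 * (i - 1)
    return int(w * me)

# assuming 14 initial points plus 13 new expensive points per iteration
schedule = [ne(i, 14 + 13 * (i - 1)) for i in range(1, 13)]
```

The schedule rises for roughly the first five iterations and then falls, which is what makes the region shrink slowly at first and rapidly later.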

Table 1 The values of ne

The gradually varied number of points used to construct the important region makes the region shrink gradually over the first several iterations and then contract rapidly toward the global optimum (see Fig. 2).

Fig. 2
figure 2

An illustration of the important region

As can be seen from Fig. 2, the important region contains the global minimum throughout the search process, and the searches in the remaining region and the whole design space help to verify the global optimum even when it lies outside the important region. It can also be seen that the important region is reduced gradually in the first five iterations and rapidly thereafter.

2.1.3 Step 3: Fit the meta-models

In this step, three representative meta-models will be fitted: kriging, QF, and RBF.

Kriging

Kriging is a widely used meta-model, and its math form is shown below:

$$ \widehat{y}\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+Z\left(\mathbf{x}\right) $$
(3)

where f(x) is an approximation function; a constant term is used in this work. Z(x) is a random process with zero mean and covariance Cov[Z(xi), Z(xj)]. The kriging model used in this work can thus be expressed as:

$$ \widehat{y}\left(\mathbf{x}\right)=\beta +Z\left(\mathbf{x}\right) $$
(4)
$$ \operatorname{Cov}\left[Z\left({x}^i\right),Z\left({x}^j\right)\right]={\sigma}^2\mathbf{R}\left({x}^i,{x}^j\right) $$
(5)

where σ2 is the process variance and R is the correlation function. In this work, a Gaussian correlation function is employed (Simpson et al. 2001).

$$ \mathbf{R}\left({x}^i,{x}^j\right)=\exp \left[-\sum \limits_{k=1}^{n_s}{\theta}_k{\left|{x}_k^i-{x}_k^j\right|}^2\right] $$
(6)

where θk are the correlation parameters used to fit the model and ns is the number of design variables. The starting value of each θk is 10, with bounds ranging from 0.1 to 20 (Lophaven et al. 2002). \( {x}_k^i \) and \( {x}_k^j \) are the kth components of the points xi and xj. A detailed description of kriging can be found in the literature (Simpson et al. 2001).
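The Gaussian correlation of (6), with the conventional negative exponent, can be sketched as a vectorized helper. The function name `gauss_corr` is hypothetical:

```python
import numpy as np

def gauss_corr(X, theta):
    """Gaussian correlation matrix per (6):
    R(x^i, x^j) = exp(-sum_k theta_k |x_k^i - x_k^j|^2)."""
    d2 = (X[:, None, :] - X[None, :, :]) ** 2   # pairwise squared diffs
    return np.exp(-np.einsum('ijk,k->ij', d2, theta))
```

The result is a symmetric matrix with unit diagonal; larger θk values make the correlation decay faster along dimension k.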

RBF

The expression of RBF is shown in (7)

$$ \widehat{y}=\phi \left(\mathbf{x}\right)=\sum \limits_{i=1}^n{\beta}_i\left\Vert \mathbf{x}-{\mathbf{x}}_i\right\Vert $$
(7)

where ‖ • ‖ denotes the Euclidean norm, βi are the coefficients, and xi are the sample points.
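A minimal sketch of fitting the linear RBF of (7): the coefficients β follow from solving the interpolation system Φβ = y, where Φij = ‖xi − xj‖ (the distance matrix of distinct points is nonsingular for this basis). The helper names are hypothetical:

```python
import numpy as np

def fit_rbf(X, y):
    """Solve Phi beta = y with Phi_ij = ||x_i - x_j||, per (7)."""
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(Phi, y)

def rbf_predict(X_train, beta, x):
    """Evaluate the fitted RBF at a single point x."""
    return float(beta @ np.linalg.norm(X_train - x, axis=-1))
```

Because this kernel interpolates, the fitted model reproduces every sample point exactly, which is the property the text highlights for RBF.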

QF

The general form of QF is shown in (8).

$$ \widehat{y}\left(\mathbf{x}\right)={\beta}_o+\sum \limits_{i=1}^k{\beta}_i{x}_i+\sum \limits_{i=1}^k{\beta}_{ii}{x}_i^2+\sum \limits_i\sum \limits_j{\beta}_{ij}{x}_i{x}_j $$
(8)

where the coefficients β are estimated by the least squares method.
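The least squares fit of (8) can be sketched as follows, with hypothetical helper names built on NumPy:

```python
import numpy as np
from itertools import combinations

def qf_basis(X):
    """Quadratic basis of (8): constant, linear, squared, cross terms."""
    k = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_qf(X, y):
    """Least squares estimate of the coefficients beta."""
    beta, *_ = np.linalg.lstsq(qf_basis(X), y, rcond=None)
    return beta
```

With k variables the basis has 1 + 2k + k(k − 1)/2 terms, so at least that many sample points are needed for a well-posed fit.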

2.1.4 Step 4: Generate three sets of a large number of points

In this step, three sets of N points each will be generated, one in each of the three regions; N is 10,000 or more per set (Wang et al. 2004). The points are \( {P}_{IR}^1,{P}_{IR}^2,...,{P}_{IR}^N \) generated in the important region, \( {P}_{RR}^1,{P}_{RR}^2,...,{P}_{RR}^N \) generated in the remaining region, and \( {P}_{WDS}^1,{P}_{WDS}^2,...,{P}_{WDS}^N \) generated in the whole design space. These points will be evaluated by the meta-models and are therefore called cheap points.

2.1.5 Step 5: Evaluate the points

In this step, the three sets of points will be evaluated by the three meta-models, and nine sets of function values will be obtained.

  • For the points generated in the important region, the function values are \( \widehat{f}\left({P}_{IR}^1\right),\widehat{f}\left({P}_{IR}^2\right),...,\widehat{f}\left({P}_{IR}^N\right) \) evaluated by kriging, \( \widehat{g}\left({P}_{IR}^1\right),\widehat{g}\left({P}_{IR}^2\right),...,\widehat{g}\left({P}_{IR}^N\right) \) evaluated by RBF, and \( \widehat{h}\left({P}_{IR}^1\right),\widehat{h}\left({P}_{IR}^2\right),...,\widehat{h}\left({P}_{IR}^N\right) \) evaluated by QF.

  • For the points generated in the remaining region, the function values are \( \widehat{f}\left({P}_{RR}^1\right),\widehat{f}\left({P}_{RR}^2\right),...,\widehat{f}\left({P}_{RR}^N\right) \) evaluated by kriging, \( \widehat{g}\left({P}_{RR}^1\right),\widehat{g}\left({P}_{RR}^2\right),...,\widehat{g}\left({P}_{RR}^N\right) \) evaluated by RBF, and \( \widehat{h}\left({P}_{RR}^1\right),\widehat{h}\left({P}_{RR}^2\right),...,\widehat{h}\left({P}_{RR}^N\right) \) evaluated by QF.

  • For the points generated in the whole design space, the function values are \( \widehat{f}\left({P}_{WDS}^1\right),\widehat{f}\left({P}_{WDS}^2\right),...,\widehat{f}\left({P}_{WDS}^N\right) \) evaluated by kriging, \( \widehat{g}\left({P}_{WDS}^1\right),\widehat{g}\left({P}_{WDS}^2\right),...,\widehat{g}\left({P}_{WDS}^N\right) \) evaluated by RBF, and \( \widehat{h}\left({P}_{WDS}^1\right),\widehat{h}\left({P}_{WDS}^2\right),...,\widehat{h}\left({P}_{WDS}^N\right) \) evaluated by QF.

2.1.6 Step 6: Select the n points with lowest function values

In each region, three sets of n points will be obtained; n is 100 in this work and can also be defined by the user.

  • In the important region: \( {P}_{IR}^{KL-1},{P}_{IR}^{KL-2},...,{P}_{IR}^{KL-n} \) according to the values evaluated by kriging, \( {P}_{IR}^{RL-1},{P}_{IR}^{RL-2},...,{P}_{IR}^{RL-n} \) according to the values evaluated by RBF, and \( {P}_{IR}^{QL-1},{P}_{IR}^{QL-2},...,{P}_{IR}^{QL-n} \) according to the values evaluated by QF

  • In the remaining region: \( {P}_{RR}^{KL-1},{P}_{RR}^{KL-2},...,{P}_{RR}^{KL-n} \), \( {P}_{RR}^{RL-1},{P}_{RR}^{RL-2},...,{P}_{RR}^{RL-n} \), and \( {P}_{RR}^{QL-1},{P}_{RR}^{QL-2},...,{P}_{RR}^{QL-n} \) are obtained

  • In the whole design space: \( {P}_{WDS}^{KL-1},{P}_{WDS}^{KL-2},...,{P}_{WDS}^{KL-n} \), \( {P}_{WDS}^{RL-1},{P}_{WDS}^{RL-2},...,{P}_{WDS}^{RL-n} \) and \( {P}_{WDS}^{QL-1},{P}_{WDS}^{QL-2},...,{P}_{WDS}^{QL-n} \) are obtained

2.1.7 Step 7: Group the points

  • In the important region: the points \( {P}_{IR}^{KL-1},{P}_{IR}^{KL-2},...,{P}_{IR}^{KL-n} \), \( {P}_{IR}^{RL-1},{P}_{IR}^{RL-2},...,{P}_{IR}^{RL-n} \), and \( {P}_{IR}^{QL-1},{P}_{IR}^{QL-2},...,{P}_{IR}^{QL-n} \) are all drawn from \( {P}_{IR}^1,{P}_{IR}^2,...,{P}_{IR}^N \), so a selected point may appear in all three sets, in any two of them, or in only one, and seven subsets of points can be obtained. Let A be the set containing \( {P}_{IR}^{KL-1},{P}_{IR}^{KL-2},...,{P}_{IR}^{KL-n} \), B the set containing \( {P}_{IR}^{RL-1},{P}_{IR}^{RL-2},...,{P}_{IR}^{RL-n} \), and C the set containing \( {P}_{IR}^{QL-1},{P}_{IR}^{QL-2},...,{P}_{IR}^{QL-n} \). The seven subsets are then obtained using (9) (Gu et al. 2012).

$$ {\displaystyle \begin{array}{l}{S}_1=A\cap B\cap C;\\ {}{S}_2=A\cap B-{S}_1;{S}_3=A\cap C-{S}_1;{S}_4=B\cap C-{S}_1;\\ {}{S}_5=A-{S}_1-{S}_2-{S}_3;{S}_6=B-{S}_1-{S}_2-{S}_4;{S}_7=C-{S}_1-{S}_3-{S}_4;\end{array}} $$
(9)

The points generated in the other two regions will also be grouped using (9).
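The partition in (9) is plain set arithmetic; a minimal sketch, with a hypothetical helper and points represented by hashable IDs:

```python
def group_subsets(A, B, C):
    """Partition per (9): S1 holds points selected by all three
    meta-models, S2-S4 by exactly two, S5-S7 by exactly one."""
    A, B, C = set(A), set(B), set(C)
    S1 = A & B & C
    S2 = (A & B) - S1
    S3 = (A & C) - S1
    S4 = (B & C) - S1
    S5 = A - S1 - S2 - S3
    S6 = B - S1 - S2 - S4
    S7 = C - S1 - S3 - S4
    return [S1, S2, S3, S4, S5, S6, S7]
```

The seven subsets are pairwise disjoint and together cover A ∪ B ∪ C.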

2.1.8 Step 8: Select new points

  • In the important region: if an average of one new point is selected from each subset, about seven new points will be obtained. The number of new points selected from each subset is defined using (10) (Gu et al. 2012):

$$ {\displaystyle \begin{array}{l}{k}_i=\operatorname{int}\left({w}_i\ast M\right),\kern0.5em i=1,2,\ldots ,7\\ {}{w}_i=\frac{m_i\times {l}_i}{3n},\kern0.5em i=1,2,\ldots ,7\\ {}\sum \limits_{i=1}^7{w}_i=1\end{array}} $$
(10)

where M is the total number of newly selected points (M = 7 in the important region), n is the number of points in each set obtained in step 6 (n = 100 in this work), mi is the number of points in Si, and li is a weighting factor. The points in S1 appear in all three of A, B, and C; the points in S2 to S4 appear in exactly two of them; and the points in S5 to S7 appear in only one. Hence l1 = 3, l2–4 = 2, and l5–7 = 1. The ki points with the lowest function values in each of S1 to S7 are then selected.
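The allocation in (10) can be sketched as follows (hypothetical helper; `sizes` lists the pair (mi, li) for S1 through S7, and the example sizes are invented so that Σ mi·li = 3n):

```python
def allocate_new_points(sizes, M, n=100):
    """k_i = int(w_i * M) with w_i = m_i * l_i / (3 n), per (10)."""
    w = [m * l / (3 * n) for m, l in sizes]
    k = [int(wi * M) for wi in w]
    return k, w

# example: subset sizes chosen so that sum(m_i * l_i) = 3 n = 300
sizes = [(20, 3), (30, 2), (20, 2), (10, 2), (40, 1), (50, 1), (30, 1)]
k, w = allocate_new_points(sizes, M=7)
```

Because int(·) truncates, the ki may sum to slightly fewer than M, which is why the text says "about seven" new points.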

An illustration of steps 4 to 8 in the important region is shown in Fig. 3.

Fig. 3
figure 3

An illustration of new points’ selection in the important region

The new points in the other two regions are selected in the same way using (10), with M = 3 used to save computation time. More new points can also be selected by the users.

So about 13 new points are selected in each iteration of the proposed HMDSE method. These 13 points are expensive points and are then evaluated using the original expensive problems.

2.1.9 Step 9: Check convergence

The number of points used to construct the important region becomes fixed from the 11th iteration, and the stop criterion starts to work at the 10th iteration. To avoid premature convergence of the proposed algorithm, the program stops when the improvement of the mean of the five lowest function values becomes negligible; see (11).

$$ \left|{\bar{F}}_{i+1}-{\bar{F}}_i\right|\le \varepsilon, \kern1em {\bar{F}}_i=\frac{1}{5}\sum \limits_{j=1}^5{f}_j,\kern1em i=10,11,12,\ldots $$
(11)

where fj is the jth lowest function value and ε is a small value defined by the user. Users can also define other criteria based on their needs.
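The stop test in (11) can be sketched as follows (hypothetical helpers; each argument is the list of all expensive function values available at an iteration):

```python
def mean_of_five_lowest(values):
    """F_bar in (11): mean of the five lowest function values."""
    return sum(sorted(values)[:5]) / 5.0

def converged(values_prev, values_curr, eps=1e-4):
    """Stop when the mean of the five lowest values barely changes."""
    return abs(mean_of_five_lowest(values_curr)
               - mean_of_five_lowest(values_prev)) <= eps
```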

3 Efficient global optimization algorithm for comparison

The efficient global optimization (EGO) algorithm starts with a kriging model and iteratively adds points to update the model based on the present best sample yPBS. Following the definition in the literature (Jones et al. 1998; Viana et al. 2013), the improvement at a point x is shown in (12).

$$ I\left(\mathbf{x}\right)=\max \left({y}_{PBS}-Y\left(\mathbf{x}\right),0\right) $$
(12)

where I(x) is a random variable because Y(x) is a Gaussian random process. The expected improvement EI(x) can be expressed as in (13) (Jones et al. 1998; Viana et al. 2013):

$$ EI\left(\mathbf{x}\right)=\left({y}_{PBS}-\widehat{y}\left(\mathbf{x}\right)\right)\varPhi \left(\frac{y_{PBS}-\widehat{y}\left(\mathbf{x}\right)}{s\left(\mathbf{x}\right)}\right)+s\left(\mathbf{x}\right)\phi \left(\frac{y_{PBS}-\widehat{y}\left(\mathbf{x}\right)}{s\left(\mathbf{x}\right)}\right) $$
(13)

where Φ(⋅) and ϕ(⋅) are the standard normal cumulative distribution function and density function, respectively; yPBS denotes the present best sample, \( \widehat{y}\left(\mathbf{x}\right) \) is the kriging prediction, and s(x) is the standard deviation of the prediction.
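Evaluating (13) needs only the standard library, since Φ can be written via math.erf. A minimal sketch with a hypothetical helper name:

```python
import math

def expected_improvement(y_pbs, y_hat, s):
    """EI(x) per (13) for minimization: y_pbs is the present best
    sample, y_hat the kriging prediction, s its standard deviation."""
    if s <= 0.0:                       # no uncertainty at this point
        return max(y_pbs - y_hat, 0.0)
    u = (y_pbs - y_hat) / s
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))          # normal CDF
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)   # normal PDF
    return (y_pbs - y_hat) * Phi + s * phi
```

EI is nonnegative and grows both with the predicted improvement and with the prediction uncertainty, which is what drives EGO's exploration/exploitation balance.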

4 Tests of the approach

4.1 Math function

In this section, six benchmark math functions with 10 to 24 variables are used to test the performance of the proposed method. For each function, 100 consecutive runs are carried out, and the mean values of the obtained minimum (min), the number of iterations (nit), and the number of function evaluations (nfe) are presented. Results obtained by EGO are given for comparison. EGO produced an outlying value of 63.4 on the Powell function; such unrepresentative results are removed from the statistics. The results are shown in Table 2.

Table 2 Results in solving the math functions (mean values)
  1.

    Paviani function with n = 10 (PaF) (Adorio 2005)

$$ f\left(\mathbf{x}\right)=\sum \limits_{i=1}^n\left[{\ln}^2\left({x}_i-2.0\right)+{\ln}^2\left(10-{x}_i\right)\right]-{\left(\prod \limits_{i=1}^n{x}_i\right)}^{0.2},{x}_i\in \left[2.1,9.9\right] $$
(14)
  2.

    Dixon & Price function (DP) with n = 10 (Lee 2007)

$$ f(x)={\left({x}_1-1\right)}^2+\sum \limits_{i=2}^{10}i{\left(2{x}_i^2-{x}_{i-1}\right)}^2,{x}_i\in \left[-5,5\right] $$
(15)
  3.

    Trid function (TF) with n = 10 (Hedar 2005)

$$ f(x)=\sum \limits_{i=1}^n{\left({x}_i-1\right)}^2-\sum \limits_{i=2}^n{x}_i{x}_{i-1},{x}_i\in \left[-100,100\right] $$
(16)
  4.

    F16 function with N = 16 (Wang et al. 2004)

$$ f(x)=\sum \limits_{i=1}^{16}\sum \limits_{j=1}^{16}{a}_{ij}\left({x}_i^2+{x}_i+1\right)\left({x}_j^2+{x}_j+1\right),{x}_i,{x}_j\in \left[-5,5\right] $$
(17)

where

$$ \left[{a}_{ij}\right]=\left[\begin{array}{cccccccccccccccc}
1&0&0&1&0&0&1&1&0&0&0&0&0&0&0&1\\
0&1&1&0&0&0&1&0&0&1&0&0&0&0&0&0\\
0&0&1&0&0&0&1&0&1&1&0&0&0&1&0&0\\
0&0&0&1&0&0&1&0&0&0&1&0&0&0&1&0\\
0&0&0&0&1&1&0&0&0&1&0&1&0&0&0&1\\
0&0&0&0&0&1&0&1&0&0&0&0&0&0&1&0\\
0&0&0&0&0&0&1&0&0&0&1&0&1&0&0&0\\
0&0&0&0&0&0&0&1&0&1&0&0&0&0&1&0\\
0&0&0&0&0&0&0&0&1&0&0&1&0&0&0&1\\
0&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&1&0&1&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&1&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1
\end{array}\right] $$
  5.

    Sum Squares function (SSF) with N = 20 (Adorio 2005)

$$ f(x)=\sum \limits_{i=1}^ni{x}_i^2,{x}_i\in \left[-10,10\right] $$
(18)
  6.

    Powell function with n = 24 (PoF) (Adorio 2005)

$$ f\left(\mathbf{x}\right)=\sum \limits_{i=1}^{\frac{n}{4}}\left[{\left({x}_{4i-3}+10{x}_{4i-2}\right)}^2+5{\left({x}_{4i-1}-{x}_{4i}\right)}^2+{\left({x}_{4i-2}-2{x}_{4i-1}\right)}^4+10{\left({x}_{4i-3}-{x}_{4i}\right)}^4\right],{x}_i\in \left[-4,5\right] $$
(19)

As can be seen from Table 2, the HMDSE method outperforms EGO in search accuracy on DP and SSF, and the two methods show similar accuracy on the other four functions. As for search efficiency, the HMDSE method saves about 80% of the computation time on all functions compared with EGO when the number of iterations is considered.

As can be seen from Table 3, EGO outperforms HMDSE in solving TF, while HMDSE provides better robustness in solving DP and SSF. The small standard deviations show the high robustness of the HMDSE method.

Table 3 Standard deviation of the results

We can see from Table 4 that the proposed HMDSE method provides accurate results in solving PaF, TF, SSF, and PoF, for which about 90% of the obtained minima are very close to the analytical minimum.

Table 4 Distribution of the obtained minima by HMDSE method

4.2 Vehicle lightweight design

Heavy goods are usually placed on the rear frame of a vehicle, so in vehicle development the stiffness of the rear frame must meet requirements. According to the company standard, the maximum displacement under a 2-kN load, representing 200 kg of goods, should be less than 2.00 mm. A design optimization is therefore carried out to obtain a light structure with acceptable stiffness. The weight of the initial design is 73.7 kg, and its maximum displacement under the load is 2.05 mm. The material of the parts is steel, with an elastic modulus of 210 GPa and a density of 7.85e−6 kg/mm3. The finite element model is shown in Fig. 4. Shell elements are used for the sheet metal parts. Figure 5 shows the detailed mesh of the local parts. The stiffness is evaluated by linear static analysis in the MSC.Nastran software.

Fig. 4
figure 4

An illustration of the finite element model of the rear frame

Fig. 5
figure 5

The mesh of local view

For the design optimization, the 30 largest parts are selected and their thicknesses are defined as the design variables (the thicknesses of symmetrical parts are defined as a single variable). The optimization model is shown below:

$$ {\displaystyle \begin{array}{l}\min \kern1.25em mass(kg)\\ {}s.t.\kern1.5em dis<2.00 mm\\ {}0.6 mm\le {t}_i\le 2.5 mm,i=1,2,\cdots, 30\end{array}} $$
(20)

where mass is the objective, representing the weight of the structure; dis is the constraint; and ti are the design variables representing the part thicknesses. Some of the design variables are shown in Fig. 6. The optimization results are shown in Table 5, and plots of the deformation under the load are shown in Fig. 7.

Fig. 6
figure 6

A few design variables

Table 5 Results of the lightweight design
Fig. 7
figure 7

Deformations under the load

The optimal design is obtained in the 22nd iteration with 296 function evaluations. The weight of the structure is reduced by 10.6 kg, and the maximum displacement under the load is 1.96 mm, which meets the requirements.

4.3 Pressure vessel problem

This problem was first introduced in Wilde (1978). The model is shown in Fig. 8. The design variables are the radius (R) and length (L) of the cylindrical shell, the shell thickness (Ts), and the spherical head thickness (Th). The optimization aims to reduce the cost, and the optimization model is shown in (21). A detailed description can be found in the literature (Wang et al. 2004).

Fig. 8
figure 8

Pressure vessel

$$ {\displaystyle \begin{array}{ll}\min & F=0.6224{T}_s RL+1.7781{T}_h{R}^2+3.1661{T}_s^2L+19.84{T}_s^2R\\ {}\mathrm{s.t.}& {g}_1={T}_s-0.0193R\ge 0\\ {}&{g}_2={T}_h-0.00954R\ge 0\\ {}&{g}_3=\pi {R}^2L+\frac{4}{3}\pi {R}^3-1.296\times {10}^6\ge 0\\ {}&R\in \left[25,150\right],{T}_s\in \left[1.0,1.375\right],L\in \left[25,240\right],{T}_h\in \left[0.0625,1.0\right]\end{array}} $$
(21)

For this problem, 100 consecutive runs have also been carried out, and the results are shown in Table 6, where the abbreviations have the same meanings as in Table 2.

Table 6 Results of pressure vessel problem

The obtained mean value is very close to the analytical minimum, and an average of 16.1 iterations demonstrates the method's efficiency.

5 Conclusion

In this work, a hybrid meta-model-based design space exploration (HMDSE) method is proposed. In this method, the important region constructed from a subset of the expensive points, the remaining region, and the whole design space are searched simultaneously. In tests on six benchmark math functions, the proposed method shows excellent accuracy, efficiency, and robustness, and two engineering problems further demonstrate its performance. In addition, the proposed HMDSE method is easy to use, and few parameters need to be tuned for most problems. Overall, it has great potential for engineering use.