1 Introduction

Meta-model based global optimization methods have been widely used to solve expensive practical problems in engineering. Meta-models, also called surrogate models or approximation models, are used to replace expensive analyses in the optimization process. Over the past decades, many successful meta-models have been developed. The well-known kriging model was originally presented in mining and geostatistics applications (Matheron, 1963; Cressie, 1989; Cressie, 1993). Hardy originally developed the radial basis function (RBF) to fit irregular topographic contours of geographical data using linear combinations of radially symmetric functions based on Euclidean distance (Hardy, 1971). Polynomial response surface (PRS) models were originally developed for the analysis of physical experiments (Box and Wilson, 1951); the second-order polynomial response surface model, also called the quadratic function (QF), is commonly selected for fitting a wide range of problems. Multivariate adaptive regression splines (MARS) were originally developed for flexible regression modeling of high-dimensional data through a forward/backward iterative approach (Friedman, 1991). Support vector regression (SVR) was developed as an alternative technique for approximating complex engineering analyses (Clarke et al., 2005). To help select a proper meta-model for a given problem, the performance of these meta-models has been studied intensively.
These studies suggest that kriging is a good choice for fitting slightly nonlinear responses in high-dimensional spaces (Simpson et al., 2001b); RBF is better for fitting highly nonlinear responses (Fang et al., 2005); PRS is recommended for slightly nonlinear and noisy responses (Jin et al., 2001); MARS can give accurate results only on cheap problems because it needs a large set of sample points (Simpson et al., 2001a); and the performance of SVR needs further study, since the fundamental reason it outperformed kriging, PRS, RBF and MARS in the reported tests remains unknown (Wang and Shan, 2007).

Driven by the demand for high efficiency, many effective meta-model based iterative algorithms have been developed and are widely used in engineering. Jones et al. developed the efficient global optimization (EGO) method to overcome the computational obstacles (Jones et al., 1998). Wang et al. used RBF to find a promising region, and then used QF to detect the region containing the global optimum (Wang et al., 2004). To further improve performance, global optimization algorithms based on multiple meta-models have attracted much attention. Gu et al. employed three different meta-models simultaneously in the search process and developed the hybrid and adaptive meta-modeling method (HAM), which greatly expands the scope of application of the algorithm (Gu et al., 2012). Viana et al. proposed multiple surrogate efficient global optimization (MSEGO) (Viana et al., 2013). Chaudhuri et al. used multiple meta-models in constrained optimization (Chaudhuri et al., 2015). However, the efficiency of the algorithms mentioned above still needs improvement.

To obtain efficient algorithms, strategies that reduce the design space during the iterative process have been introduced, and many effective region reduction methods have been developed. Wang et al. used QF in the search process and gradually reduced the design space until it reached a given threshold (ARSM) (Wang et al., 2001). Shin and Grandhi reduced the design space using an interval method (Shin and Grandhi, 2001). Move-limit strategies have also been used to reduce the design space (Fadel et al., 1990; Fadel and Cimtalay, 1993; Wujek and Renaud, 1998a; Grignon and Fadel, 1994; Wujek and Renaud, 1998b). With the reduction of the design space, search efficiency is greatly improved, but the global optimum may be lost along with the removed space.

In this work, a multiple meta-model based design space differentiation (MDSD) method is proposed for expensive problems. Unlike conventional space reduction algorithms, the proposed algorithm employs QF in an important region and, simultaneously, kriging over the whole design space, which avoids discarding the global optimum along with the removed region because of coarse meta-models. Tests on a series of mathematical functions and a real expensive black-box engineering problem demonstrate its performance.

The rest of the paper is organized as follows. Section 2 introduces the basics of the meta-modeling techniques. The proposed MDSD method is described in Section 3. Section 4 presents the test results, and conclusions are drawn in Section 5.

2 Technique approach

2.1 Latin hypercube design (LHD)

The Latin hypercube design is employed in this work for its “space filling” characteristic (Mckay et al., 1979). The simplest form of LHD can be defined as follows:

$$ {S}_{i,j}=\frac{1}{m}\left({F}_{i,j}-{P}_{i,j}\right) $$
(1)

where n is the number of design variables and m is the number of levels (sample points); F is an m×n matrix whose columns are random permutations of the levels 1, 2, …, m; P is also an m×n matrix whose elements are random numbers in [0,1]. Dividing by the number of levels m scales each sample coordinate into [0,1]. The detailed algorithm of LHD can be found in the literature (Mckay et al., 1979; Fang et al., 2006).
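As an illustration, the sampling scheme of (1) can be sketched in a few lines of Python (a minimal sketch, not the authors' implementation; the function name `latin_hypercube` and the use of NumPy are assumptions):

```python
import numpy as np

def latin_hypercube(m, n, rng=None):
    """Basic LHD: m sample points in n dimensions, scaled to the unit cube.

    F holds a random permutation of the levels 1..m in each column and
    P holds uniform random offsets in [0, 1], so each coordinate of a
    point falls in a distinct 1/m stratum of its axis.
    """
    rng = np.random.default_rng(rng)
    F = np.column_stack([rng.permutation(m) + 1 for _ in range(n)])
    P = rng.random((m, n))
    return (F - P) / m

S = latin_hypercube(12, 2, rng=0)  # twelve initial points in 2-D, as in Step 1 below
```

Each column of `S` then contains exactly one point per stratum, which is the "space filling" property the method relies on.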

2.2 Quadratic function (QF)

The quadratic function was originally developed by Box and Wilson for the analysis of physical experiments (Box and Wilson, 1951). It can be expressed as follows:

$$ \widehat{y}\left(\mathbf{x}\right)={\beta}_0+\sum \limits_{i=1}^k{\beta}_i{x}_i+\sum \limits_{i=1}^k{\beta}_{ii}{x}_i^2+\sum \limits_{i=1}^{k-1}\sum \limits_{j=i+1}^k{\beta}_{ij}{x}_i{x}_j $$
(2)

where β0, βi, βii and βij are coefficients estimated by the least-squares method, and ŷ(x) is the approximation of the real function.
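To make the least-squares step concrete, the fit of (2) can be sketched as follows (hypothetical helper names; NumPy assumed):

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2 and the cross terms x_i*x_j (i < j) of Eq. (2)."""
    m, k = X.shape
    cols = [np.ones(m)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_qf(X, y):
    """Least-squares estimate of the beta coefficients."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_qf(beta, X):
    """Evaluate the fitted quadratic function at the rows of X."""
    return quadratic_design_matrix(X) @ beta
```

For k variables the model has 1 + 2k + k(k-1)/2 coefficients, so at least that many sample points are needed for a well-posed fit.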

2.3 Kriging

Kriging is an intensively studied meta-model (Krige, 1953; Matheron, 1963; Laslett, 1994; Simpson et al., 1998; Jin et al., 2001; Simpson et al., 2001b; Simpson et al., 2001c; Van Beers, 2005). Its general form is shown as follows:

$$ y\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+Z\left(\mathbf{x}\right) $$
(3)

where f(x) is a given zero-order (constant) polynomial function and Z(x) is a Gaussian process with zero mean and non-zero covariance Cov[Z(x^i), Z(x^j)]. Kriging can therefore also be expressed in the following form:

$$ y\left(\mathbf{x}\right)=\sum \limits_{i=1}^n{\beta}_i{b}_i\left(\mathbf{x}\right)+Z\left(\mathbf{x}\right) $$
(4)
$$ \operatorname{Cov}\left[Z\left({\mathbf{x}}^i\right),Z\left({\mathbf{x}}^j\right)\right]={\sigma}^2\mathbf{R}\left({\mathbf{x}}^i,{\mathbf{x}}^j\right) $$
(5)

where σ² is the process variance and R is the correlation matrix. A well-written MATLAB toolbox for kriging (DACE) can be downloaded at http://www.imm.dtu.dk/~hbn/dace.
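For intuition, (3)-(5) can be sketched as a bare-bones ordinary-kriging interpolator with a Gaussian correlation function (a simplified sketch with a fixed correlation parameter `theta`; a production code such as the DACE toolbox also estimates theta by maximum likelihood):

```python
import numpy as np

def krig_fit(X, y, theta=1.0):
    """Ordinary kriging with Gaussian correlation R_ij = exp(-theta*||x_i-x_j||^2)."""
    m = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + 1e-10 * np.eye(m)   # small jitter for stability
    one = np.ones(m)
    Ri = np.linalg.inv(R)
    mu = (one @ Ri @ y) / (one @ Ri @ one)        # generalized least-squares mean
    return X, y, theta, Ri, mu

def krig_predict(model, x):
    """BLUP prediction at a single point x."""
    X, y, theta, Ri, mu = model
    r = np.exp(-theta * ((X - x) ** 2).sum(-1))   # correlations to the samples
    return mu + r @ Ri @ (y - mu)
```

Because R is built from the sample points themselves, the predictor reproduces the training data, which is the interpolating property that makes kriging attractive for deterministic simulations.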

3 Multiple Meta-models based design space differentiation (MDSD) method

The MDSD method is proposed especially for expensive problems in practical design optimization. Among current meta-models, kriging is known for predicting highly nonlinear functions more accurately (Wang and Shan, 2007), and according to Taylor's theorem, a QF can accurately fit any smooth function in a local region. In this method, QF is used to search an important region constructed from a subset of the existing expensive points with the lowest function values, while kriging is simultaneously employed to search the whole design space to avoid local minima. The procedure of the proposed method is shown in Fig. 1.

Fig. 1
figure 1

Procedures of MDSD method

3.1 Steps of MDSD method

3.1.1 Step 1 Sample initial points

The number of initial sample points is kept small to save computation cost. Tests show that twelve initial points meet the requirements, and more initial points do not noticeably increase accuracy or efficiency (see Section 4.3). Therefore, twelve initial sample points x1, x2, …, x12 are generated and evaluated in the proposed method, yielding y1, y2, …, y12. Of course, more initial points can also be used.

3.1.2 Step 2 Identify the important region

In the proposed MDSD method, a varying number of expensive points with the lowest function values are used to construct the important region, which is searched in each iteration to improve performance. To avoid local minima, a global search over the whole design space is carried out simultaneously. The search strategy is introduced in the next section; the number of expensive points used to construct the important region is defined as follows:

$$ ne=\left\{\begin{array}{ll}\operatorname{int}\left({w}_i\, me\right), & i=1,2,\dots ,10\\ {}12, & i>10\end{array}\right.\kern2em {w}_i=1.0-0.1\left(i-1\right),\kern1em i=1,2,\dots ,10 $$
(6)

where me is the number of all expensive points; ne is the number (the integer part of wi·me) of expensive points used to construct the important region; and i is the iteration number. In the proposed method, the important region is the smallest region covering the selected expensive points. Figure 2 shows an example of a first important region.

Fig. 2
figure 2

An illustration of a first important region

According to (6), ne rises to a peak of 36 and then drops down to 12, see Table 1.

Table 1 The distribution for ne

In the proposed method, twelve new points are selected to update the meta-models in each iteration, so before the tenth iteration the important region is built from points selected over the last several iterations. This makes the important region shrink gradually and helps keep the global optimum inside it.

Figure 3 shows how the important region changes when solving the Goldstein and Price (GP) function (the equation and plot are given in Section 4). From the tenth iteration on, twelve points are used to construct the important region, which rapidly shrinks to the area around the global minimum. Figure 4 shows the search process for the F16 function (the equation and plot are given in Section 4); the obtained minimum value decreases rapidly from the tenth iteration.

Fig. 3
figure 3

The important region in solving the GP function

Fig. 4
figure 4

The process in solving F16 function

In summary, the strategy used to construct the important region makes the region shrink gradually before the tenth iteration and then contract rapidly around the global minimum.

3.1.3 Step 3 Select new expensive points

Search in whole design space

Kriging is well known for its high global accuracy and is employed in the search of the whole design space. The new expensive points are selected with the following steps:

  1. Fit kriging using the initial sample points generated in Step 1.

  2. Generate a large number of cheap points ($10^4$ or more; Wang et al., 2004) using LHD in the whole design space, obtaining \( {x}_k^1,{x}_k^2,\dots, {x}_k^{10000} \). If the problem has constraints, the constraints are fitted using both QF and kriging, and only the cheap points that pass at least one of the two meta-models' constraint checks are kept for the next step.

  3. Evaluate the cheap points with the fitted kriging meta-model, obtaining \( f\left({x}_k^1\right),f\left({x}_k^2\right),\dots, f\left({x}_k^{10000}\right) \).

  4. Select the six points with the lowest kriging-predicted values as new expensive points, \( {x}_e^1,{x}_e^2,\dots, {x}_e^6 \).

Search in the important region

The gradually reduced important region is searched using QF, because according to Taylor's theorem any smooth function can be accurately fitted by a QF in a local region. The procedure to select new expensive points in the important region is shown below:

  1. Fit a QF using the initial sample points generated in Step 1.

  2. Generate a large number of cheap points ($10^4$ or more; Wang et al., 2004) using LHD in the important region, obtaining \( {x}_q^1,{x}_q^2,\dots, {x}_q^{10000} \). If the problem has constraints, the constraints are again fitted using both QF and kriging, and only the cheap points that pass at least one of the two meta-models' constraint checks are kept for the next step.

  3. Evaluate the cheap points with the fitted QF meta-model, obtaining \( f\left({x}_q^1\right),f\left({x}_q^2\right),\dots, f\left({x}_q^{10000}\right) \).

  4. Select the six points with the lowest QF-predicted values as new expensive points, \( {x}_e^7,{x}_e^8,\dots, {x}_e^{12} \).

Six new expensive points are selected in the whole design space by kriging and six in the important region by QF, so twelve new expensive points are obtained in this step. They are combined with the previous sample points to update the meta-models until the program stops. Figure 5 shows a complete search process for the GP function (its global minimum is at (0, -1)).
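The point selection above can be sketched as a single routine called twice per iteration, once with the kriging predictor over the whole design space and once with the QF predictor over the important region (a sketch; the name `select_points` and the stratified sampler are assumptions, and constraint filtering is omitted):

```python
import numpy as np

def select_points(predict, lower, upper, n_cheap=10_000, n_pick=6, rng=None):
    """Sample n_cheap stratified (Latin hypercube) candidates in the box
    [lower, upper], then keep the n_pick with the lowest predicted values."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    # one random permutation per column plus a random offset -> LHD in [0, 1]^n
    U = (np.argsort(rng.random((n_cheap, len(lo))), axis=0)
         + rng.random((n_cheap, len(lo)))) / n_cheap
    C = lo + U * (hi - lo)
    vals = np.apply_along_axis(predict, 1, C)
    return C[np.argsort(vals)[:n_pick]]
```

For example, `select_points(kriging_predict, full_lower, full_upper)` and `select_points(qf_predict, region_lower, region_upper)` together would yield the twelve new expensive points of one iteration.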

Fig. 5
figure 5

The search process in solving the GP function

3.1.4 Step 4 Check convergence

The program stops when the convergence criteria are met. In the HAM method, the change in the mean of the five lowest function values of the expensive points is used to check convergence, with great success (Gu et al., 2012). The same strategy is employed here: the program stops when the change in the mean of the five lowest function values becomes negligible. Since the number of points used to build the important region is fixed from the tenth iteration, this convergence criterion takes effect from then on, see (7).

$$ \left|{\overline{F}}_{i+1}-{\overline{F}}_i\right|\le \varepsilon, \kern2em {\overline{F}}_i=\frac{1}{5}\sum \limits_{j=1}^5{f}_j,\kern1em i=10,11,12,\dots $$
(7)

where ε is a small user-given number (0.5 is recommended for most situations) and f_j is the jth lowest function value among the obtained expensive points.

To save computation time, the number of iterations is also used as a convergence criterion: when it reaches a pre-defined maximum, the program is forced to terminate. A maximum of 25 iterations is recommended.
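The two stopping rules can be sketched together in plain Python (`history` is assumed to hold, per iteration, the list of all expensive function values obtained so far; names are hypothetical):

```python
def mean_of_five_lowest(values):
    """F-bar in Eq. (7): mean of the five lowest expensive function values."""
    return sum(sorted(values)[:5]) / 5.0

def should_stop(history, iteration, eps=0.5, max_iter=25):
    """Stop on Eq. (7) (checked from the tenth iteration on) or on the iteration cap."""
    if iteration >= max_iter:
        return True
    if iteration < 10 or len(history) < 2:
        return False
    return abs(mean_of_five_lowest(history[-1])
               - mean_of_five_lowest(history[-2])) <= eps
```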

4 Tests

In this section, four well-known low-dimensional mathematical functions with several local minima are used to test the ability of the proposed method to escape local minima, and four high-dimensional problems (10-D to 20-D) together with a practical vehicle lightweight design problem are used to test its search efficiency and accuracy. The well-known meta-model based algorithms EGO and mode-pursuing sampling (MPS), and the previously developed HAM, are employed for comparison.

4.1 Low dimensional problems

All four selected low-dimensional problems have several local minima and are used to test the ability of the proposed method to escape them.

  1. Beak function (BF) (Younis et al., 2007)

The Beak function has an exponential form, see (8) and Fig. 6.

$$ f\left({x}_1,{x}_2\right)=3{\left(1-{x}_1\right)}^2{e}^{\left(-{x}_1^2-{\left({x}_2+1\right)}^2\right)}-10\left(\frac{x_1}{5}-{x}_1^3-{x}_2^5\right){e}^{\left(-{x}_1^2-{x}_2^2\right)}-\frac{1}{3}{e}^{\left(-{\left({x}_1+1\right)}^2-{x}_2^2\right)},\left({x}_1,{x}_2\right)\in \left[-3,3\right]\times \left[-4,4\right] $$
(8)
  2. Alpine function (AF) (Younis et al., 2007)

The Alpine function is sinusoidal, see (9) and Fig. 7.

$$ f\left({x}_1,{x}_2\right)=\sin \left({x}_1\right)\times \sin \left({x}_2\right)\sqrt{x_1{x}_2},\left({x}_1,{x}_2\right)\in \left[0,10\right]\times \left[0,10\right] $$
(9)
  3. Goldstein and Price function (GP) (Wang et al., 2004)

The GP function is highly nonlinear, see (10) and Fig. 8.

$$ f\left({x}_1,{x}_2\right)=\left[1+{\left({x}_1+{x}_2+1\right)}^2\left(19-14{x}_1+3{x}_1^2-14{x}_2+6{x}_1{x}_2+3{x}_2^2\right)\right]\left[30+{\left(2{x}_1-3{x}_2\right)}^2\left(18-32{x}_1+12{x}_1^2+48{x}_2-36{x}_1{x}_2+27{x}_2^2\right)\right],\left({x}_1,{x}_2\right)\in {\left[-2,2\right]}^2 $$
(10)
Fig. 6
figure 6

Plot of Beak function

  4. Six-hump camel-back (SC) function (Wang et al., 2004)

The SC function is also a highly nonlinear polynomial, see (11) and Fig. 9.

$$ f\left({x}_1,{x}_2\right)=4{x}_1^2-\frac{21}{10}{x}_1^4+\frac{1}{3}{x}_1^6+{x}_1{x}_2-4{x}_2^2+4{x}_2^4,\left({x}_1,{x}_2\right)\in {\left[-2,2\right]}^2 $$
(11)
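As a quick sanity check on the benchmarks, the SC function (11) is easy to code and evaluate at its known global minimizer, approximately (0.0898, -0.7126) with value about -1.0316 (minimizer and value quoted from the standard literature on this test function, not from this paper):

```python
def sc(x1, x2):
    """Six-hump camel-back function, Eq. (11)."""
    return 4 * x1**2 - 2.1 * x1**4 + x1**6 / 3 + x1 * x2 - 4 * x2**2 + 4 * x2**4
```

The function is symmetric under (x1, x2) -> (-x1, -x2), so the two global minima carry the same value.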
Fig. 7
figure 7

Plot of Alpine function

In this test, 100 runs are carried out, and the mean values of the number of function evaluations (nfe), the number of iterations (nit) and the obtained minima (min) are reported to show the search efficiency and accuracy of the proposed method, see Table 2. (For fair comparison, the HAM method employs the same stopping criteria as the MDSD method.)

Table 2 Summary of low-dimensional problems

All four low-dimensional problems have several local minima and are mainly used to test the ability of the algorithms to escape them. The results show that EGO performs best, and its optimization results are identical to the analytical ones. The developed MDSD method obtains better results than HAM and MPS, and is second only to EGO. The lower accuracy of HAM and MPS reflects their poorer ability to escape local minima, see Table 3.

Table 3 Results in handling local minima

As shown in Table 3, EGO has the best ability to escape local minima, finding the global minimum in every run. The proposed MDSD method is trapped 6 times when solving BF and once when solving AF. HAM has difficulty with BF and AF, finding the global minimum fewer than 90 times in 100 runs. MPS has the worst ability of the four methods, finding the global minimum fewer than 90 times for all four problems.

4.2 High dimensional problems

  5. Paviani function with n = 10 (PF) (Adorio, 2005)

$$ f(x)=\sum \limits_{i=1}^n\left[{\ln}^2\left({x}_i-2.0\right)+{\ln}^2\left(10-{x}_i\right)\right]-{\left(\prod \limits_{i=1}^n{x}_i\right)}^{0.2},{x}_i\in \left[2.1,9.9\right] $$
(12)
  6. Trid function with n = 10 (TF) (Hedar, 2005)

$$ f(x)=\sum \limits_{i=1}^n{\left({x}_i-1\right)}^2-\sum \limits_{i=2}^n{x}_i{x}_{i-1},{x}_i\in \left[-100,100\right] $$
(13)
  7. F16 function with n = 16 (F16) (Wang et al., 2004)

$$ f(x)=\sum \limits_{i=1}^n\sum \limits_{j=1}^n{a}_{ij}\left({x}_i^2+{x}_i+1\right)\left({x}_j^2+{x}_j+1\right),{x}_i,{x}_j\in \left[-5,5\right] $$
(14)

where

$$ \left[{a}_{ij}\right]=\left[\begin{array}{cccccccccccccccc}
1&0&0&1&0&0&1&1&0&0&0&0&0&0&0&1\\
0&1&1&0&0&0&1&0&0&1&0&0&0&0&0&0\\
0&0&1&0&0&0&1&0&1&1&0&0&0&1&0&0\\
0&0&0&1&0&0&1&0&0&0&1&0&0&0&1&0\\
0&0&0&0&1&1&0&0&0&1&0&1&0&0&0&1\\
0&0&0&0&0&1&0&1&0&0&0&0&0&0&1&0\\
0&0&0&0&0&0&1&0&0&0&1&0&1&0&0&0\\
0&0&0&0&0&0&0&1&0&1&0&0&0&0&1&0\\
0&0&0&0&0&0&0&0&1&0&0&1&0&0&0&1\\
0&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&1&0&1&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&1&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1
\end{array}\right] $$
  8. Sum Squares function with n = 20 (SSF) (Adorio, 2005)

$$ f(x)=\sum \limits_{i=1}^n{ix}_i^2,{x}_i\in \left[-10,10\right] $$
(15)

In this test, 100 runs are again carried out, and min, nit and nfe are used to assess the efficiency and accuracy of the proposed MDSD method, see Table 4.

Table 4 Summary of the results in solving high-dimensional functions

Table 4 shows that both the proposed MDSD method and EGO provide results close to the analytical minima for all four problems, but the MDSD method saves more than 50% of the computation time in terms of nit. HAM gives poor accuracy when solving TF, and MPS fails to solve F16.

To further demonstrate the performance of the MDSD method, the distribution of the obtained minima of the 100 runs is given in Table 5.

Table 5 Distribution of the obtained minima by MDSD method

As shown in Table 5, for each of the four problems more than 80% of the runs obtain accurate results, that is, minima close to the analytical minimum.

In summary, the MDSD method strikes a good balance between search efficiency and accuracy, and is a good choice for solving expensive problems.

4.3 Test of the different number of the initial points

To justify the chosen number of initial sample points, the four high-dimensional problems are also run with 30 initial points; the results are shown in Table 6.

Table 6 Results by MDSD method with different numbers of initial points

As can be seen from Table 6, the performance of the MDSD method with 30 initial points shows no noticeable improvement, so twelve initial sample points are used to start the program.

4.4 Vehicle lightweight design

The lightweight design of the rear frame involves finite element analysis and simulation, and the MSC Nastran software is used to evaluate its stiffness. Figure 10 shows the FEA model.

Fig. 8
figure 8

Plot of GP function

Fig. 9
figure 9

Plot of SC function

Fig. 10
figure 10

FEA model of the rear frame

Fig. 11
figure 11

The displacements by the load

The 43 parts of this model weigh 73.65 kg in total, and one run of the FEA model, which contains more than 160,000 elements, takes about 3 min on a Lenovo T420i with a 64-bit operating system. The optimization model is shown below:

$$ {\displaystyle \begin{array}{l}\min \kern2.75em f(x)\\ {}s.t.\kern0.75em dis<2.0,{t}_{1-18}\in \left[0.6,2.5\right]\end{array}} $$
(16)

where f(x) is the weight of the system and is the objective (unit: kg) of the optimization model; the thicknesses of the parts, t1-18 (unit: mm), are the design variables; and the displacement under the load, dis (unit: mm), is the constraint. The constraint is also handled by the meta-models, and the cheap points that pass either meta-model's constraint check are kept for selection. The results are given in Table 7.
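The constraint-handling rule described above, keep a cheap point if it passes the displacement limit under at least one meta-model, can be sketched as (hypothetical names; the two predictors stand for the fitted QF and kriging models of dis):

```python
def feasible(cheap_points, qf_dis, krig_dis, limit=2.0):
    """Keep cheap points whose predicted displacement passes under either meta-model."""
    return [x for x in cheap_points if qf_dis(x) < limit or krig_dis(x) < limit]
```

A point is thus retained even if, say, QF predicts dis = 2.3 mm but kriging predicts 1.8 mm; it is dropped only when both models predict a violation, which guards against discarding feasible designs because one coarse meta-model is locally inaccurate.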

As can be seen from Table 7, the MDSD method yields an optimized result within 15 iterations and 192 function evaluations. The weight of the whole structure is reduced by 7.7 kg and the displacement under the load is decreased to 2.00 mm, which meets the engineering requirements. Fifteen iterations and 192 function evaluations are also acceptable in engineering design. In summary, the MDSD method has great potential for solving practical problems.

Table 7 Summary of the lightweight design

5 Conclusion

In this work, a novel MDSD method is presented, which employs QF in a so-called important region and kriging over the whole design space to avoid local minima. Tests on several benchmark mathematical functions and a real engineering problem involving FEA show that the MDSD method strikes a good balance between search efficiency and accuracy and has a good ability to escape local minima. The MDSD method preserves the high efficiency of space reduction methods while avoiding their shortcoming of removing the global optimum by mistake. It is a good choice for solving expensive problems in engineering.