1 Introduction

Uncertainties are usually inevitable in practical engineering design. RBDO considers uncertainties stemming from various sources, e.g., geometrical sizes (Du and Chen 2004), material properties (Chen et al. 2014), operational uncertainties (Sues et al. 2001), environmental uncertainties (Chan et al. 2010), and numerical uncertainties (Du et al. 2005). Because it can yield a reliable design (Aoues and Chateauneuf 2008; Hack et al. 2009; Kang et al. 2012), it has received extensive attention in the past several decades.

The most challenging issue in RBDO is the prohibitive computational cost of evaluating structural reliabilities. Probabilistic constraints are usually implicit and need to be evaluated through reliability analysis, for which either simulation methods or analytical methods can be used. Simulation methods, such as Monte Carlo simulation (MCS) (Kuczera et al. 2010; Lee et al. 2008a; Lee et al. 2011; Papadrakakis and Lagaros 2002; Valdebenito and Schuëller 2011), are accurate. However, they are computationally unaffordable, especially when the probability of failure is low. Analytical methods are commonly used due to their efficiency compared to simulation methods; they are generally gradient-based, such as the worst case analysis, the moment matching method (Du and Chen 2001) and the most probable point (MPP) based method (Hasofer and Lind 1974; Lee et al. 2008b). The worst case analysis method and the moment matching method are earlier reliability analysis methods which suffer from accuracy issues when random variables have large variations.

The strategy of searching the MPP is more efficient. Reliability index approach (RIA) (Enevoldsen and Sørensen 1994; Gasser and Schuëller 1997; Grandhi and Wang 1998; Jiang et al. 2011; Nikolaidis and Burdisso 1988; Reddy et al. 1994; Tu et al. 1999; Weiji and Li 1994) and performance measure approach (PMA) (Youn et al. 2005a) are the most commonly used MPP-based analytical methods. Many other efficient reliability analysis methods have also been developed, including the advanced mean value method (AMV) (Youn et al. 2005b), the hybrid mean value method (HMV) (Youn et al. 2005b), the arc search method (Du et al. 2004) and the dimension reduction method (DRM) (Lee et al. 2012; Lee et al. 2008a).

The integration strategy of reliability analysis and optimization also has a great influence on the accuracy and efficiency of RBDO (Valdebenito and Schuëller 2010). The double-loop method has a nested structure, which is not efficient. The single-loop method (Agarwal et al. 2007; Chen et al. 1997; Kharmanda et al. 2002; Kirjner-Neto et al. 1998; Kogiso et al. 2012; Li et al. 2013; Liang et al. 2008; Shan and Wang 2008) has only one loop, obtained by replacing the reliability analysis with its Karush–Kuhn–Tucker (KKT) optimality conditions; it is very efficient for linear and mildly nonlinear problems, but is inaccurate for highly nonlinear problems. The decoupled-loop method (Chen et al. 2013a; Li et al. 2013; Cheng et al. 2006; Ching and Hsu 2008; Cho and Lee 2011; Du and Chen 2004; Royset et al. 2001; Yi et al. 2008; Chen et al. 2013b; Zou and Mahadevan 2006) performs optimization and reliability analysis sequentially and achieves a good balance between accuracy and efficiency in solving RBDO problems.

Performance functions in practical engineering design are usually implicit and very expensive to evaluate. Therefore, metamodel-based RBDO methods have been developed to reduce the computational cost. Youn and Choi (2004b) used the response surface method (RSM) for RBDO; Kim et al. (2008) proposed an RSM method with prediction interval estimation; Zhao et al. (2009) used the RSM and sequential sampling for probabilistic design; Papadrakakis and Lagaros proposed an RBDO method using neural networks and MCS (Papadopoulos et al. 2012; Papadrakakis and Lagaros 2002); Choi, Youn et al. (2001) used the moving least squares (MLS) method based on selective interaction sampling for RBDO; Ching and Li (2008) proposed a reliability analysis method using artificial neural network (ANN) based genetic algorithms; Mourelatos (2005) adopted symmetric optimal Latin hypercube sampling and the Kriging model for RBDO; Pretorius, Craig et al. (2004) used the Kriging model as both local and global approximation for continuous casting design optimization; Ju and Lee (2008) proposed an RBDO method using the moment method and the Kriging model; and Basudhar and Missoum (2009) proposed an adaptive sampling technique for RBDO based on support vector machines (SVM).

Metamodels are fitted from space-filling sampling, whose efficiency and accuracy directly depend on how the sample points are selected. Several advanced sampling strategies have been developed: Lee and Jung (2008) suggested a constraint boundary sampling (CBS) method with the Kriging model for RBDO; Bect et al. (2012) proposed the sequential design of computer experiments for the estimation of a probability of failure; Lee et al. (2011) introduced a sampling-based RBDO method using stochastic sensitivity analysis and the dynamic Kriging method; Bichon et al. (2008) proposed efficient global reliability analysis for nonlinear implicit performance functions; Kim and Lee (2010) suggested an improvement of the Kriging-based sequential approximate optimization method via extended use of design of experiments; Echard et al. (2011) proposed an active learning reliability method combining Kriging and Monte Carlo simulation; Picheny et al. (2010) presented adaptive designs of experiments for accurate approximation of a target region; Huang and Chan (2010) developed a modified efficient global optimization algorithm for maximal reliability in a probabilistic constrained space; and Zhuang and Pan (2012) introduced a new sequential sampling method for design under uncertainty.

Sampling strategies that locate sample points evenly within the whole design domain, such as Latin hypercube sampling (LHS) and full factorial design, are not efficient and are usually applied only in the initialization stage. Sequential sampling strategies are more efficient, such as constraint boundary sampling (CBS) (Lee and Jung 2008), which selects sample points sequentially on the limit state constraint boundaries within the design domain. However, the target of an optimization problem is to find the minimum/maximum objective function value while the optimal design satisfies the constraints. Therefore, there is no need to fit all parts of the limit state constraints within the design region accurately. Chen et al. (2014) proposed the local adaptive sampling (LAS) method for RBDO using the Kriging model. The LAS method uses a local sampling window strategy, which can improve the accuracy of the local design region. However, the LAS method may fall into a local optimum.

In this paper, the importance boundary sampling (IBS) approach is proposed to enhance the efficiency of Kriging-model-based RBDO. The IBS method is different from the classical importance sampling (IS) method, which is used to calculate the probability of failure P f in the reliability analysis process. The classical IS method is widely applied in theoretical research and engineering applications to overcome the difficulties of the crude MC method. Its basic idea is to carry out MC simulation with sample points having a higher rate of falling in the failure region, because only these sample points contribute to the evaluation of P f (Glynn and Iglehart 1989; Siegmund 1976; Tang et al. 2012). The proposed IBS, by contrast, is used to improve the efficiency of the sampling process in Kriging-model-based RBDO.

In the proposed IBS, the sampling and optimization processes are conducted alternately; and rather than fitting all parts of the limit state constraints precisely within the design domain, IBS first ensures the accuracy of the critical limit state constraint parts. Two importance coefficients are proposed to select sample points: the first is calculated using the objective function value, and the second is determined by the joint probability density value of the design variables. The optimization results from the previous iteration are used in the subsequent sampling process to locate the sample points more rationally. The sequential optimization and reliability assessment (SORA) method (Du and Chen 2004) is applied in this paper to solve RBDO problems.

This paper assumes that the original performance functions are implicit, such as those evaluated by expensive physical tests or complex finite element simulations. The search for the MPP is based on the cheap Kriging models; compared with calls to the implicit performance functions, the computational effort to find the MPP is negligible.

The rest of the paper is organized as follows: the commonly used methods in RBDO and Kriging model will be reviewed in Section 2; the proposed IBS method will be explained in Section 3; Section 4 then uses illustrative examples to demonstrate the application of IBS with four comparison experiments; in Section 5, the conclusion will be drawn.

2 Commonly used methods in RBDO and Kriging model

2.1 RBDO model

The RBDO problem is typically formulated as follows:

$$ \begin{array}{l} find:\boldsymbol{d},{\boldsymbol{\mu}}_{\boldsymbol{X}}\hfill \\ {} \min :f\left(\boldsymbol{d},{\boldsymbol{\mu}}_{\boldsymbol{X}},{\boldsymbol{\mu}}_{\boldsymbol{P}}\right)\hfill \\ {}s.t.:\kern0.5em Prob\left({g}_j\left(\boldsymbol{d},\boldsymbol{X},\boldsymbol{P}\right)\ge 0\right)\ge {R}_j,\ j=1,\cdots, N\hfill \\ {}{\boldsymbol{d}}^{Lower}\le \boldsymbol{d}\le {\boldsymbol{d}}^{Upper},{{\boldsymbol{\mu}}_{\boldsymbol{X}}}^{Lower}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}\le {{\boldsymbol{\mu}}_{\boldsymbol{X}}}^{Upper}\hfill \end{array} $$
(1)

where f(d, μ X , μ P ) is the objective function, Prob(g j (d, X, P) ≥ 0) is the probability function which denotes the probability of satisfying the jth performance function g j (d, X, P); N is the number of probabilistic constraints. d is the vector of the deterministic design variables; X and P are the vectors of random design variables and random parameters; μ X and μ P denote the mean vectors of X and P; R j denotes the desired design probability of satisfying the jth performance function.
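For reference, the probability term Prob(g j ≥ 0) in (1) can be estimated by crude MCS when the performance function is cheap to evaluate. The sketch below is illustrative only: the linear performance function g and the normal distributions of the random variables are assumptions, not from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x1, x2):
    # hypothetical performance function; the design is safe when g >= 0
    return x1 + x2 - 3.0

mu = np.array([2.0, 2.0])      # current design (means of X)
sigma = np.array([0.3, 0.3])   # standard deviations of X

n = 100_000
x = rng.normal(mu, sigma, size=(n, 2))
prob_safe = np.mean(g(x[:, 0], x[:, 1]) >= 0.0)  # estimate of Prob(g >= 0)
```

The estimate is then compared with the target reliability R j; for this linear g the analytic value is Φ(1/√0.18) ≈ 0.991, which shows why MCS needs many samples when the failure probability is small.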

2.2 Reliability analysis methods RIA and PMA

Reliability analysis is usually performed in the standard normal space. When the random variables X and P are not statistically independent or do not have normal distributions, the transformations u = T(X, P) and (X, P) = T −1(u) are needed, such as the Rosenblatt transformation (Rosenblatt 1952) or the Nataf transformation (Liu and Der Kiureghian 1986), where u is the vector of random variables in the standard normal space. The probabilistic constraint g(d, X, P) then becomes G(u). The vector d of deterministic design variables is omitted since it is regarded as constant during the reliability analysis process.

RIA searches for the most probable point (MPP) u RIA, the point on the limit state constraint in the standard normal space with the minimum distance to the origin. The RIA model can be expressed as follows:

$$ \begin{array}{l} find:\boldsymbol{u}\\ {} \min \kern0.1em :\kern0.1em \left\Vert \boldsymbol{u}\right\Vert \\ {}s.t.\kern0.22em G\left(\boldsymbol{u}\right)=0\end{array} $$
(2)

Here, u is the vector of random variables in the standard normal space.

PMA searches for the minimum value of the constraint function while the design satisfies the target reliability index (i.e. ‖u‖ = β t).

$$ \begin{array}{l} find:\kern0.22em \boldsymbol{u}\\ {} \min \kern0.1em :\;G\left(\boldsymbol{u}\right)\\ {}s.t.\kern0.22em \left\Vert \boldsymbol{u}\right\Vert ={\beta}^t\end{array} $$
(3)

PMA is also called the inverse reliability analysis method. The optimal design u PMA obtained from (3) is called the inverse most probable point (IMPP).
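The PMA subproblem (3) can be sketched numerically. In the example below, the limit-state function G(u) = 3 + u 1 − 2u 2 is a hypothetical linear stand-in (not from this paper), which allows the IMPP to be checked analytically: for linear G(u) = b + a·u, the minimizer on the sphere ‖u‖ = β t is u = −β t a/‖a‖.

```python
import numpy as np
from scipy.optimize import minimize

beta_t = 2.0  # target reliability index

def G(u):
    # hypothetical linear limit-state function in standard normal space
    return 3.0 + u[0] - 2.0 * u[1]

# PMA, eq. (3): minimize G(u) subject to ||u|| = beta_t
res = minimize(G, x0=np.array([1.0, 1.0]), method="SLSQP",
               constraints={"type": "eq",
                            "fun": lambda u: np.linalg.norm(u) - beta_t})
u_impp = res.x  # inverse most probable point (IMPP)
```

For this G the analytic IMPP is −β t(1, −2)/√5 ≈ (−0.894, 1.789), and the performance measure at the IMPP is 3 − 2√5.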

2.3 Sequential optimization and reliability assessment (SORA)

For simplicity, the vector of random parameters P in (1) is omitted, because the vector P is easier to evaluate than the random design variables. In this paper, the Kriging models are used to substitute the constraint functions, and the IBS method can easily be extended to account for the random parameters P.

SORA is one of the most promising decoupled approaches. It is widely used in practical applications. In SORA, deterministic optimization and reliability analysis are performed sequentially. The probabilistic optimization is converted into a deterministic optimization by shifting the limit state constraints to the feasible region as seen in (4). The shifting vector s k + 1 is obtained from reliability analysis.

$$ \begin{array}{l} find:\;\boldsymbol{d},{\boldsymbol{\mu}}_{\boldsymbol{X}}\\ {} \min :\;f\left(\boldsymbol{d},{\boldsymbol{\mu}}_{\boldsymbol{X}}\right)\\ {}s.t.:\;{g}_j\left(\boldsymbol{d},{\boldsymbol{\mu}}_{\boldsymbol{X}}-{\boldsymbol{s}}_{\boldsymbol{X}}^{k+1}\right)\ge 0,\;j=1,2,\cdots ,N\\ {}{\boldsymbol{d}}^{Lower}\le \boldsymbol{d}\le {\boldsymbol{d}}^{Upper},\;{\boldsymbol{\mu}}_{\boldsymbol{X}}^{Lower}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}^{Upper}\end{array} $$
(4)

where

$$ \begin{array}{l}{\boldsymbol{s}}_{\boldsymbol{X}}^{k+1}={\boldsymbol{\mu}}_{\boldsymbol{X}}^k-{\boldsymbol{X}}_{IMPP}^k\\ {}{\boldsymbol{X}}_{IMPP}^k={T}^{-1}\left({\boldsymbol{u}}_{IMPP}^k\right)\end{array} $$

where N is the number of probabilistic constraints, \( {\boldsymbol{\mu}}_{\boldsymbol{X}}^k \) denotes the optimal design in the kth iteration, and \( {\boldsymbol{X}}_{IMPP}^k \) is the IMPP in the original design space corresponding to the IMPP \( {\boldsymbol{u}}_{IMPP}^k \) obtained from (3).

2.4 Kriging model

The Kriging model (also called the Gaussian process model) was proposed by a South African geostatistician (Matheron 1963; Sacks et al. 1989). In the Kriging model, the response at a certain sample point not only depends on the design parameters but is also affected by the points in its neighborhood; the spatial correlation between design points is considered (Zhuang and Pan 2012).

The notations for constructing Kriging models of the constraint function g(x) are used in the description. The corresponding Kriging approximations are denoted as \( \widehat{g}\left(\boldsymbol{x}\right) \). Kriging is based on the assumption that the response function g(x) is composed of a regression model F(x)T β and stochastic process Z(x) as follows (Huang and Chan 2010; Kim et al. 2009a; Picheny et al. 2010):

$$ g\left(\boldsymbol{x}\right)=\boldsymbol{F}{\left(\boldsymbol{x}\right)}^T\boldsymbol{\beta} +Z\left(\boldsymbol{x}\right) $$
(5)

where F(x) is the trend function, which consists of a vector of regression functions; β is the trend coefficient vector; Z(x) is assumed to have zero mean and a spatial covariance function between Z(x) and Z(w) as follows:

$$ Cov\left[Z\left(\boldsymbol{x}\right),Z\left(\boldsymbol{w}\right)\right]={\boldsymbol{\sigma}}_Z^2\boldsymbol{R}\left(\boldsymbol{\theta}, \kern0.3em \boldsymbol{x},\kern0.3em \boldsymbol{w}\right) $$
(6)

where w is a point different from the point x; σ 2 Z is the process variance and R is the correlation function defined by its set of parameters θ (Echard et al. 2011; Kim et al. 2009b).

Several models exist to define the correlation function, but the squared-exponential function (also called anisotropic Gaussian model) is commonly used (Bichon et al. 2008; Rasmussen and Carl Edward 2004; Sacks et al. 1989), and is selected here for R:

$$ \boldsymbol{R}\left(\boldsymbol{\theta}, \boldsymbol{x},\boldsymbol{w}\right)={\displaystyle \prod_{i=1}^n \exp \left[-{\theta}_i{\left({x}_i-{w}_i\right)}^2\right]} $$
(7)

where x i and w i are the ith coordinates of the points x and w, n is the number of coordinates in the points x and w, and θ i is a scalar which gives the multiplicative inverse of the correlation length in the ith direction. An anisotropic correlation function is preferred here, as in reliability studies the random variables are often of different natures (Echard et al. 2011).

Given a set of sample points [x 1, ⋯, x m], \( {\boldsymbol{x}}^i\in {\mathbb{R}}^n \), and the responses \( {g}^i=g\left({\boldsymbol{x}}^i\right)\in \mathbb{R} \), where m is the number of samples, the expected value \( \widehat{g}\left(\boldsymbol{x}\right) \) and variance σ 2 g (x) of the Kriging model prediction at point x are (Bichon et al. 2008):

$$ \widehat{g}\left(\boldsymbol{x}\right)=\boldsymbol{F}{\left(\boldsymbol{x}\right)}^T\boldsymbol{\beta} +\boldsymbol{r}{\left(\boldsymbol{x}\right)}^T{\boldsymbol{R}}^{-1}\left(\boldsymbol{g}-\boldsymbol{H}\boldsymbol{\beta } \right) $$
(8)
$$ {\boldsymbol{\sigma}}_g^2\left(\boldsymbol{x}\right)={\boldsymbol{\sigma}}_Z^2-\left[\boldsymbol{F}{\left(\boldsymbol{x}\right)}^T\kern0.5em \boldsymbol{r}{\left(\boldsymbol{x}\right)}^T\right]{\left[\begin{array}{cc}\boldsymbol{0}& {\boldsymbol{H}}^T\\ {}\boldsymbol{H}& \boldsymbol{R}\end{array}\right]}^{-1}\left[\begin{array}{c}\boldsymbol{F}\left(\boldsymbol{x}\right)\\ {}\boldsymbol{r}\left(\boldsymbol{x}\right)\end{array}\right] $$
(9)

where r(x) is a vector containing the covariance between x and each of the m training points (defined by (6)), R is an m × m matrix containing the correlations between each pair of training points, g is the vector of response outputs at each of the training points, and H is an m × q matrix with rows F(x i)T (the trend function for the ith training point containing q terms; for a constant trend, q = 1). This form of the variance accounts for the uncertainty in the trend coefficients β, but assumes that the parameters governing the covariance function (σ 2 Z and θ) have known values (Bichon et al. 2008).

The trend coefficient vector β is estimated according to the reference (Lophaven et al. 2002) by:

$$ \boldsymbol{\beta} ={\left({\boldsymbol{H}}^T{\boldsymbol{R}}^{-1}\boldsymbol{H}\right)}^{-1}{\boldsymbol{H}}^T{\boldsymbol{R}}^{-1}\boldsymbol{g} $$
(10)

The parameters σ 2 Z and θ are determined through maximum likelihood estimation. This involves taking the log of the probability of observing the response values g given the covariance matrix R, which can be written as (Bichon et al. 2008; Jones et al. 1998; Lophaven et al. 2002; Sacks et al. 1989):

$$ \log \left[ Prob\left(\boldsymbol{g}\left|\boldsymbol{R}\right.\right)\right]=-\frac{1}{m} \log \left|\boldsymbol{R}\right|- \log \left({\widehat{\boldsymbol{\sigma}}}_{\mathrm{Z}}^2\right) $$
(11)

where |R| denotes the determinant of R, and \( {\widehat{\boldsymbol{\sigma}}}_{\mathrm{Z}}^2 \) is the optimal value of the variance given an estimate of θ, defined by

$$ {\widehat{\boldsymbol{\sigma}}}_{\mathrm{Z}}^2=\frac{1}{m}{\left(\boldsymbol{g}-\boldsymbol{H}\boldsymbol{\beta } \right)}^T{\boldsymbol{R}}^{-1}\left(\boldsymbol{g}-\boldsymbol{H}\boldsymbol{\beta } \right) $$
(12)

Maximizing (11) gives the maximum likelihood estimate of θ, which in turn defines \( {\widehat{\boldsymbol{\sigma}}}_{\mathrm{Z}}^2 \) (Bichon et al. 2008).

The Kriging model is an exact interpolation method: the prediction at a point x i of the design of experiments is exact, i.e. \( \widehat{g}\left({\boldsymbol{x}}^i\right)=g\left({\boldsymbol{x}}^i\right) \). Therefore, the Kriging variance is zero at these points and becomes large in unexplored areas. This makes it possible to quantify the uncertainty of local predictions with an easily computable analytical function. These properties are attractive in reliability studies and metamodeling, as the Kriging variance is a good index for improving a design of experiments (Echard et al. 2011).
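A minimal ordinary-Kriging sketch of (7)–(12) with a constant trend (q = 1) is given below. The 1-D training data and the fixed θ are assumptions for illustration (θ is fixed rather than tuned by maximizing (11)), and the variance line uses the standard algebraic form equivalent to the block-matrix expression (9). At the training points the prediction is exact and the variance is numerically zero, as stated above.

```python
import numpy as np

def corr(X1, X2, theta):
    # anisotropic Gaussian correlation, eq. (7)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=-1))

def fit_predict(X, g, Xnew, theta):
    m = len(X)
    H = np.ones((m, 1))                        # constant trend, q = 1
    R = corr(X, X, theta) + 1e-10 * np.eye(m)  # tiny jitter for stability
    Ri = np.linalg.inv(R)
    A = H.T @ Ri @ H
    beta = np.linalg.solve(A, H.T @ Ri @ g)    # eq. (10)
    resid = g - H @ beta
    sigma2 = (resid @ Ri @ resid) / m          # eq. (12)
    r = corr(Xnew, X, theta)
    mean = np.ones(len(Xnew)) * beta[0] + r @ Ri @ resid  # eq. (8)
    var = np.empty(len(Xnew))  # eq. (9), equivalent scalar form
    for i, ri in enumerate(r):
        u = H.T @ Ri @ ri - 1.0
        var[i] = sigma2 * (1.0 - ri @ Ri @ ri + u @ np.linalg.solve(A, u))
    return mean, var

X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)  # hypothetical 1-D samples
g = np.sin(4.0 * X[:, 0])
theta = np.array([10.0])                     # fixed, not MLE-tuned
mean, var = fit_predict(X, g, X, theta)      # predict at the training points
```

Predicting back at X reproduces g and gives near-zero variance, which is the interpolation property exploited by the sampling criteria in Section 3.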

3 Importance boundary sampling method

In this paper, the proposed IBS uses importance coefficients to identify the critical parts of the limit state constraint boundaries. The objective function values and the joint probability density values of the design variables in the standard normal space are applied to calculate the importance coefficients. In addition, the sampling and optimization processes are conducted alternately in the proposed IBS method.

The proposed sampling criteria in IBS will be discussed in Section 3.1; then the procedures and flowcharts of IBS will be introduced in Section 3.2.

3.1 Importance boundary sampling (IBS) criteria

The proposed IBS method mainly selects sample points on the important parts of the limit state constraints, which have relatively small objective function values or short distances to the current design point in RBDO. Two sampling criteria are therefore proposed in IBS; they work together to improve the efficiency of Kriging-model-based RBDO. The constraint boundary sampling criterion is discussed first, and then the IBS criteria for RBDO are introduced.

3.2 Constraint boundary sampling (CBS) criterion

In the RBDO processes, the design points and MPPs are usually located on the limit state constraints. Therefore, the accuracy of the limit state constraint boundaries should first be ensured.

The CBS criterion was proposed by Lee and Jung (2008). When there are sufficient sample data to construct the Kriging model, the prediction of g(x) approaches a normal distribution with mean \( \widehat{g}\left(\boldsymbol{x}\right) \) and standard deviation σ g (x).

If the limit state constraint is defined as g(x) = 0, then the probability of the Kriging prediction satisfying the constraint g(x) ≥ 0 is as follows:

$$ Prob\left(\boldsymbol{x}\right)=1-\varPhi \left(\frac{0-\widehat{g}\left(\boldsymbol{x}\right)}{\sigma_g\left(\boldsymbol{x}\right)}\right)=\varPhi \left(\frac{\widehat{g}\left(\boldsymbol{x}\right)}{\sigma_g\left(\boldsymbol{x}\right)}\right) $$
(13)

where Φ is the cumulative distribution function of the standard normal distribution.

Then the standard normal probability density function \( \phi \left(\widehat{g}\left(\boldsymbol{x}\right)/{\sigma}_g\left(\boldsymbol{x}\right)\right) \) can be used to measure the closeness of the Kriging prediction \( \widehat{g}\left(\boldsymbol{x}\right) \) to the limit state constraint g(x) = 0, as seen in Fig. 1. The CBS criterion is defined as follows:

Fig. 1

Feasible probability

$$ \mathrm{C}BS=\left\{\begin{array}{cc}\hfill {\displaystyle \sum_{j=1}^N\phi \left(\frac{{\widehat{g}}_j\left(\boldsymbol{x}\right)}{\sigma_{g_j}\left(\boldsymbol{x}\right)}\right).D\left(\boldsymbol{x}\right)}\hfill & \hfill if\kern0.5em {\widehat{g}}_j\left(\boldsymbol{x}\right)\ge 0,\kern0.5em \forall j\hfill \\ {}\hfill 0\hfill & \hfill otherwise\hfill \end{array}\right. $$
(14)

where N is the number of constraints and D(x) is the minimal distance from the candidate sample point x to the existing sample points, normalized by the maximum pairwise distance between the existing sample points as defined in (15); m is the number of existing sample points.

$$ D\left(\boldsymbol{x}\right)=\frac{ \min \left\Vert \boldsymbol{x}-{\boldsymbol{x}}^i\right\Vert }{ \max \left\Vert {\boldsymbol{x}}^j-{\boldsymbol{x}}^i\right\Vert },\kern1em i,j=1,\cdots, m $$
(15)

When all the constraints are satisfied, the CBS criterion will have a large value at \( {\widehat{g}}_j\left(\boldsymbol{x}\right)/{\sigma}_{g_j}\left(\boldsymbol{x}\right)\approx 0 \), where the sample point x is close to the limit state constraints \( \left({\widehat{g}}_j\left(\boldsymbol{x}\right)\approx 0\right) \) or the Kriging prediction has large variance \( {\sigma}_{g_j}\left(\boldsymbol{x}\right) \). Therefore, CBS can locate sample points efficiently along the limit state constraint boundaries.
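The CBS criterion (14)–(15) can be sketched for a single constraint (N = 1) as below; the toy predictor ĝ and the constant σ g stand in for a fitted Kriging model and are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def min_dist_ratio(x, X):
    # D(x), eq. (15): min distance from x to the existing samples,
    # normalized by the maximum pairwise distance among them
    dmin = np.min(np.linalg.norm(X - x, axis=1))
    dmax = np.max(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    return dmin / dmax

def cbs(x, X, g_hat, sigma_g):
    # CBS criterion, eq. (14), single constraint (N = 1)
    if g_hat(x) < 0.0:   # outside the predicted feasible region
        return 0.0
    return norm.pdf(g_hat(x) / sigma_g(x)) * min_dist_ratio(x, X)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # existing samples
g_hat = lambda x: x[0] + x[1] - 1.0                 # toy Kriging mean
sigma_g = lambda x: 0.5                             # toy Kriging std. dev.
val_boundary = cbs(np.array([0.7, 0.3]), X, g_hat, sigma_g)    # on g_hat = 0
val_infeasible = cbs(np.array([0.1, 0.1]), X, g_hat, sigma_g)  # g_hat < 0
```

The candidate on the predicted boundary gets the peak density ϕ(0) weighted by D(x), while the infeasible candidate scores zero.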

In both CBS and IBS, this paper constrains samples to lie in the predicted feasible region, as seen in (14). This is because when the design space is very large and the constraint boundaries are long while the feasible region is relatively small, the constraint boundaries located outside the feasible region do not need to be refined accurately.

3.3 IBS criterion 1 for RBDO

Assume that in RBDO, the target is to find the minimum objective function value. Then the design regions that have relatively smaller objective function values will be more important than other regions, and the limit state constraint boundaries within these design regions will be more critical. Therefore, these critical limit state constraint boundaries should first be fitted accurately.

The proposed IBS uses the objective function value to evaluate the importance coefficient for the limit state constraint boundaries as follows:

$$ {I}_1\left(\boldsymbol{x}\right)=\frac{ \max {\left\{f\left({\boldsymbol{x}}^i\right)\right\}}_{i=1}^m-f\left(\boldsymbol{x}\right)}{ \max {\left\{f\left({\boldsymbol{x}}^i\right)\right\}}_{i=1}^m- \min {\left\{f\left({\boldsymbol{x}}^i\right)\right\}}_{i=1}^m} $$
(16)

where f(x) is the objective function, x i denotes the ith sample point, m is the number of existing sample points.

From (16), it can be seen that the smaller the objective function value f(x) at the sample point x is, the higher the importance coefficient I 1(x) will be; and if the value f(x) is in the range [min(f(x i)), max(f(x j))], i, j = 1, ⋯, m, then the importance coefficient I 1(x) will be in the range [0, 1].

The IBS criterion 1 for RBDO can be expressed as follows:

$$ IB{S}_1\left({I}_1\right)=\left\{\begin{array}{cc}\hfill {\displaystyle \sum_{j=1}^N\phi \left(\frac{{\widehat{g}}_j\left(\boldsymbol{x}\right)}{\sigma_{g_j}\left(\boldsymbol{x}\right)}\right)\cdot D\left(\boldsymbol{x}\right)\cdot {I}_1\left(\boldsymbol{x}\right)}\hfill & \hfill if\kern0.5em {\widehat{g}}_j\left(\boldsymbol{x}\right)\ge 0,\kern0.5em \forall j\hfill \\ {}\hfill 0\hfill & \hfill otherwise\hfill \end{array}\right. $$
(17)

where I 1(x) is the importance coefficient obtained from (16).

The IBS 1(I 1) in (17) combines the probability density value of the Kriging predictor \( {\widehat{g}}_j\left(\boldsymbol{x}\right) \), the minimum distance from the candidate sample point to the existing sample points, and the importance coefficient I 1(x). If the objective function f(x) has a small value at the sample point x, then the value of the IBS criterion 1, IBS 1(I 1), will be enlarged, and this point x will be more likely to be chosen as the next new sample point.

However, because the importance coefficient I 1(x) in (16) has a linear relation with the objective function value, its effect on the value IBS 1(I 1) is limited. In other words, the importance coefficient I 1(x) may not clearly identify the critical parts of the limit state constraint boundaries. In this paper, this problem will be overcome through using the importance coefficient I 2(x) as follows:

$$ {I}_2\left(\boldsymbol{x}\right)={e}^{\tau \bullet {I}_1\left(\boldsymbol{x}\right)} $$
(18)

where τ is a constant and e is the base of the natural exponential.

In (18), the importance coefficient I 2(x) has an exponential relation with the objective function value. When using I 2(x), the critical parts of the design region which have relatively small objective function values will be clearly identified, and the other parts will be filtered out. As mentioned above, I 1(x) is in the range [0, 1], so the value of I 2(x) will be in the interval [1, e τ]; the constant τ in this paper is chosen as 2 ~ 4, which gives I 2(x) enough effect on the sampling criterion. Different values of τ were tried to study the sensitivity of the results to its variation, and it was found that the value of the parameter τ had a very weak effect on the results of IBS 1. Unless the value of τ was too small (e.g., 0.01) or too large (e.g., 100), IBS 1(I 2) was able to find the new sample point. Because this paper uses the nonlinear relationship between the objective function value and the importance coefficient in (18), the value of the parameter τ is not very critical in the sampling criterion of IBS 1(I 2).
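The contrast between the linear coefficient I 1 in (16) and the exponential coefficient I 2 in (18) can be seen numerically; the objective values below are arbitrary illustrative numbers, not from the paper.

```python
import numpy as np

def importance_1(f_x, f_samples):
    # I1, eq. (16): linear scaling of the objective into [0, 1]
    fmax, fmin = f_samples.max(), f_samples.min()
    return (fmax - f_x) / (fmax - fmin)

def importance_2(f_x, f_samples, tau=3.0):
    # I2, eq. (18): exponential sharpening, tau typically in [2, 4]
    return np.exp(tau * importance_1(f_x, f_samples))

f_samples = np.array([1.0, 2.5, 4.0, 7.0, 10.0])  # illustrative objectives
i1 = importance_1(f_samples, f_samples)  # spans [0, 1] linearly
i2 = importance_2(f_samples, f_samples)  # spans [1, e^tau]
```

With τ = 3, the best point is weighted e³ ≈ 20 times more than the worst, whereas I 1 gives only the linear spread [0, 1]; this is the filtering effect shown in Fig. 3.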

The IBS criterion 1 using I 2(x) can be expressed as follows:

$$ IB{S}_1\left({I}_2\right)=\left\{\begin{array}{cc}\hfill {\displaystyle \sum_{j=1}^N\phi \left(\frac{{\widehat{g}}_j\left(\boldsymbol{x}\right)}{\sigma_{g_j}\left(\boldsymbol{x}\right)}\right)\cdot D\left(\boldsymbol{x}\right)\cdot {I}_2\left(\boldsymbol{x}\right)}\hfill & \hfill if\kern0.5em {\widehat{g}}_j\left(\boldsymbol{x}\right)\ge 0,\kern0.5em \forall j\hfill \\ {}\hfill 0\hfill & \hfill otherwise\hfill \end{array}\right. $$
(19)

To illustrate the behavior of IBS criterion 1, a mathematical problem (Lee and Jung 2008) is tested as follows:

$$ \begin{array}{l} \min :\kern0.4em f\left(\boldsymbol{x}\right)={\left({x}_1\hbox{-} 2\right)}^2\hbox{-} {\left({x}_2\hbox{-} 1\right)}^2\\ {}s.t.\kern1.5em g\left(\boldsymbol{x}\right)\ge 0\\ {}\kern2.8em g\left(\boldsymbol{x}\right)=-2{x}_1^2+1.05{x}_1^4-\frac{1}{6}{x}_1^6+{x}_1{x}_2\hbox{-} {x}_2^2+0.5\\ {}\kern2.8em \hbox{-} 2.5\le {x}_1\le 2.5,\kern1.7em \hbox{-} 1.5\le {x}_2\le 1.5\end{array} $$
(20)

The constraint g(x) is the three-hump-camelback function shown in Fig. 2 (a). The dotted lines are the limit state constraint g(x) = 0, and there are three disconnected feasible regions g(x) > 0. The objective function value f(x) decreases up and to the right within the design region, so the optimal design point lies in the upper right feasible region.

Fig. 2

Mesh and contour plot of the three-hump-camelback function and the CBS criterion

Figure 2 (b) shows the values of the CBS criterion based on the Kriging model \( \widehat{g}\left(\boldsymbol{x}\right) \), which is constructed using 50 uniform sample points. The CBS values in all three feasible regions are noticeably large, and the maximum value CBS max is located in the middle feasible region, which means this point will be chosen as a new sample in the CBS method. However, the target of this example is to find the minimum objective function value, so the limit state constraint boundary around the upper right feasible region should be fitted accurately first.

Figure 3 shows the mesh plot of the IBS criterion 1. When using the importance coefficient I 1(x) as seen in Fig. 3 (a), the value of IBS 1(I 1) decreases down and to the left in the design region, and the maximum value IBS 1max is located in the upper right feasible region. However, the IBS 1(I 1) value in the middle feasible region is still very large, which shows that the effect of the importance coefficient I 1(x) on the IBS 1(I 1) is limited. When using the importance coefficient I 2(x) as seen in Fig. 3 (b), the IBS values IBS 1(I 2) in the upper right feasible region are extremely high; but in the other two feasible regions, the IBS values are very small. Therefore, these unimportant feasible regions are filtered, and the sample points will be mainly selected on the limit state constraint boundary around the upper right feasible region which is more critical in the optimization process.

Fig. 3

Mesh plot of the IBS criterion 1: a IBS criterion 1 using I 1(x); b IBS criterion 1 using I 2(x)

3.4 IBS criterion 2 for RBDO

When using analytical method in RBDO, the target of reliability analysis will be to find the minimum distance from the current optimal design to the limit state constraints in the standard normal space, and this point is called MPP as described in (2). Therefore, the region in the vicinity of the current design point is more critical, and the limit state constraint boundaries within this region should first be fitted accurately (Wang et al. 2005). In this paper, the joint probability density value of the random variables is applied to calculate the importance coefficient.

If the random variables have normal distributions: x ~ N(μ X , σ X ), then the importance coefficient is defined as follows:

$$ {I}_3\left(\boldsymbol{x}\right)=\phi \left(\frac{\boldsymbol{x}\hbox{-} {\boldsymbol{\mu}}_{\boldsymbol{X}}}{{\boldsymbol{\sigma}}_{\boldsymbol{X}}}\right) $$
(21)

where “ϕ” is the joint probability density function of the standard normal distribution. Then the IBS criterion 2 for RBDO can be expressed as follows:

$$ IB{S}_2\left(\boldsymbol{x}\right)=\left\{\begin{array}{cc}\hfill {\displaystyle \sum_{j=1}^N\phi \left(\frac{{\widehat{g}}_j\left(\boldsymbol{x}\right)}{\sigma_{g_j}\left(\boldsymbol{x}\right)}\right)\cdot D\left(\boldsymbol{x}\right)\cdot {I}_3\left(\boldsymbol{x}\right)}\hfill & \hfill if\kern0.5em {\widehat{g}}_j\left(\boldsymbol{x}\right)\ge 0,\kern0.5em \forall j\hfill \\ {}\hfill 0\hfill & \hfill otherwise\hfill \end{array}\right. $$
(22)

If the random variables x do not have normal distributions, then the transformations u = T(x) and x = T −1(u) are needed, such as the Rosenblatt transformation (Rosenblatt 1952) or the Nataf transformation (Liu and Der Kiureghian 1986). The transformed random variables u have normal distributions u ~ N(μ U , σ U ), and the importance coefficient can then be calculated as follows:

$$ {I}_3\left(\boldsymbol{x}\right)={I}_3\left({T}^{-1}\left(\boldsymbol{u}\right)\right)=\phi \left(\frac{\boldsymbol{u}-{\boldsymbol{\mu}}_{\boldsymbol{U}}}{{\boldsymbol{\sigma}}_{\boldsymbol{U}}}\right) $$
(23)

As seen in Fig. 4, μ X is the current optimal design, x MPP denotes the MPP, and x 1 – x 4 are four potential sample points on the limit state constraint boundaries. In order to obtain the MPP x MPP accurately, the limit state constraint boundary \( \widehat{g}\left(\boldsymbol{x}\right)=0 \) in the vicinity of the MPP should be fitted precisely first. However, the MPP x MPP is unknown until the reliability analysis is finished. This issue can be overcome by using the importance coefficient I 3(x), which has a large value when x is close to the current optimal design μ X. In Fig. 4, the potential sample points x 1 and x 2 will be selected first, while the sample points x 3 and x 4, which are far away from the current optimal design μ X, will not be chosen. In addition, by using the minimum distance D(x) in (15), the sample points will not be too close to each other. Therefore, the proposed importance coefficient I 3(x) in (23) makes the selection of the sample points more rational for RBDO.

Fig. 4

Importance coefficient I 3(x) for RBDO
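
The importance coefficient (21) and the IBS criterion 2 (22) can be sketched numerically as follows. This is a minimal illustration, assuming the Kriging mean ĝ j and standard deviation σ gj at x are supplied by the surrogate; the function names and the scalar minimum-distance input D(x) are illustrative, not the paper's implementation.

```python
import numpy as np

def std_normal_pdf(z):
    """Standard normal density, applied elementwise."""
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def importance_I3(x, mu_X, sigma_X):
    """Joint standard-normal density of the standardized point, as in (21)."""
    z = (np.asarray(x, float) - np.asarray(mu_X, float)) / np.asarray(sigma_X, float)
    return float(np.prod(std_normal_pdf(z)))

def ibs2(x, g_hat, sigma_g, D_x, mu_X, sigma_X):
    """IBS criterion 2 as in (22).

    g_hat, sigma_g : Kriging predictions (mean, std) for the N constraints at x.
    D_x            : minimum distance D(x) from x to the existing sample points.
    """
    g_hat = np.asarray(g_hat, float)
    sigma_g = np.asarray(sigma_g, float)
    if np.any(g_hat < 0.0):          # (22) assigns 0 unless g_hat_j(x) >= 0 for all j
        return 0.0
    score = np.sum(std_normal_pdf(g_hat / sigma_g))
    return float(score * D_x * importance_I3(x, mu_X, sigma_X))
```

The I 3(x) factor peaks at the current optimal design μ X, so candidate points such as x 1 and x 2 in Fig. 4 score higher than distant points such as x 3 and x 4.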

To illustrate the behavior of IBS criterion 2, a mathematical problem is tested as follows.

$$ \begin{array}{l}g\left(\boldsymbol{X}\right)=-{X}_1 \sin \left(4{X}_1\right)-1.1{X}_2 \sin \left(2{X}_2\right)\\ {}{X}_i\sim N\left({\mu}_i,{0.1}^2\right),\kern1em i=1,2\\ {}\boldsymbol{\mu} ={\left[3.00,\kern0.2em 2.50\right]}^{\mathrm{T}}\end{array} $$
(24)
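
The test function (24) is straightforward to evaluate; the sketch below also runs a crude Monte Carlo check at the mean design. The sign convention (g < 0 treated as failure) and the sample size are assumptions for illustration only.

```python
import numpy as np

def g(x1, x2):
    """Performance function of (24); works elementwise on arrays."""
    return -x1 * np.sin(4.0 * x1) - 1.1 * x2 * np.sin(2.0 * x2)

mu = np.array([3.00, 2.50])   # mean of the random variables

# Crude Monte Carlo estimate of prob[g(X) < 0] at the mean design
# (illustrative sample size; the paper uses ten-million-sample MCS).
rng = np.random.default_rng(0)
X = mu + 0.1 * rng.standard_normal((100_000, 2))
pf = np.mean(g(X[:, 0], X[:, 1]) < 0.0)
```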

In Fig. 5, nine samples are selected as the initial sampling, denoted by triangles; 27 sequential samples are then selected using the IBS criterion 2, denoted by forks, and the solid line circles are the contour lines of the IBS criterion 2. It can be seen that the samples close to the current design are selected first. Due to the limited accuracy of the Kriging models in the early stage of the sequential sampling process, the first four samples (1–4) are not selected strictly according to their distances to the current design; however, the remaining 23 samples are chosen according to their contributions to refining the critical MPP search region. Therefore, the IBS criterion 2 is very effective in the reliability analysis process.

Fig. 5

Sampling with the IBS criterion 2

3.5 Procedures and flowchart of the IBS method

The flowchart of the IBS method is shown in Fig. 6, and the procedures are described as follows:

Fig. 6

Flowchart of the IBS method

  1)

    Initialize the design variables (d 0, μ 0) for RBDO.

  2)

    Initialize the sample point set s 0 using the Latin Hypercube sampling method. The number of samples required to define a quadratic polynomial, (n + 1)(n + 2)/2, is a convenient rule of thumb (Bichon et al. 2008), so in this paper the initial sample size is chosen slightly larger than (n + 1)(n + 2)/2. Then evaluate the responses of the constraint functions g j (x), j = 1, ⋯, N. For simplicity, the vector of design variables (d, μ) is denoted by x.

  3)

    Construct the Kriging models \( {\widehat{g}}_j\left(\boldsymbol{x}\right),\kern0.4em j=1,\cdots, N \) for the constraint functions using the above sample points s 0 and the corresponding responses.

  4)

    In the kth iteration of the RBDO, select new sample points x k using the sampling criterion IBS 1 in (19), and then evaluate the responses of the constraint functions g j (x k), j = 1, ⋯, N at these points. Add the new sample points x k to the sample set s k, and then reconstruct the Kriging models \( {\widehat{g}}_j\left(\boldsymbol{x}\right),\kern0.4em j=1,\cdots, N \) for the constraint functions based on all the existing sample points s k and the corresponding responses.

  5)

    Conduct optimization using the above constructed Kriging models \( {\widehat{g}}_j\left(\boldsymbol{x}\right),\kern0.4em j=1,\cdots, N \).

  6)

    Select new sample points x k using the sampling criterion IBS 2 in (22), and then evaluate the responses of the constraint functions g j (x k), j = 1, ⋯, N at these points. Add the new sample points x k to the sample set s k, and then reconstruct the Kriging models \( {\widehat{g}}_j\left(\boldsymbol{x}\right),\kern0.4em j=1,\cdots, N \) for the constraint functions based on all the existing sample points s k and the corresponding responses.

  7)

    Conduct reliability analysis using the above constructed Kriging models \( {\widehat{g}}_j\left(\boldsymbol{x}\right),\kern0.4em j=1,\cdots, N \). This paper assumes that the original performance functions are implicit, e.g., expensive physical tests or complex finite element simulations. The search for the MPP is based on the cheap Kriging models, so compared with the implicit performance function calls, the computational effort to find the MPP is very small.

  8)

    If the design (d k + 1, μ k + 1) converges, the whole process stops and (d ∗, μ ∗) = (d k + 1, μ k + 1); otherwise, set k = k + 1 and go back to step 4).
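
The initial sampling of step 2) can be sketched as below with a numpy-only Latin Hypercube sampler; the bounds and the sample size, chosen slightly larger than (n + 1)(n + 2)/2 for n = 2, are illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, lower, upper, rng):
    """One random point per stratum in each dimension, scaled to the bounds."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n_dim = lower.size
    # Stratified unit-cube sample: a random permutation of strata per dimension.
    strata = np.stack([rng.permutation(n_samples) for _ in range(n_dim)], axis=1)
    unit = (strata + rng.random((n_samples, n_dim))) / n_samples
    return lower + unit * (upper - lower)

rng = np.random.default_rng(42)
n = 2                                   # number of design variables
n_init = (n + 1) * (n + 2) // 2 + 3     # "a little larger" than the rule of thumb
s0 = latin_hypercube(n_init, [0.0, 0.0], [10.0, 10.0], rng)  # initial set s0
```

Each column of s0 contains exactly one point per stratum, which is the defining property of Latin Hypercube sampling.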

4 Application

In order to verify the accuracy and efficiency of the proposed IBS method, five examples are tested and compared with the analytical method, the Latin Hypercube sampling (LHS) method and the constraint boundary sampling (CBS) method. SORA is chosen as the RBDO method; it directly calls the true performance functions and can yield a relatively accurate design. The optimal results μ ∗ obtained from the Kriging-model-based RBDO methods are assessed through the relative error ‖μ ∗ − μ A ‖/‖UB − LB‖, where μ A is the optimal design obtained from SORA, and LB and UB are the lower and upper bounds of the design variables respectively. The results of the mathematical examples are also evaluated by MCS with a sample size of ten million. These examples are tested in Matlab, and the function “fmincon” is used as the optimizer.
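
The relative-error metric ‖μ ∗ − μ A ‖/‖UB − LB‖ can be computed as follows; the numerical values below are illustrative and not taken from the paper's tables.

```python
import numpy as np

def relative_error(mu_opt, mu_A, lb, ub):
    """Relative error ||mu_opt - mu_A|| / ||UB - LB|| between two optimal designs."""
    mu_opt, mu_A = np.asarray(mu_opt, float), np.asarray(mu_A, float)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return float(np.linalg.norm(mu_opt - mu_A) / np.linalg.norm(ub - lb))

# Illustrative values only:
err = relative_error([5.85, 3.42], [5.8545, 3.4236], [0.0, 0.0], [10.0, 10.0])
```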

4.1 Numerical example 1

The model of this example is the same as (20); however, its design variables become random variables x i  ~ N (μ i , 0.05), i = 1,2, β = 2.0, and the initial design point is located at (0.5, 0.5). This example is used to compare the behaviors of the proposed IBS method and the local adaptive sampling (LAS) method (Chen et al. 2014).

The comparison results are shown in Fig. 7. After initialization with 18 uniform sample points, sequential sampling is conducted using the LAS and IBS methods separately. The LAS method chooses 15 sample points; as seen in Fig. 7 (a), most of these samples are located in the middle feasible region, and only a local optimum is obtained. The IBS method selects 19 sample points, and owing to the advantage of IBS 1(I 2), most of the sample points are located in the upper right feasible region, where the optimum is located. Therefore, the proposed IBS method is more applicable than the LAS method.

Fig. 7

Limit state constraints and sample points in LAS and IBS for example 1 a LAS method b IBS method

Here, lines are the predicted boundaries by Kriging; the shaded area is the feasible region; triangles and forks denote the sample points from the LAS and IBS methods respectively.

4.2 Numerical example 2

This is a non-linear mathematical problem (Youn and Choi 2004). There are two random design variables X 1, X 2 and three probabilistic constraints g 1, g 2, g 3. No deterministic design variables or random parameters exist.

$$ \begin{array}{c}\hfill find\kern0.5em \boldsymbol{\mu} ={\left[{\mu}_1,{\mu}_2\right]}^{\mathrm{T}}\hfill \\ {}\hfill \min \kern0.4em f\left(\boldsymbol{\mu} \right)=10-{\mu}_1+{\mu}_2\hfill \\ {}\hfill s.t.\kern0.6em prob\kern0.3em \left[{g}_j\left(\boldsymbol{X}\right)>0\right]\le \Phi \left(-{\beta}_j^t\right)\kern0.8em j=1,2,3\hfill \\ {}\hfill {g}_1\left(\boldsymbol{X}\right)=1-\frac{X_1^2{X}_2}{20}\hfill \\ {}\hfill {g}_2\left(\boldsymbol{X}\right)=1-\frac{{\left({X}_1+{X}_2-5\right)}^2}{30}-\frac{{\left({X}_1-{X}_2-12\right)}^2}{120}\hfill \\ {}\hfill {g}_3\left(\boldsymbol{X}\right)=1-\frac{80}{\left({X}_1^2+8{X}_2+5\right)}\hfill \\ {}\hfill 0.0\le {\mu}_i\le 10.0,\kern0.6em {X}_i\sim N\left({\mu}_i,{0.3}^2\right),\kern1em i=1,2\hfill \\ {}\hfill {\beta}_1^t={\beta}_2^t={\beta}_3^t=3.0,\kern0.8em {\boldsymbol{\mu}}^{(0)}={\left[5.00,\kern0.2em 5.00\right]}^{\mathrm{T}}\hfill \end{array} $$
(25)
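
The three performance functions of (25) translate directly into code; evaluating them at the reported optimum (5.8545, 3.4236) confirms that it lies inside the feasible region (all g j < 0). This is a sketch for checking the problem setup, not the paper's implementation.

```python
import numpy as np

def constraints(x):
    """The three performance functions g1, g2, g3 of (25); x = [X1, X2]."""
    x1, x2 = x
    g1 = 1.0 - x1**2 * x2 / 20.0
    g2 = 1.0 - (x1 + x2 - 5.0)**2 / 30.0 - (x1 - x2 - 12.0)**2 / 120.0
    g3 = 1.0 - 80.0 / (x1**2 + 8.0 * x2 + 5.0)
    return np.array([g1, g2, g3])

mu_opt = np.array([5.8545, 3.4236])   # reported RBDO optimum
g_at_opt = constraints(mu_opt)        # all negative: the optimum is feasible
```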

The objective function value decreases down and to the right in the design region. The third constraint g 3 (X) = 0 is highly nonlinear. The optimal design μ opt for RBDO is located at (5.8545, 3.4236), as seen in Fig. 8 (a). The shaded area is the feasible region, and the circle around the optimal design is the β-circle.

Fig. 8

Plot of the feasible region and the LHS for example 2 a True constraint boundaries and the feasible region b Latin hypercube sampling

Here, dashed lines are the true constraint boundaries; solid lines are the predicted boundaries by Kriging; the shaded area is the feasible region; squares denote the sample points from the LHS method.

The Latin hypercube sampling method is shown in Fig. 8 (b), in which 50 points are evenly selected within the design region. However, many sample points are located outside the feasible region, and the fitted limit state constraints 1 and 3 are not accurate.

The CBS method adopts grid sampling with nine points (a 3-level full factorial design) as the initial sampling and applies 33 sample points to approximate the three constraints, as seen in Fig. 9 (a). Most of the sample points are selected on the limit state constraints within the feasible region, and the boundaries of the feasible region are well fitted. However, only a few sample points are located in the vicinity of the optimal design μ opt, which means that most of the samples contribute little to improving the accuracy of the optimization result.

Fig. 9

Sample points in CBS and IBS for example 2 a Constraint boundary sampling b Importance boundary sampling

The proposed IBS method also adopts grid sampling with nine points as the initial sampling, and it uses only 21 sample points to approximate the three constraints, as seen in Fig. 9 (b). Most of the sample points are located on the important parts of the limit state constraint boundaries. The region in the vicinity of the optimal design μ opt is accurately fitted, which ensures the accuracy of the RBDO results.

Here, dashed lines are the true constraint boundaries; solid lines are the predicted boundaries by Kriging; the shaded area is the feasible region; triangles and forks denote the sample points from the CBS and IBS methods respectively.

The comparison results for example 2 are shown in Table 1. It can be seen that the analytical method is not efficient, and the CBS method is more efficient than LHS. The number of constraint function calls in IBS is only 21, so IBS is very efficient.

Table 1 Summary of the optimization results for example 2

For the second constraint, the reliability indexes in IBS (β 2 = 3.0252) and CBS (β 2 = 3.0251) are almost equal to each other. However, for the third constraint, the reliability index in IBS (β 3 = 2.9946) is closer to the target value (β 3 t = 3.0) than that in CBS (β 3 = 2.9839). The relative error in IBS also has the smallest value 0.0003 % as seen in Fig. 10. Therefore, the proposed IBS method is very accurate.

Fig. 10

Relative errors of the LHS, CBS and IBS methods for example 2

In this example, the reliability indexes for the optimal design are (Inf., 3.0247, 2.9937). β 3 is smaller than the target value β 3 t = 3.0 because the MPP-based RBDO method has a common issue: it introduces some error for highly nonlinear problems. This paper builds on the MPP-based RBDO method; although it is robust, it may have some error for such problems.

4.3 Numerical example 3

This example is a multi-dimensional problem. There are sixteen random design variables X i , i = 1,…,16, which are statistically independent and have normal distributions. The constraint function is highly nonlinear (Shan et al. 2009). This problem can be expressed as follows:

$$ \begin{array}{l} find\kern0.5em \boldsymbol{\mu} ={\left[{\mu}_1,\cdots, {\mu}_{16}\right]}^{\mathrm{T}}\\ {} \min \kern0.4em f\left(\boldsymbol{\mu} \right)={\displaystyle \sum_{i=1}^{16}{\left({\mu}_i-{c}_i\right)}^2}\\ {}s.t.\kern0.6em prob\kern0.3em \left[{g}_1\left(\boldsymbol{X}\right)>0\right]\ge \Phi \left({\beta}^t\right)\\ {}{g}_1\left(\boldsymbol{X}\right)={\displaystyle \sum_{i=1}^{16}{X}_i\left(\tau + \ln \left(\frac{X_i}{X_1+\cdots +{X}_{16}}\right)\right)}\\ {}\kern0.1em {X}_i\sim N\left({\mu}_i,{0.3}^2\right),\kern0.6em 1e-6\le {\mu}_i\le 10,\kern0.6em i=1,\cdots, 16\\ {}\boldsymbol{c}=\Big[2.20,\kern0.5em 4.10,\kern0.5em 5.20,\kern0.5em 4.80,\kern0.75em 2.90,\kern0.5em 4.10,\kern0.5em 6.20,\kern0.5em 2.90,\\ {}4.30,\ 1.70,\kern0.5em 5.20,\ 3.40,\kern0.5em 2.70,\ 6.10,\ 3.50,\ 4.10\Big]\\ {}\tau =-17.164,\kern0.7em {\beta}_1^t=3.0,\kern0.8em {\boldsymbol{\mu}}^{(0)}={\left[4.0,\cdots, 4.0\right]}^{\mathrm{T}}\end{array} $$
(26)
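
Despite its sixteen dimensions and nonlinearity, the constraint of (26) is cheap to code; the sketch below evaluates it at the initial design μ (0) = [4.0, …, 4.0] for illustration (at this symmetric point the sum reduces to 64·(τ + ln(1/16))).

```python
import numpy as np

TAU = -17.164

def g1(x):
    """Constraint of (26): sum_i x_i * (tau + ln(x_i / sum(x)))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * (TAU + np.log(x / np.sum(x)))))

x0 = np.full(16, 4.0)    # initial design mu(0) = [4.0, ..., 4.0]
value = g1(x0)           # equals 64 * (TAU + ln(1/16)) by symmetry
```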

The comparison results for example 3 are shown in Table 2. The analytical method is not efficient, and CBS is more efficient than LHS. The total number of function calls in the proposed IBS method is only 219, so IBS is very efficient.

Table 2 Summary of the optimization results for example 3

LHS is not accurate: its relative error is the largest, as seen in Fig. 11, and the reliability index evaluated by MCS is 3.5624, which is far from the target value 3.0. The optimal design from the IBS method is very close to the result of the analytical method (the relative error is 0.0235 %), and the reliability index in IBS is closer to the target value 3.0 than that of CBS. So the proposed IBS method is very accurate.

Fig. 11

Relative errors of the LHS, CBS and IBS methods for example 3

4.4 A speed reducer

The speed reducer shown in Fig. 12 is used to rotate the engine and propeller at an efficient velocity in a light plane (Ju and Lee 2008). This problem has seven random design variables and eleven probabilistic constraints. The objective is to minimize the weight, and the probabilistic constraints are related to physical quantities such as bending stress, contact stress, longitudinal displacement, stress of the shafts, and geometry constraints. The random design variables are the gear width (X 1), the gear module (X 2), the number of pinion teeth (X 3), the distances between bearings (X 4, X 5), and the diameters of the two shafts (X 6, X 7).

Fig. 12

A speed reducer

The description of the RBDO model of the speed reducer is shown as follows:

$$ \begin{array}{c}\hfill \begin{array}{l} find\kern.5em \boldsymbol{\mu} ={\left[{\mu}_1,\cdots, {\mu}_7\right]}^{\mathrm{T}}\\ {} \min \kern.5em f\left(\boldsymbol{\mu} \right)=0.7854{\mu}_1{\mu}_2^2\left(3.3333{\mu}_3^2+14.9334{\mu}_3-43.0934\right)-1.508{\mu}_1\left({\mu}_6^2+{\mu}_7^2\right)\\ {}\kern6.8em +7.477\left({\mu}_6^3+{\mu}_7^3\right)+0.7854\left({\mu}_4{\mu}_6^2+{\mu}_5{\mu}_7^2\right)\end{array}\hfill \\ {}\hfill \begin{array}{l}s.t.\kern2.1em prob\left[{g}_j\left(\boldsymbol{X}\right)>0\right]\le \varPhi \left(-{\beta}_j^t\right),\kern0.8em j=1,\cdots, 11\\ {} where\kern0.6em {g}_1\left(\boldsymbol{X}\right)=\frac{27}{X_1{X}_2^2{X}_3}-1,\kern0.6em {g}_2\left(\boldsymbol{X}\right)=\frac{397.5}{{X}_1{X}_2^2{X}_3^2}-1,\\ {}\kern3.6em {g}_3\left(\boldsymbol{X}\right)=\frac{1.93{X}_4^3}{X_2{X}_3{X}_6^4}-1,\kern0.6em {g}_4\left(\boldsymbol{X}\right)=\frac{1.93{X}_5^3}{X_2{X}_3{X}_7^4}-1,\\ {}\kern3.6em {g}_5\left(\boldsymbol{X}\right)=\frac{\sqrt{{\left(745{X}_4/\left({X}_2{X}_3\right)\right)}^2+16.9\times {10}^6}}{0.1{X}_6^3}-1100,\\ {}\kern3.6em {g}_6\left(\boldsymbol{X}\right)=\frac{\sqrt{{\left(745{X}_5/\left({X}_2{X}_3\right)\right)}^2+157.5\times {10}^6}}{0.1{X}_7^3}-850,\\ {}\kern3.6em {g}_7\left(\boldsymbol{X}\right)={X}_2{X}_3-40,\kern1.1em {g}_8\left(\boldsymbol{X}\right)=5-\frac{X_1}{X_2},\\ {}\kern3.6em {g}_9\left(\boldsymbol{X}\right)=\frac{X_1}{X_2}-12,\kern2.1em {g}_{10}\left(\boldsymbol{X}\right)=\frac{1.5{X}_6+1.9}{X_4}-1,\end{array}\hfill \\ {}\hfill \begin{array}{l}\kern3.6em {g}_{11}\left(\boldsymbol{X}\right)=\frac{1.1{X}_7+1.9}{X_5}-1,\\ {}\kern3.6em {\beta}_1^t=\cdots ={\beta}_{11}^t=3.0,\\ {}\kern3.6em 2.6\le {X}_1\le 3.6,\kern0.8em 0.7\le {X}_2\le 0.8,\kern0.8em 17\le {X}_3\le 28,\kern0.2em \\ {}\kern3.5em 7.3\le {X}_4\le 8.3,\kern0.8em 7.3\le {X}_5\le 8.3,\kern0.7em \\ {}\kern3.5em 2.9\le {X}_6\le 3.9,\kern0.8em 5.0\le {X}_7\le 5.5,\\ {}\kern3.5em {X}_i\sim N\left({\mu}_i,\kern0.4em {0.005}^2\right),\kern2.2em i=1,\cdots, 7,\\ {}\kern3.6em {\boldsymbol{\mu}}^{(0)}={\left[3.2,\kern0.3em 0.75,\kern0.5em 23.0,\kern0.5em 8.0,\kern0.3em 8.0,\kern0.5em 3.6,\kern0.45em 5.0\right]}^{\mathrm{T}}\end{array}\hfill \end{array} $$
(27)
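
The objective and the eleven performance functions of (27) translate directly into code; the sketch below evaluates them at the initial design μ (0) for illustration (function names are ours, not the paper's).

```python
import numpy as np

def reducer_objective(m):
    """Weight objective of (27); m = [mu1, ..., mu7]."""
    m1, m2, m3, m4, m5, m6, m7 = m
    return (0.7854 * m1 * m2**2 * (3.3333 * m3**2 + 14.9334 * m3 - 43.0934)
            - 1.508 * m1 * (m6**2 + m7**2)
            + 7.477 * (m6**3 + m7**3)
            + 0.7854 * (m4 * m6**2 + m5 * m7**2))

def reducer_constraints(x):
    """The eleven performance functions g1..g11 of (27); failure when g > 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        27.0 / (x1 * x2**2 * x3) - 1.0,
        397.5 / (x1 * x2**2 * x3**2) - 1.0,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1.0,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1.0,
        np.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (0.1 * x6**3) - 1100.0,
        np.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (0.1 * x7**3) - 850.0,
        x2 * x3 - 40.0,
        5.0 - x1 / x2,
        x1 / x2 - 12.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ])

mu0 = np.array([3.2, 0.75, 23.0, 8.0, 8.0, 3.6, 5.0])  # initial design mu(0)
f0 = reducer_objective(mu0)
g0 = reducer_constraints(mu0)
```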

The comparison results for the speed reducer are shown in Table 3. The number of constraint function calls in the analytical method is 1218, which is not efficient. CBS is more efficient than LHS. The number of function calls in the proposed IBS method is only 50, so IBS is very efficient.

Table 3 Summary of the optimization results for the speed reducer

The optimal design from the IBS method is almost identical to the result of the analytical method; the relative error is 0.0121 %, as seen in Fig. 13. Table 4 shows the MCS results. In IBS, the reliability indexes for constraints 5, 6, 8 and 11 are closer to the target value (β i t = 3.0, i = 1,…,11) than those in CBS. Therefore, the proposed IBS method is very accurate among the Kriging-model-based RBDO methods.

Fig. 13

Relative errors of the LHS, CBS and IBS methods for the speed reducer

Table 4 Summary of the MCS results for the speed reducer

4.5 Box girder design

As shown in Fig. 14, the structure is a large-scale box girder with a total length of 14,620 mm. The target of the box girder design is to reduce its total weight. There are six random design variables d i , i = 1,…,6, which denote the thicknesses of the stiffeners at different parts of the box girder. The main loads applied to the box girder are F 1, F 2, F 3, F 4, F 5 and F 6, as seen in Fig. 15. There are eight probabilistic constraints g i (X), i = 1, ⋯, 8, which denote the displacements along the x and z axes at places 1, 2, 3 and 4.

Fig. 14

Box girder design

Fig. 15

Force analysis of the box girder

These probabilistic constraints are implicit and need computer simulations. Each finite element simulation takes about 2 min, so the total computational cost is unaffordable when these computer simulations are directly applied in RBDO. The optimization problem is defined as below:

$$ \begin{array}{l} find\kern0.6em \left({d}_1,\kern0.1em {d}_2,{d}_3,{d}_4,{d}_5,{d}_6\right)\kern0.3em \\ {} \min \kern0.2em :\kern0.5em total\kern0.3em weight\\ {}s.t.\kern1.6em prob\left({g}_j\left(\boldsymbol{X}\right)<1.059\right)\ge \varPhi \left({\beta}^t\right)\kern1em j=1,\cdots, 8\\ {}\kern3em {X}_i\sim N\left({d}_i,\ {0.2}^2\right),\kern0.5em {\beta}_j^t=3.0\\ {}\kern3em 16.00\le {d}_i\le 22.00\kern0.7em i=1,\cdots, 6\kern0.95em \end{array} $$
(28)

The comparison results for the box girder design are shown in Table 5. The number of function calls in the analytical method is 4,736, which is not efficient. Though the CBS method uses a smaller number of function calls than LHS, it cannot yield an accurate result, and its relative error is 0.1589 % as seen in Fig. 16. In the proposed IBS method, the number of function calls is 193, which means the computational cost is reduced significantly and IBS is very efficient. The optimal design obtained from IBS is almost identical to the result of the analytical method; the relative error is 0.0007 %. Therefore, IBS is also very accurate.

Table 5 Summary of the optimization results for the box girder design
Fig. 16

Relative errors of the LHS, CBS and IBS methods for the box girder design

5 Conclusion

In this paper, the importance boundary sampling method is proposed to improve the efficiency of Kriging-model-based RBDO methods. Rather than fitting all parts of the limit state constraint boundaries precisely within the whole design region, the proposed IBS method mainly selects sample points on the important parts of these constraint boundaries. In the RBDO process, two importance coefficients are proposed to identify the critical parts of the boundaries. In addition, in IBS, the sampling and optimization processes are conducted alternately, so that the optimization results can be used in the following sampling process to make the selection of the sample points more appropriate.

Several examples are tested to verify the accuracy and efficiency of the proposed method. Through the examples it can be seen that the proposed IBS method is very efficient. By using the importance coefficients, IBS identifies the critical parts of the limit state constraint boundaries and then chooses sample points mainly on these important boundary parts. Most of the sample points provide useful information to the optimization process, and the Kriging models in the vicinity of the optimal design are well fitted. The results from IBS are almost identical to those of the analytical method, which verifies that the proposed IBS method is very accurate. Also, the IBS method uses the smallest number of constraint function calls, which confirms that the computational cost is reduced significantly and the proposed IBS method is very efficient.

However, there are still many challenges that need to be overcome to improve the accuracy of the Kriging model. For example, selecting a small number of initial sample points can lead to a loss of information in certain regions of the design space, while selecting a large number of initial sample points leads to a higher computational cost. The initial samples also determine whether the global solution can always be found. Therefore, finding an efficient method for deciding the initial training set size is a significant research topic. In future work, we will study this issue to make the IBS method more applicable to RBDO problems.