1 Introduction

Various methods have been developed to treat uncertainties in reliability analysis and reliability-based design optimization (RBDO) of engineering systems (Au et al. 2003; Chen et al. 1997; Du and Chen 2004; Wang et al. 2011; Youn et al. 2005; Liu et al. 2016; Liu and Wen 2018). In RBDO, reliability is defined as the probability that the system performance will meet its marginal value when uncertainties are taken into account. Uncertainty can be probabilistic and/or non-probabilistic; in the former case, the uncertainty is generally characterized using random variables and random fields. Tremendous academic effort has focused on improving the accuracy and efficiency of RBDO techniques. For example, new reliability estimation methods have been developed to improve the efficiency and stability of numerical algorithms (Most and Knabe 2010; Youn et al. 2005; Youn et al. 2008), including two main optimization strategies, namely, the double-loop and decoupled approaches (Chen et al. 1997; Cheng et al. 2006; Du and Chen 2004; Liang et al. 2007). In traditional RBDO, reliability analysis requires the precise probability distributions of the uncertain variables, which may be impossible to obtain for some uncertainties due to limited experimental data. To overcome this deficiency of the probabilistic approach, a number of attempts have been made in the last two decades to apply non-probabilistic models (convex models or interval models) in RBDO. Ben-Haim (1994) and Ben-Haim and Elishakoff (1995) first proposed the concept of non-probabilistic reliability based on convex model theory. Since then, non-probabilistic reliability-based design optimization using convex models has been addressed by a number of studies, including the anti-optimization technique (Elishakoff et al. 1994), the interval analysis method (Qiu and Elishakoff 1998) and the convex model superposition method (Qiu and Elishakoff 1998). Kang and his colleagues (Kang and Luo 2009; Luo et al. 2009b) proposed a non-probabilistic reliability-based continuum topology optimization method based on convex models, in which reliability constraints are reformulated into equivalent constraints on the concerned performance. They further extended this method to address nonlinear topology optimization problems (Kang and Luo 2009). Cheng (2009) proposed a novel robust reliability optimization method for designing mechanical components based on a non-probabilistic interval model. Jiang et al. (2011a) proposed a correlation analysis technique for convex models involving non-probabilistic uncertainties. In addition, an RBDO method considering dependent interval variables was investigated using a sequential optimization strategy (Du 2012).

The research discussed above considers only one kind of uncertainty; in other words, either probabilistic or non-probabilistic uncertainty is involved. However, it is well known that practical engineering problems normally involve various kinds of uncertainties. Some uncertainties can be characterized with probabilistic models with precise distributions, while others must be treated with non-probabilistic models with bounded information. New strategies should therefore be developed to deal with such circumstances. To this end, several structural reliability analysis methods have emerged recently for problems with mixed uncertain variables. Guo and Lu (2002) developed a hybrid probabilistic and non-probabilistic reliability approach based on interval analysis for linear problems via a two-stage limit-state function. Penmetsa and Grandhi (2002) used a function approximation method to improve the efficiency of reliability analysis with random and interval variables. For direct and inverse reliability analysis, Du et al. (2005) and Guo and Du (2009) proposed semi-analytic methods for hybrid reliability sensitivity analysis with both random and interval variables. Qiu and Wang (2010) and Wang and Qiu (2010) put forward two hybrid models through the use of interval arithmetic, namely, a probabilistic and non-probabilistic hybrid model and a probabilistic and interval hybrid model. An iterative procedure was developed by Luo et al. (2009a) for obtaining the worst-case point and the most likely point from a probability and convex set model. Later, different forms of limit-state strips resulting from the interval uncertainties were considered and summarized, based on which more efficient hybrid reliability analysis methods (Jiang et al. 2013; Jiang et al. 2011a) were formed.

Compared to reliability analysis, research on RBDO of hybrid uncertainty systems remains in its infancy. Du et al. (2005) addressed a hybrid RBDO problem in which reliability was determined with the worst-case combination of interval variables. Kang and Luo (2010) presented an RBDO approach based on a probability and convex set hybrid model. Yao et al. (2013) quantified mixed uncertainties from statistical laws and evidence theory and then formulated an optimization design based on the given hybrid reliability index. Wang et al. (2016) investigated a new formulation and numerical solution for the RBDO of structures exhibiting random and uncertain-but-bounded (interval and convex) mixed uncertainties. Xia et al. (2015) proposed a hybrid perturbation random moment method and a hybrid perturbation inverse mapping method that transform nested-loop optimizations into single-loop optimizations. The sequential optimization and reliability assessment (SORA) method is one of the most famous strategies for efficiently solving RBDO problems. However, algorithms that use SORA to solve RBDO problems with hybrid models have seldom been studied.

For complex practical problems involving expensive black-box simulation models, the direct evaluation of probabilistic constraints in RBDO is computationally prohibitive. To improve practical utility, a preferable approach is to employ approximation models, often referred to as metamodels, to replace the actual limit-state functions of expensive simulation models during reliability analysis and RBDO. To improve the efficiency of RBDO, a variety of metamodelling techniques have been applied to replace true constraint function evaluations. Youn and Choi (2004) and Lee and Song (2011) used moving least squares methods for RBDO. Kim and Choi (2008) proposed a response surface method (RSM) with prediction interval estimation. Lee and Jung (2008) suggested a constraint boundary sampling (CBS) method and Kriging model for RBDO. Bect et al. (2012) proposed the sequential design of computer experiments for the estimation of the probability of failure. Basudhar and Missoum (2008) proposed adaptive explicit decision functions for probabilistic design and optimization using support vector machines (SVMs). Hu and Youn (2009) applied adaptive-sparse polynomial chaos expansion to the reliability analysis and design of complex engineering systems. Bichon et al. (2008) proposed an efficient global reliability analysis method for nonlinear implicit performance functions. Echard et al. (2011) proposed an active learning reliability method combining Kriging and Monte Carlo simulation. Cheng and Li (2008) proposed a reliability analysis method using artificial neural network (ANN)-based genetic algorithms. Zhao et al. (2011b) used the RSM and sequential sampling for probabilistic design. Basudhar and Missoum (2008) proposed an adaptive sampling technique for RBDO based on SVMs. Picheny et al. (2010) addressed the issue of designing metamodel experiments with sufficient accuracy for a certain level of the response value.
Zhao et al. (2011a) proposed a dynamic Kriging approach for design optimization. Chan et al. (2010) developed a modified efficient global optimization algorithm for measuring maximal reliability in a probabilistic constrained space. Zhuang and Pan (2012) proposed a new sequential sampling method for design under uncertainty. Wang and Wang (2013) developed a maximum confidence enhancement (MCE)-based sequential sampling approach for RBDO using surrogate models. Chen et al. (2014) proposed a local adaptive sampling method for RBDO using the Kriging model. Li et al. (2016) presented a new local approximation method that employs the most probable point (MPP) for RBDO. Applications of various metamodelling techniques have thus been extensively studied in support of conventional probability-based structural reliability analysis and design. In the conventional probability model, the limit state defines a unique surface referred to as the limit-state surface. However, when random and uncertain-but-bounded variables coexist, there is a cluster of limit-state surfaces that forms a critical band region in the standard space over all possible values of the bounded uncertainties. Due to this characteristic, it is quite difficult to accurately construct a cluster of limit-state surfaces from an approximating model, and some computational cost is therefore wasted on reliability assessments of inactive probabilistic constraints. The proposed method instead involves local sampling around the minimum performance target point (MPTP) and the worst-case point (WCP) (i.e., the worst-case realization of the bounded uncertainties), rather than requiring the whole limit-state boundary within the design region to be accurately approximated.
Since the probability and convex set hybrid model is fundamentally different from the conventional probability model, the applicability of different metamodelling techniques to RBDO with the hybrid model may vary considerably, and developing correspondingly efficient algorithms is therefore important.

In this paper, a sequential sampling method is proposed to solve RBDO problems based on the hybrid model of probability and convex set, which significantly improves the computational efficiency and accuracy of RBDO. The remainder of this paper is organized as follows. A definition of reliability based on probability and convex set hybrid models is introduced in Section 2. A sequential optimization and reliability assessment method is introduced in Section 3. The Kriging technique employed and a sequential improvement sampling strategy are proposed in Section 4. Three numerical examples are investigated in Section 5. Finally, conclusions are given in Section 6.

2 Reliability definition based on probability and convex set hybrid models

2.1 Classical probabilistic-based reliability model

In a reliability analysis where only random variables X are involved, the failure probability can be given as

$$ {P}_f=\Pr \left[G\left(\mathbf{X}\right)\le 0\right], $$
(1)

where X denotes an m-dimensional vector of independent random variables, G(⋅) is a limit-state function, G(X) ≤ 0 defines the failure event, and Pf denotes the failure probability, which can be computed from the following equation:

$$ {P}_f=\int \cdots {\int}_{G\left(\mathbf{X}\right)\le 0}{p}_{\mathrm{X}}\left(\mathbf{X}\right)d\mathbf{X}, $$
(2)

where pX(X) denotes the joint probability density function (PDF) of X. It is generally difficult to solve the above integral directly, so the first-order reliability method (FORM) is widely used in uncertainty analysis. When applying FORM, the random variables X = (X1, X2, ⋯, Xm) in X-space are transformed into standard normal random variables u = (u1, u2, ⋯, um) in u-space:

$$ \Phi \left({u}_i\right)={F}_{X_i}\left({X}_i\right), $$
(3)
$$ {u}_i={\Phi}^{-1}\left[{F}_{X_i}\left({X}_i\right)\right],i=1,2,\cdots, m, $$
(4)

where \( {F}_{X_i} \) is the cumulative distribution function (CDF) of the random variable Xi and Φ−1 denotes the inverse CDF of the standard normal distribution. The limit-state function G in u-space can be written as

$$ g\left(\mathbf{X}\right)=g\left(T\left(\mathbf{u}\right)\right)=G\left(\mathbf{u}\right) $$
(5)

where T is a probability transformation function.
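As an implementation aside, the transformation of Eqs. (3)–(5) can be sketched in a few lines. The lognormal marginal below is a hypothetical choice made only for illustration; SciPy supplies the required CDFs and their inverses:

```python
import numpy as np
from scipy.stats import norm, lognorm

# Hypothetical marginal distribution for one component X_i.
dist = lognorm(s=0.2, scale=np.exp(1.0))

def to_u(x):
    """X-space -> u-space, Eq. (4): u = Phi^{-1}(F_X(x))."""
    return norm.ppf(dist.cdf(x))

def to_x(u):
    """u-space -> X-space, the map T in Eq. (5): x = F_X^{-1}(Phi(u))."""
    return dist.ppf(norm.cdf(u))
```

Applying `to_u` followed by `to_x` recovers the original value, and the median of the distribution maps to u = 0, as expected for an isoprobabilistic transformation.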

The integral included in Eq. (2) can be rewritten in u-space:

$$ \Pr \left[G\left(\mathbf{X}\right)\le 0\right]={\int}_{G\left(\mathbf{u}\right)\le 0}{p}_{\mathbf{u}}\left(\mathbf{u}\right)d\mathbf{u} $$
(6)

where pu(u) is the joint probability density function (PDF) of u. Eq. (6) can ultimately be solved using the reliability index approach (RIA) (Liu and Der Kiureghian 1991; Tu et al. 1999), where an equivalent optimization problem is constructed as follows:

$$ {\displaystyle \begin{array}{l}\beta =\underset{\mathbf{u}}{\min }\ \left\Vert \mathbf{u}\right\Vert \\ {}\mathrm{s}.t.\kern0.5em g\left(\mathbf{u}\right)=0\end{array}} $$
(7)

where ‖ ⋅ ‖ denotes the vector norm. The optimum solution u∗ of the above problem is called the most probable point (MPP) or the design point.
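As a sketch of how Eq. (7) can be solved numerically, the snippet below finds the MPP of a hypothetical linear limit state with a general-purpose SLSQP optimizer; dedicated algorithms such as HL-RF are typically used in practice:

```python
import numpy as np
from scipy.optimize import minimize

def g(u):
    # Hypothetical linear limit state in u-space; failure corresponds to g <= 0.
    return 3.0 - u[0] - u[1]

# Eq. (7): the reliability index is the distance from the origin to the
# nearest point on the limit-state surface g(u) = 0 (the MPP).
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={'type': 'eq', 'fun': g}, method='SLSQP')
u_mpp = res.x
beta = float(np.linalg.norm(u_mpp))
```

For this linear example the MPP lies at (1.5, 1.5), giving β = 3/√2.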

Due to its efficiency, computational robustness and convenience in sensitivity analysis, the performance measure approach (PMA) is widely used for reliability-based design optimization (Lee et al. 2002; Tu et al. 1999). The basic premise of PMA is to determine the minimum performance target point (MPTP) rather than the MPP. Using PMA, the feasibility of a given design can be evaluated by solving the following optimization problem:

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{u}}{\min\ }\kern0.5em g\left(\mathbf{u}\right)\kern1em \\ {}\mathrm{s}.t.\left\Vert \mathbf{u}\right\Vert ={\beta}_t\end{array}} $$
(8)

where βt denotes the target reliability index. The optimum point \( {\mathbf{u}}_{\mathrm{MPTP}}^R \) is the MPTP.
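A corresponding sketch of the PMA search in Eq. (8), again for a hypothetical linear limit state and a generic SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

def g(u):
    # Same hypothetical limit state as before; failure corresponds to g <= 0.
    return 3.0 - u[0] - u[1]

beta_t = 2.0  # target reliability index
# Eq. (8): minimize g(u) on the sphere ||u|| = beta_t; the minimizer is the MPTP.
res = minimize(g, x0=np.array([1.0, 1.0]),
               constraints={'type': 'eq',
                            'fun': lambda u: u @ u - beta_t**2},
               method='SLSQP')
u_mptp, g_min = res.x, float(res.fun)
```

A positive minimum performance value g_min indicates that the design satisfies the target reliability index βt.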

2.2 Non-probabilistic reliability model based on a convex model

Convex models are used to describe bounded uncertain variables without using a precise probability distribution. The two convex models that are most widely used are the “interval model” and the “ellipsoidal model”. In practical situations, it might be suitable to classify bounded uncertain variables into different groups such that variables in the same group are correlated, whereas variables of different groups are uncorrelated. Denote the vector of convex variables as Y ∈ Rn. Suppose the uncertain variables are divided into NE groups; then, Y can be written as

$$ {\mathbf{Y}}^{\mathrm{T}}=\left[{\mathbf{Y}}_1^{\mathrm{T}},{\mathbf{Y}}_2^{\mathrm{T}},...,{\mathbf{Y}}_{N_E}^T\right] $$
(9)
$$ \mathrm{Y}\in E=\left\{\mathrm{Y}|{\left({\mathrm{Y}}_i-{\widehat{Y}}_i\right)}^{\mathrm{T}}{\mathrm{W}}_i\left({\mathrm{Y}}_i-{\widehat{Y}}_i\right)\le {\varepsilon}_i^2,i=1,2,...,{N}_E\right\} $$
(10)

where \( \widehat{{\boldsymbol{Y}}_i} \) is the nominal value of Yi, Wi is a symmetric positive-definite matrix called the characteristic matrix of the ellipsoid, and εi is a real number defining the magnitude of the parameter variability. ni is the number of bounded uncertainties in the ith group, and \( {\sum}_{i=1}^{N_E}{n}_i=n \). As shown in Fig. 1 (a)–(c), there are three specific multi-ellipsoid cases for problems involving three uncertain parameters, in which the parameters may be divided into three groups (Y = {[Y1], [Y2], [Y3]}T), two groups (Y = {[Y1, Y2], [Y3]}T) or one group (Y = {[Y1, Y2, Y3]}T). We direct the reader to the research article by Luo et al. (2009a) for further details.

Fig. 1
figure 1

Specific multi-ellipsoid cases for uncertain parameters (a) Three-dimensional interval model. (b) Multi-ellipsoid model defined by an ellipsoid (for Y1 and Y2) and interval (for Y3) (c) Three-dimensional single-ellipsoid model

In Eq.(10), the ith characteristic matrix can be decomposed by solving the following eigenvalue problem:

$$ {\mathbf{Q}}_i^T{\mathbf{W}}_i{\mathbf{Q}}_i={\varLambda}_i,\kern1em {\mathbf{Q}}_i^T{\mathbf{Q}}_i=\mathbf{I}, $$
(11)

where Qi is an orthogonal matrix, Λi is the diagonal matrix of eigenvalues of Wi and Ι is the identity matrix.

By introducing the vectors defined as

$$ {\mathbf{q}}_i=\left(1/{\varepsilon}_i\right){\boldsymbol{\Lambda}}_i^{1/2}{\mathbf{Q}}_i^{\mathrm{T}}\left({Y}_i-{\widehat{Y}}_i\right) $$
(12)

the convex model in Eq. (10) becomes

$$ \mathbf{E} =\left\{\mathbf{q}|\sqrt{{\mathbf{q}}_i^{\mathrm{T}}{\mathbf{q}}_i}\le 1\kern0.5em \left(i=1,2,...,{N}_E\right)\right\} $$
(13)

where qi is the normalized (standard) vector of the ith group of uncertain variables Yi. Here, the ellipsoids are transformed into spheres of unit radius in the normalized space (q-space), and E is the transformed ellipsoid set.
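The normalization of Eqs. (11)–(12) can be illustrated as follows. The 2-D characteristic matrix, centre and radius below are hypothetical values chosen only to show that boundary points of the ellipsoid map onto the unit sphere in q-space:

```python
import numpy as np

# Hypothetical 2-D ellipsoid (Y - Yc)^T W (Y - Yc) <= eps^2.
W = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Yc = np.array([1.0, 1.0])
eps = 0.3

lam, Q = np.linalg.eigh(W)  # Eq. (11): Q^T W Q = diag(lam), Q^T Q = I

def to_q(Y):
    """Eq. (12): map Y into the unit-ball q-space."""
    return (np.sqrt(lam) * (Q.T @ (Y - Yc))) / eps

def to_Y(q):
    """Inverse map: q-space back to the physical Y-space."""
    return Yc + Q @ (eps * q / np.sqrt(lam))
```

Any Y on the ellipsoid boundary satisfies ‖q‖ = 1 after the mapping, which is exactly the unit-sphere representation of Eq. (13).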

For the more general case of a multi-ellipsoid model with NE groups of uncertainties, the non-probabilistic reliability index η can be defined as

$$ {\displaystyle \begin{array}{l}\eta =\min \left(\max \left(\sqrt{{\mathbf{q}}_1^T{\mathbf{q}}_1},\sqrt{{\mathbf{q}}_2^T{\mathbf{q}}_2},...,\sqrt{{\mathbf{q}}_{NE}^T{\mathbf{q}}_{NE}}\right)\right)\\ {}\mathrm{s}.\mathrm{t}.\kern0.5em g\left(\mathbf{q}\right)=0\end{array}} $$
(14)

2.3 Reliability definition based on probability and convex set mixed model

In practical engineering problems, random and bounded variables may coexist. To assess structural reliability, a quantitative definition of the failure state is required. We assume that the expected performance of the structure is denoted by G(X, Y) > 0. After normalization of the original uncertain variables X and Y into u and q as described in (4) and (12), the limit-state function G(X, Y) is transformed into the corresponding normalized limit-state function g(u, q). Due to the presence of convex variables, the limit state g(u, q) = 0 forms a cluster of limit-state surfaces in the probability space, as shown in Fig. 2. Its lower boundary surface SL and upper boundary surface SR can be expressed as:

$$ {S}_L:\underset{\mathbf{q}\in \mathbf{E}}{\min }g\left(\mathbf{u},\mathbf{q}\right)=0,\kern2.25em {S}_R:\underset{\mathbf{q}\in \mathbf{E}}{\max }g\left(\mathbf{u},\mathbf{q}\right)=0 $$
(15)
Fig. 2
figure 2

Limit-state strip caused by convex variables

In fact, SL and SR are the two limit-state surfaces, corresponding to different q values, that have the nearest and furthest distances to the origin, respectively. We can calculate the reliability indices of the two bounding limit-state surfaces, which together form the mixed reliability index βm (Kang and Luo 2010; Luo et al. 2009a):

$$ {\beta}^m\in \left[{\beta}^L,{\beta}^R\right] $$
(16)

where βL and βR represent the reliability indices of the lower and upper bounding limit state surfaces. Correspondingly, the failure probability of the structure is also an interval:

$$ {p}_f\in \left[{p}_f^L,{p}_f^R\right]=\left[\Phi \left(-{\beta}^R\right),\Phi \left(-{\beta}^L\right)\right] $$
(17)

where \( {p}_f^L \) and \( {p}_f^R \) are the minimum and maximum failure probability values, corresponding to the upper and lower bounding limit-state surfaces, respectively.
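Given the two bounding reliability indices, the failure probability interval of Eq. (17) follows directly from the standard normal CDF; the index values below are purely illustrative:

```python
from scipy.stats import norm

# Hypothetical bounding reliability indices of the limit-state strip.
beta_L, beta_R = 2.0, 3.0
# Eq. (17): the failure probability lies in [Phi(-beta_R), Phi(-beta_L)].
pf_min, pf_max = norm.cdf(-beta_R), norm.cdf(-beta_L)
```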

With respect to practical problems, the maximal failure probability is generally our greatest concern. Thus, to ensure design safety, we use the maximal failure probability (or the minimal reliability) to measure the design’s reliability. The following double-layer optimization problem can be obtained:

$$ {\displaystyle \begin{array}{l}\kern1.75em {\beta}^L=\underset{\mathbf{u}}{\min }\sqrt{{\mathbf{u}}^T\mathbf{u}}\ \\ {}\mathrm{s}.\mathrm{t}.\kern0.75em \underset{\mathbf{q}\in \mathbf{E}}{\min }\ g\left(\mathbf{u},\mathbf{q}\right)=0\end{array}} $$
(18)

where βL represents the minimum reliability index, from which the maximum failure probability \( {p}_f^R=\Phi \left(-{\beta}^L\right) \) can be obtained. Obviously, Eq. (18) is a very complex optimization problem. However, it is equivalent to the following single-layer optimization problem:

$$ {\displaystyle \begin{array}{l}\kern2em {\beta}^L=\underset{\mathbf{u},\mathbf{q}}{\min }\sqrt{{\mathbf{u}}^T\mathbf{u}}\ \\ {}\mathrm{s}.\mathrm{t}.\kern1.00em g\left(\mathbf{u},\mathbf{q}\right)=0\\ {}\kern2.25em {{\mathbf{q}}_i}^{\mathrm{T}}{\mathbf{q}}_i\le 1\kern0.5em \left(i=1,2,...,{N}_E\right)\end{array}} $$
(19)

A study (Lee et al. 2002) has shown that PMA can be easily applied. The equivalent optimization problem for PMA can be expressed as

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{u},\mathbf{q}}{\min }g\left(\mathbf{u},\mathbf{q}\right)\\ {}\mathrm{s}.\mathrm{t}.{\mathbf{u}}^T\mathbf{u}={\beta}_t^2\\ {}\kern1.5em {{\mathbf{q}}_i}^{\mathrm{T}}{\mathbf{q}}_i\le 1\kern0.5em \left(i=1,2,...,{N}_E\right)\\ {}\kern2em \end{array}} $$
(20)

where the optimal solutions \( {\mathbf{u}}_{MPTP}^R \) and q∗ of Eq. (20) are the minimum performance target point (MPTP) and the worst-case point (WCP), respectively. \( g\left({\mathbf{u}}_{MPTP}^R,{\mathbf{q}}^{\ast}\right) \) is the corresponding performance value of the limit-state function, and βt is the prescribed target reliability index.
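A minimal sketch of the single-loop search in Eq. (20), using a hypothetical limit state with two random variables and one convex variable and a generic SLSQP solver; the optimizer returns the MPTP and the WCP simultaneously:

```python
import numpy as np
from scipy.optimize import minimize

def g(v):
    # Hypothetical limit state; v = (u1, u2, q1) stacks the random (u)
    # and convex (q) parts. Failure corresponds to g <= 0.
    u, q = v[:2], v[2:]
    return 5.0 - u[0] - u[1] - q[0]

beta_t = 2.0
cons = [{'type': 'eq',   'fun': lambda v: v[:2] @ v[:2] - beta_t**2},  # ||u|| = beta_t
        {'type': 'ineq', 'fun': lambda v: 1.0 - v[2:] @ v[2:]}]        # q in unit ball
# Eq. (20): search for the MPTP (u) and the WCP (q) simultaneously.
res = minimize(g, x0=np.array([1.0, 1.0, 0.5]), constraints=cons,
               method='SLSQP')
u_mptp, q_wcp, g_min = res.x[:2], res.x[2:], float(res.fun)
```

For this separable example the WCP sits on the boundary of the unit ball (q1 = 1), and g_min = 4 − 2√2 > 0, so the hypothetical design meets the target reliability.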

3 Reliability-based design optimization under probability and convex set hybrid models

The general reliability-based design optimization problem based on probability and convex set hybrid models can be expressed as

$$ {\displaystyle \begin{array}{l}\kern1.75em \underset{\mathbf{d}}{\min}\kern0.5em f\left(\mathbf{d}\right)\\ {}\mathrm{s}.\mathrm{t}.\kern1.00em {\beta}^L\left[{g}_j\left(\mathbf{d},\mathbf{u},\mathbf{q}\right)\ge 0\right]\ge {\beta}_{t,j}\kern0.75em \left(j=1,2,...,{n}_g\right)\\ {}\kern2.5em {\mathbf{d}}_L\le \mathbf{d}\le {\mathbf{d}}_R\kern1.25em \end{array}} $$
(21)

where f(d) is the objective function; d = {d1, d2, ..., dn}T denotes the vector of design variables, which take the mean (or nominal) values when their variations are modelled as random (or bounded) uncertainties; βL[gj(d, u, q) ≥ 0] is the mixed reliability index corresponding to the jth performance function; and βt,j (j = 1, 2, ..., ng) is the prescribed target value of the reliability index.

The optimization problem equivalent to (21) in PMA terms is expressed as

$$ {\displaystyle \begin{array}{l}\kern1.75em \underset{\mathbf{d}}{\min}\kern0.5em f\left(\mathbf{d}\right)\\ {}\mathrm{s}.\mathrm{t}.\kern1.00em {Z}_j\left(\mathbf{d}\right)\ge 0\kern1.75em \left(j=1,2,...,{n}_g\right)\\ {}\kern2em {\mathbf{d}}_L\le \mathbf{d}\le {\mathbf{d}}_R\kern1.25em \end{array}} $$
(22)

where Zj(d) is the minimum target performance value.

As shown in Eq. (22), this is a typical nested optimization problem: the inner reliability analysis loop is embedded within the outer optimization loop, and numerous function evaluations may therefore be required. The minimum target performance value Zj(d) is obtained from

$$ {\displaystyle \begin{array}{l}{Z}_j\left(\mathbf{d}\right)=\min \kern0.5em {g}_j\left(\mathbf{d},\mathbf{u},{\mathbf{q}}^{\ast}\right)\\ {}s.t.\kern0.5em {\mathbf{u}}^T\mathbf{u}={\beta}_{t,j}^2\ \end{array}} $$
(23)

where the WCP q∗ is defined as

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{q}}{\min}\kern0.5em {g}_j\left(\mathbf{d},\mathbf{u},\mathbf{q}\right)\\ {}s.t.\kern0.5em {\mathbf{u}}^T\mathbf{u}={\beta}_{t,j}^2\ \\ {}\kern1.5em {\mathbf{q}}_i^T{\mathbf{q}}_i\le 1\kern0.5em \left(i=1,2,...,{N}_E\right)\end{array}} $$
(24)

Obviously, formulations (23) and (24) can be integrated into a single-level problem:

$$ {\displaystyle \begin{array}{l}{Z}_j\left(\mathbf{d}\right)=\underset{\mathbf{u},\mathbf{q}}{\min}\kern0.5em {g}_j\left(\mathbf{d},\mathbf{u},\mathbf{q}\right)\\ {}\kern3.25em \mathrm{s}.\mathrm{t}.{\mathbf{u}}^T\mathbf{u}={\beta}_{t,j}^2\\ {}\kern4.75em {\mathbf{q}}_i^T{\mathbf{q}}_i\le 1\kern0.5em \left(i=1,2,...,{N}_E\right)\ \end{array}} $$
(25)

Various methods have been proposed to improve computational efficiency. Du and Chen (2004) developed the sequential optimization and reliability assessment (SORA) method for efficiently solving RBDO problems. SORA performs RBDO with sequential cycles of deterministic optimization and reliability analysis. After an optimal design point is identified in the deterministic optimization loop, PMA is performed at this point in the reliability analysis loop. The result of the reliability analysis is then used to formulate a new deterministic optimization model for the following cycle to improve the reliability. In this paper, SORA is extended to solve the proposed reliability-based optimization problem with the hybrid uncertainty model. The deterministic optimization problem of the (k + 1)th cycle is formulated as

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{d}}{\min}\kern0.5em f\left(\mathbf{d}\right)\\ {}\mathrm{s}.\mathrm{t}.\kern1.00em {Z}_j\left(\mathbf{d}\right)={g}_j\left(\mathbf{d},{\mathbf{u}}_{MPTP,j}^{R,(k)},{\mathbf{q}}^{\ast, (k)}\right)\ge 0\kern1.75em \left(j=1,2,...,{n}_g\right)\\ {}\kern2.5em {\mathbf{d}}_L\le \mathbf{d}\le {\mathbf{d}}_R\kern1.25em \end{array}} $$
(26)

where \( {\mathbf{u}}_{MPTP,j}^{R,(k)} \) and q∗,(k) are the jth MPTP and WCP, respectively, obtained from the reliability analysis conducted in the kth cycle. This process is repeated until \( {g}_j\left(\mathbf{d},{\mathbf{u}}_{MPTP,j}^R,{\mathbf{q}}^{\ast}\right)\ge 0 \) (j = 1, 2, ..., ng) and the objective function becomes stable.
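The SORA cycle of Eq. (26) can be sketched as below for a deliberately simple, separable performance function (all numerical values are hypothetical); each cycle performs a deterministic optimization followed by the MPTP/WCP search of Eq. (25):

```python
import numpy as np
from scipy.optimize import minimize

beta_t = 2.0  # target reliability index

def g(d, u, q):
    # Hypothetical separable performance function; the design is feasible
    # when g >= 0 at the MPTP/WCP.
    return d - 0.5 * u - 0.2 * q

d_k, u_k, q_k = 0.0, 0.0, 0.0  # cycle 0: nominal uncertainty values
for cycle in range(10):
    # Deterministic optimization, Eq. (26): min f(d) = d subject to
    # g(d, u_k, q_k) >= 0, with the uncertainty values frozen.
    det = minimize(lambda d: d[0], x0=[d_k], bounds=[(0.0, 5.0)],
                   constraints={'type': 'ineq',
                                'fun': lambda d: g(d[0], u_k, q_k)},
                   method='SLSQP')
    d_new = det.x[0]
    # Reliability analysis, Eq. (25): MPTP/WCP search at the new design.
    rel = minimize(lambda v: g(d_new, v[0], v[1]), x0=[1.0, 0.5],
                   constraints=[{'type': 'eq',
                                 'fun': lambda v: v[0] ** 2 - beta_t ** 2},
                                {'type': 'ineq',
                                 'fun': lambda v: 1.0 - v[1] ** 2}],
                   method='SLSQP')
    u_k, q_k = rel.x
    converged = abs(d_new - d_k) < 1e-6 and g(d_new, u_k, q_k) >= -1e-6
    d_k = d_new
    if converged:
        break
```

For this separable example the reliability analysis simply shifts the deterministic constraint by the worst-case uncertainty contribution, and the cycles settle at d = 0.5βt + 0.2 = 1.2.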

4 A sequential sampling method for reliability-based design optimization

4.1 Kriging theory

In this study, the Kriging model (Sakata et al. 2003; Simpson et al. 2001) is used. Kriging treats the deterministic response of a system as a stochastic process y(x) (x = {X, Y}T) comprising a regression model and a stochastic error term:

$$ y\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+Z\left(\mathbf{x}\right) $$
(27)

where f(x) is the known regression function and Z(x) is a stationary stochastic process with zero mean, variance σ2 and non-zero covariance. The f(x) term provides a global approximation of the design response, while Z(x) defines ‘localized’ deviations whose covariance can be formulated as

$$ \operatorname{cov}\left[Z\left({\mathbf{x}}^i\right),Z\left({\mathbf{x}}^j\right)\right]={\sigma}^2R\left({\mathbf{x}}^i,{\mathbf{x}}^j\right) $$
(28)

where R is the n0 × n0 symmetric positive-definite correlation matrix with a unit diagonal, and R(xi, xj) is the correlation function between sample points xi and xj. The correlation function R(xi, xj) used in this study is

$$ R\left({\mathbf{x}}^i,{\mathbf{x}}^j\right)=\exp \left[-\sum \limits_{k=1}^{n_p}{\theta}_k{\left|{x}_k^i-{x}_k^j\right|}^2\right] $$
(29)

where θk is an unknown correlation parameter, np is the number of design variables, and \( \mid {x}_k^i-{x}_k^j\mid \) is the distance between the kth components of points xi and xj. The estimator \( \tilde{y}\left(\mathbf{x}\right) \) of the response y(x) at an untried point x can be formulated as

$$ \tilde{y}\left(\mathbf{x}\right)=\widehat{\beta}+{\mathbf{r}}^T\left(\mathbf{x}\right){\mathbf{R}}^{-1}\left(\mathbf{y}-\mathbf{f}\widehat{\beta}\right) $$
(30)

where rT(x) is the correlation vector between a prediction point x and the n0 sample points, the vector y collects the responses at the sampling points, and f is an n0 × 1 vector of ones (for the constant regression model). The vector rT(x) and the scalar parameter \( \widehat{\beta} \) are given by

$$ {\mathbf{r}}^T\left(\mathbf{x}\right)={\left(R\left[\mathbf{x},{x}_1\right],R\left[\mathbf{x},{x}_2\right],\cdots, R\left[\mathbf{x},{x}_{n_0}\right]\right)}^{\mathrm{T}} $$
(31)
$$ \widehat{\beta}={\left({\mathbf{f}}^{\mathrm{T}}{R}^{-1}\mathbf{f}\right)}^{-1}{\mathbf{f}}^{\mathrm{T}}{R}^{-1}\mathbf{y} $$
(32)

The estimate of the variance \( {\widehat{\sigma}}^2 \) in Eq. (28) is given by

$$ {\widehat{\sigma}}^2={\left(\mathbf{y}-\mathbf{f}\widehat{\beta}\right)}^{\mathrm{T}}{\mathbf{R}}^{-1}\left(\mathbf{y}-\mathbf{f}\widehat{\beta}\right)/{n}_0 $$
(33)

The maximum likelihood estimate of θk included in Eq. (29) can be obtained by maximizing the following expression:

$$ \underset{\theta >0}{\max}\kern0.5em \frac{-\left[{n}_0\ln \left({\widehat{\sigma}}^2\right)+\ln \left|\mathbf{R}\right|\right]}{2} $$
(34)

More information on the calculation method can be found in the relevant literature (Sakata et al. 2003; Simpson et al. 2001). When constructing Kriging models, the selection of sampling points can be very important. Design of experiments (DoE) methods can be used for the effective selection of a minimum number of sampling points from the design space. Many different DoE methods, such as the factorial, Koshal, composite, Latin hypercube and D-optimal design methods, have been proposed (Myers and Montgomery 1995). In this paper, the Latin hypercube sampling (LHS) scheme (Morris and Mitchell 1995) is adopted to generate sample points from non-deterministic design spaces. LHS performs well in generating a representative distribution of sample points from a design space with uncertain variables (Jiang et al. 2008; Zhao et al. 2010; Zhuang and Pan 2012), and it is used here for the initial sampling to construct the Kriging models.
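To make Eqs. (29)–(32) concrete, the following is a minimal constant-trend Kriging sketch with a fixed correlation parameter; the MLE step of Eq. (34) is omitted, and the 1-D training data are hypothetical:

```python
import numpy as np

def corr(A, B, theta):
    # Gaussian correlation, Eq. (29): R(a, b) = exp(-sum_k theta_k |a_k - b_k|^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

class OrdinaryKriging:
    """Constant-trend Kriging with a fixed theta (no MLE of Eq. (34))."""
    def __init__(self, X, y, theta):
        self.X, self.theta = X, np.asarray(theta, dtype=float)
        R = corr(X, X, self.theta) + 1e-10 * np.eye(len(X))  # small nugget
        self.Ri = np.linalg.inv(R)
        f = np.ones(len(X))
        self.beta = (f @ self.Ri @ y) / (f @ self.Ri @ f)    # Eq. (32)
        self.resid = y - f * self.beta

    def predict(self, x):
        r = corr(np.atleast_2d(x), self.X, self.theta)[0]    # Eq. (31)
        return self.beta + r @ self.Ri @ self.resid          # Eq. (30)

# Fit to a hypothetical 1-D toy function at a few sample points.
Xs = np.array([[0.0], [0.3], [0.6], [1.0]])
ys = np.sin(2.0 * np.pi * Xs[:, 0])
model = OrdinaryKriging(Xs, ys, theta=[10.0])
```

Because Kriging is an interpolating metamodel (up to the tiny nugget added for numerical stability), the predictor reproduces the training responses at the sample points.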

4.2 Sequential improvement sampling

Since it was proposed by Jones (1998), efficient global optimization (EGO) has been widely employed in various areas. EGO first constructs a Kriging model based on a small initial set of samples and then adds new samples that maximize the expected improvement (EI) function. The improvement of a sample X with respect to the global minimum of G(X) is given as

$$ I\left(\mathbf{X}\right)=\max \left({G}_{\mathrm{min}}-G\left(\mathbf{X}\right),0\right) $$
(35)

where Gmin is the best solution obtained from all sampled training points. Suppose there are currently n training points; then Gmin equals min{G(X(1)), ..., G(X(n))}. G(X) follows a normal distribution \( N\left(\tilde{G}\left(\mathbf{X}\right),{s}^2\left(\mathbf{X}\right)\right) \), where \( \tilde{G}\left(\mathbf{X}\right) \) and s(X) represent the Kriging predictor and its standard error, respectively. The expected improvement function is

$$ E\left[I\left(\mathbf{X}\right)\right]=\left({G}_{\mathrm{min}}-\tilde{G}\right)\varPhi \left(\frac{G_{\mathrm{min}}-\tilde{G}}{s}\right)+ s\phi \left(\frac{G_{\mathrm{min}}-\tilde{G}}{s}\right) $$
(36)

where Φ(⋅) and ϕ(⋅) are the cumulative distribution function and probability density function of a standard Gaussian variable, respectively. In a previous study (Zhuang and Pan 2012), to compare the EI across different constraints, an expected relative improvement (ERI) criterion was expressed as follows:

$$ E\left[ RI\left(\mathbf{X}\right)\right]=E\left[\frac{I\left(\mathbf{X}\right)}{G_a}\right]=\frac{1}{G_a}\left[\left({G}_{\mathrm{min}}-\tilde{G}\right)\varPhi \left(\frac{G_{\mathrm{min}}-\tilde{G}}{s}\right)+ s\phi \left(\frac{G_{\mathrm{min}}-\tilde{G}}{s}\right)\right] $$
(37)
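Eqs. (36)–(37) can be implemented directly from the Kriging predictor and its standard error; in the sketch below, the scaling value Ga is treated as a user-supplied reference magnitude:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(g_min, g_pred, s):
    """EI of Eq. (36); returns 0 wherever the Kriging standard error s is 0."""
    g_pred = np.asarray(g_pred, dtype=float)
    s = np.asarray(s, dtype=float)
    safe_s = np.where(s > 0.0, s, 1.0)          # avoid division by zero
    z = (g_min - g_pred) / safe_s
    ei = (g_min - g_pred) * norm.cdf(z) + safe_s * norm.pdf(z)
    return np.where(s > 0.0, ei, 0.0)

def expected_relative_improvement(g_min, g_pred, s, g_a):
    """ERI of Eq. (37): EI scaled by the reference magnitude Ga."""
    return expected_improvement(g_min, g_pred, s) / g_a
```

At a point where the prediction equals the current best value, the EI reduces to s·ϕ(0) ≈ 0.3989·s, and it vanishes wherever the metamodel is exact (s = 0), so sampling is driven toward regions that are both promising and uncertain.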

Therefore, the sequence sampling criterion for RBDO optimization with hybrid models can be expressed as

$$ {\displaystyle \begin{array}{l}E\left[ RI\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)\right]=E\left[\frac{I\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)}{G_a\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)}\right]\\ {}\kern5.25em =\frac{1}{G_a\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)}\Big[\left({G}_{\mathrm{min}}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)-\tilde{G}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)\right)\varPhi \left(\frac{G_{\mathrm{min}}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)-\tilde{G}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)}{s}\right)\\ {}\kern6.25em + s\phi \left(\frac{G_{\mathrm{min}}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)-\tilde{G}\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)}{s}\right)\Big]\end{array}} $$
(38)

where Y∗ is the WCP, which is converted from q-space through (12) and (24).

Thus, a sampling criterion is used to select a new sample:

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{X}}{\max}\kern0.75em ERI\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right)\\ {}\;\mathrm{s}.\mathrm{t}.\left\Vert \mathbf{u}\right\Vert ={\beta}_t\end{array}} $$
(39)

By maximizing the ERI, a new training point can be found:

$$ \left({\mathbf{X}}^{\left(n+1\right)},{\mathbf{Y}}^{\ast}\right)=\arg\ \max\ ERI\left(\mathbf{X},{\mathbf{Y}}^{\ast}\right) $$
(40)

A flowchart of the sequential sampling process is shown in Fig. 3, and the procedures are as follows:

  1. (1)

    Initialize the sampling space Eval, which is a combination of the current design and uncertainty spaces. In other words, (d, X, Y) corresponds to a sample point forming the sampling space for the simulation, and LHD is used to sample points in Eval. Set the number of samples as Ns and the iterative step as k = 0. Use expensive high fidelity computational models to calculate the objective and constraint functions for all sample points.

  (2)

    The responses of the objective and constraint functions are computed at these points, and Kriging models \( \tilde{f}\left(\mathbf{d},\mathbf{X},\mathbf{Y}\right) \) and \( \tilde{G}\left(\mathbf{d},\mathbf{X},\mathbf{Y}\right) \) are constructed, where d, X, and Y serve as the input variables of the Kriging models.

  (3)

    A deterministic optimization is performed through (26), yielding the design variables. In the first optimization cycle, set \( {\mathbf{u}}_{MPTP,j}^{R,(0)}=0,{\mathbf{q}}^{\ast, (0)}=0 \).

  (4)

    Given d, the original X and Y spaces are transformed into the standard normalized u and q spaces, respectively. The reliability analysis is performed according to Eq. (20) to locate the MPTP; the variables are then transformed back to the X and Y spaces and evaluated through the computer experiment, giving the current minimum Gmin. If there are multiple constraints, each constraint produces its own Gj,min (j = 1, 2, ..., ng), which is added to update the corresponding Kriging model.

  (5)

    The point that maximizes the ERI value of the \( \tilde{G}\left(\mathbf{X},\mathbf{Y}\right) \) function is chosen as the next sampling point and is added to the original Eval to update the Kriging model \( \tilde{G} \). Repeat this step until the maximum ERI of the constraint is less than ε, where ε is a small positive number used as the convergence criterion for the maximization of ERI.

Fig. 3
figure 3

Sequential sampling flowchart

If there is only one constraint, the point with the maximum ERI is added to Eval; if there are multiple constraints, the point with the largest ERI over all constraints is added to Eval, and this sample is used to update only the surrogate of the constraint with the largest ERI value.

  (6)

    The MPTP is found based on the updated Kriging model. If all the \( {\tilde{g}}_{i,\mathrm{MPTP}}^{(k)}\ge 0 \), the design variables d can be obtained. If any \( {\tilde{g}}_{i,\mathrm{MPTP}}^{(k)}<0 \), proceed to step (3) and perform another deterministic optimization cycle.

In Step (1), a basic requirement for the initial design is that the entire input space be sampled uniformly so that all regions have the same opportunity to be searched. In addition, the number of points in the initial design depends on the dimension of the input space: the higher the dimension of the input space, the more sampling points the initial design requires.
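The uniform-coverage requirement of Step (1) is exactly what a Latin hypercube design provides: one sample per stratum in every dimension. A minimal pure-Python sketch (the paper's actual LHD routine is not specified; this only illustrates the stratification property):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Simple Latin hypercube design on [0, 1]^n_dims: each dimension is
    split into n_samples equal strata, the strata are randomly permuted
    per dimension, and one point is drawn uniformly inside each stratum."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for j in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)  # random stratum assignment for dimension j
        for i in range(n_samples):
            samples[i][j] = (perm[i] + rng.random()) / n_samples
    return samples
```

Each resulting column then covers every stratum exactly once, which is the "same opportunity to be searched" property discussed above; the points would afterwards be scaled to the actual bounds of (d, X, Y).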

After the i-th original ellipsoid model in Eq. (13) is normalized into a unit model, the spherical coordinate transformation gives

$$ {\displaystyle \begin{array}{l}{q}_{i,1}={\rho}_i\cos {\alpha}_{i1}\\ {}{q}_{i,2}={\rho}_i\sin {\alpha}_{i1}\cos {\alpha}_{i2}\\ {}\vdots \\ {}{q}_{i,{n}_i-1}={\rho}_i\sin {\alpha}_{i1}\sin {\alpha}_{i2}\sin {\alpha}_{i3}\cdots \sin {\alpha}_{i,{n}_i-2}\cos {\alpha}_{i,{n}_i-1}\\ {}{q}_{i,{n}_i}={\rho}_i\sin {\alpha}_{i1}\sin {\alpha}_{i2}\sin {\alpha}_{i3}\cdots \sin {\alpha}_{i,{n}_i-2}\sin {\alpha}_{i,{n}_i-1}\end{array}} $$
(41)

where \( {q}_{i,{n}_i} \) is the \( {n}_i \)-th component of qi, the uncertain space of Eq. (13) becomes

$$ {\varDelta}_i=\left\{\left({\rho}_i,{\alpha}_{ij}\right)|{\rho}_i\in \left[0,1\right],{\alpha}_{ij}\in \left[0,2\pi \right]\right\},\left(j=1,2,...,{n}_i-1\right) $$
(42)

where ρi is the radial coordinate and αi,j is the j-th angular coordinate of the i-th ellipsoid.

For the convex variables, samples are drawn with a uniform distribution in the space defined by Eq. (42) and mapped into the original space of Eq. (12) via the transformation in Eq. (41).
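The mapping of Eq. (41) and the sampling space of Eq. (42) can be sketched as follows. This is an illustration only: uniform draws are taken in the (ρ, α) space of Eq. (42) and mapped into the unit ball; the further transform back to the physical ellipsoid of Eq. (12) (via the characteristic matrix) is omitted.

```python
import math
import random

def sphere_to_q(rho, alphas):
    """Eq. (41): map spherical coordinates (rho, alpha_1..alpha_{n-1})
    to a point q inside the unit n-ball. The Euclidean norm of the
    result equals rho by construction."""
    q, prefix = [], rho
    for a in alphas:
        q.append(prefix * math.cos(a))
        prefix *= math.sin(a)
    q.append(prefix)
    return q

def sample_convex(n_dims, rng=random.Random(0)):
    """Draw (rho, alphas) uniformly from the space of Eq. (42) and map
    the draw into the unit ball via Eq. (41)."""
    rho = rng.random()                                   # rho in [0, 1]
    alphas = [2.0 * math.pi * rng.random() for _ in range(n_dims - 1)]
    return sphere_to_q(rho, alphas)
```

Note that uniformity in (ρ, α) is not the same as volumetric uniformity in the ball; the sketch follows the sampling space as stated in Eq. (42).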

In Step (5), the optimal solution of Eq. (39) is the point on the circle centred at the origin of the u space with radius βt. This point is considered to bring the maximum improvement to the estimation of the approximation model \( \tilde{G} \) subject to the MPTP and WCP constraints. The corresponding point in the X space is depicted in Fig. 4. The limit state \( \tilde{G} \) = 0 forms a critical failure region whose two bounds can be represented as \( \underset{\mathbf{Y}}{\min }G\left(\mathbf{X},\mathbf{Y}\right)=0 \) and \( \underset{\mathbf{Y}}{\max }G\left(\mathbf{X},\mathbf{Y}\right)=0 \), respectively. The plus signs represent the initial sample points, and point (\( {\mu}_{X_1} \), \( {\mu}_{X_2} \)) is the optimal solution obtained via deterministic optimization in Step (3).

Fig. 4
figure 4

Max ERI sample point in design space. The initial samples are marked by “+”, additional samples are marked by “×”, and “□” marks MPTP′, the latest additional sample selected by the ERI criterion

As the current MPTP and WCP may not be sufficiently accurate due to the prediction error of the metamodel \( \tilde{G}\left(\mathbf{X},\mathbf{Y}\right) \), the ERI criterion is employed to find a new sampling point (denoted by the square mark) on the ellipse. The Kriging metamodel is then reconstructed, and the prediction error in the neighbourhood of the MPTP and WCP decreases. It is not necessary to sample uniformly along the two bounds of the limit-state surface (i.e. \( \underset{\mathbf{Y}}{\min }G\left(\mathbf{X},\mathbf{Y}\right)=0 \) and \( \underset{\mathbf{Y}}{\max }G\left(\mathbf{X},\mathbf{Y}\right)=0 \)). The sampling criterion helps identify the regions that should be densified, such as the boundary of the critical failure region (i.e. \( \underset{\mathbf{Y}}{\min }G\left(\mathbf{X},\mathbf{Y}\right)=0 \)), which is of greatest interest, and the neighbourhood of the MPTP. The precision of the approximation of the bound \( \underset{\mathbf{Y}}{\min}\tilde{G}\left(\mathbf{X},\mathbf{Y}\right)=0 \) is gradually improved at each step. The major focus is to improve the precision in the local critical regions rather than over the whole hybrid space; if the samples were instead distributed uniformly over the entire hybrid space by the LHD method, the local accuracy of the Kriging model would be low.

5 Numerical examples

This section presents three examples to illustrate the capabilities of the proposed method. The results of the nested optimization method, which calls the true probabilistic constraint functions, are taken as the reference values. The results of the Kriging-based methods are compared with those of the analytical method to evaluate the errors. All program codes were implemented in MATLAB, and the DACE toolbox developed by Lophaven et al. (2002) was used in the numerical examples and the engineering application.

5.1 Numerical example 1

Two random design variables, two non-probabilistic parameters and two probabilistic constraints are considered in numerical example 1:

$$ {\displaystyle \begin{array}{cc}\underset{\mathbf{d}}{\min }& f={\left({d}_1+3\right)}^2+{\left({d}_2+3\right)}^2\\ {}\mathrm{s}.\mathrm{t}.& {\beta}^L\left({G}_1\left(\mathbf{x},\mathbf{y}\right)\ge 0\right)\ge {\beta}_{t,1}\\ {}& {\beta}^L\left({G}_2\left(\mathbf{x},\mathbf{y}\right)\ge 0\right)\ge {\beta}_{t,2}\\ {}& 1\le {d}_1\le 10,\kern0.75em 0.01\le {d}_2\le 10\end{array}} $$
(43)
$$ {\displaystyle \begin{array}{l}\mathrm{where}\\ {}\kern1.75em {G}_1\left(\mathrm{x},\mathrm{y}\right)={x}_1\left({x}_2+{y}_1\right)-{y}_2\\ {}\kern1.75em {G}_2\left(\mathrm{x},\mathrm{y}\right)={x}_1-{\left({x}_2+{y}_1\right)}^2{y}_2\end{array}} $$

where the design variables d = {d1, d2}T represent the mean values of x1 and x2, respectively. The coefficients of variation (COVs) of x1 and x2 are both 0.03. The variation ranges of y1 and y2 are given by \( \mathbf{y}\in \left\{\mathbf{y}|{\left(\mathbf{y}-\widehat{\mathbf{y}}\right)}^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{y}-\widehat{\mathbf{y}}\right)\le {0.5}^2\right\} \), where the nominal values are \( \widehat{\mathbf{y}}={\left\{{\widehat{y}}_1,{\widehat{y}}_2\right\}}^T={\left\{0.25,2\right\}}^T \) and the characteristic matrix is \( {\mathbf{W}}_1=\left[\begin{array}{cc}4& 0\\ {}0& 1\end{array}\right] \). The target reliability indices are βt,1 = βt,2 = 3.0; that is, the structure’s probability of failure must be less than 0.135%.
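For reference, the two limit-state functions of Eq. (43) are cheap to evaluate directly. A sketch follows; the evaluation point (3, 3) is purely illustrative, not the reported optimum.

```python
def g1(x1, x2, y1, y2):
    """G1 = x1 (x2 + y1) - y2, from Eq. (43)."""
    return x1 * (x2 + y1) - y2

def g2(x1, x2, y1, y2):
    """G2 = x1 - (x2 + y1)^2 y2, from Eq. (43)."""
    return x1 - (x2 + y1) ** 2 * y2

# Nominal values of the interval variables from the ellipsoid model
Y_HAT = (0.25, 2.0)
```

Evaluating at the illustrative point (3, 3) with the nominal interval values shows that G2 is violated there while G1 is satisfied, which is consistent with the optimum lying in the narrow band between the two limit-state surfaces.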

As shown in Fig. 5, the optimal solution lies in the narrow region between the limit-state surfaces G1 = 0 and G2 = 0. The initial Latin hypercube sampling, with 20 initial sample points, is shown in Fig. 6. Kriging models \( {\tilde{G}}_1 \) and \( {\tilde{G}}_2 \) are created from these sample points, which are evenly distributed throughout the sampled space. The limit-state surface \( {\tilde{G}}_2=0 \) is inexact around the region of the optimal design point. The results of the sequential sampling method are shown in Fig. 7: additional sample points appear in both the feasible and infeasible regions and cluster around the optimal solution of d1 and d2.

Fig. 5
figure 5

True constraint boundaries for example 1

Fig. 6
figure 6

Initial Latin hypercube sampling

Fig. 7
figure 7

Sequential sampling

The iteration history is shown in Table 1. Initially, the optimal solution based on the approximate model is close to the solution of the real function, but the approximate response \( {\tilde{G}}_2 \) differs substantially from the accurate response G2 at the same design variables, which indicates that the initial approximation model is not sufficiently accurate. The relative error between the approximate and accurate responses decreases with each iteration, and the boundaries of constraints 1 and 2 around the optimal solution are well fitted after several sequential sampling steps.

Table 1 Iteration history of the sequential ERI sampling strategy

The summary of the optimization results for example 1 is shown in Table 2. The nested optimization involves 558 function calls, whereas the present method reduces the total number of function calls to only 54, far fewer than required by the conventional nested optimization for test function 1.

Table 2 The summary results of the optimization for example 1

We repeated the Kriging-based RBDO 10 times and summarized the relative errors. As can be clearly seen from Fig. 8, all relative errors are very small, revealing that the effect of this randomness on the optimization results is negligible.

Fig. 8
figure 8

The Effect of the randomness of the Latin hypercube sampling technique for generating initial samples on the optimization results for Numerical Example 1

5.2 Numerical example 2

Test function 2 includes two random design variables, two non-probabilistic parameters and three probabilistic constraints:

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{d}}{\min}\kern1em f\left(\mathbf{d}\right)={d}_1+{d}_2\\ {}\mathrm{s}.\mathrm{t}.\kern1em {\beta}^L\left({G}_j\left(\mathbf{x},\mathbf{y}\right)\ge 0\right)\ge {\beta}_{t,j}\kern0.5em \left(j=1,2,3\right)\\ {}\kern2.25em 0\le {d}_1\le 10,0\le {d}_2\le 10\\ {}\mathrm{where}\\ {}\kern1.25em {G}_1\left(\mathbf{x},\mathbf{y}\right)=-1+{x}_1^2{x}_2/\left(2{y}_1+{y}_2\right)\\ {}\kern1.25em {G}_2\left(\mathbf{x},\mathbf{y}\right)=-2{y}_1{y}_2+4{\left({x}_1+{x}_2-5\right)}^2+{\left({x}_1-{x}_2-12\right)}^2\\ {}\kern1.25em {G}_3\left(\mathbf{x},\mathbf{y}\right)=-1+\left(50+3{y}_2\right)/\left({x}_1^2+8{x}_2+{y}_1\right)\end{array}} $$
(44)

where the design variables d = {d1, d2}T represent the mean values of x1 and x2, respectively. The standard deviations of x1 and x2 are σ1 = σ2 = 0.3. The variation ranges of y1 and y2 are given by \( \mathbf{y}\in \left\{\mathbf{y}|{\left(\mathbf{y}-\widehat{\mathbf{y}}\right)}^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{y}-\widehat{\mathbf{y}}\right)\le 1\right\} \), where the nominal values are \( \widehat{\mathbf{y}}={\left\{{\widehat{y}}_1,{\widehat{y}}_2\right\}}^T={\left\{5,10\right\}}^T \) and the characteristic matrix is \( {\mathbf{W}}_1=\left[\begin{array}{cc}4& 0\\ {}0& 1\end{array}\right] \). The target reliability index is βt,j = 3.0 (j = 1, 2, 3).
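The three limit-state functions of Eq. (44) can again be coded directly; the sketch below simply checks that, at the reported optimum (3.5878, 3.1678) and the nominal interval values (5, 10), all three constraints are satisfied at the nominal point.

```python
def g1(x1, x2, y1, y2):
    """G1 from Eq. (44)."""
    return -1.0 + x1 ** 2 * x2 / (2.0 * y1 + y2)

def g2(x1, x2, y1, y2):
    """G2 from Eq. (44)."""
    return (-2.0 * y1 * y2 + 4.0 * (x1 + x2 - 5.0) ** 2
            + (x1 - x2 - 12.0) ** 2)

def g3(x1, x2, y1, y2):
    """G3 from Eq. (44); highly nonlinear in x."""
    return -1.0 + (50.0 + 3.0 * y2) / (x1 ** 2 + 8.0 * x2 + y1)
```

Feasibility at the nominal interval values is a weaker statement than the reliability constraint βL ≥ 3.0, which must hold over the whole ellipsoid and the random variation of x; the check only confirms the functions are transcribed consistently.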

The objective function decreases toward the lower left of the design region. The third constraint G3 = 0 is highly nonlinear. The optimal design point ‘*’ for RBDO is located at (3.5878, 3.1678), as shown in Fig. 9. The area enclosed by the three constraint functions is the feasible region, and the circle surrounding the optimal design point is the βt-circle.

Fig. 9
figure 9

True limit state functions for example 2

The initial 30 LHS sample points are denoted by ‘+’ (plus) signs in Fig. 10, and the additional 52 sample points selected by the sequential ERI strategy for constraints G1, G2 and G3 are represented by ‘×’ signs in Fig. 11. As shown in Fig. 10, the approximate limit-state surfaces of \( {\tilde{G}}_1 \)=0 and \( {\tilde{G}}_3 \)=0 are far from the accurate limit-state surfaces initially. As shown in Fig. 11, additional samples are positioned in the feasible and infeasible regions and cluster around the optimal design point. Consequently, the approximate limit-state surfaces \( {\tilde{G}}_1 \)=0, \( {\tilde{G}}_2 \)=0, and \( {\tilde{G}}_3 \)=0 are more accurate in the area surrounding the optimal solution and limit state function boundaries.

Fig. 10
figure 10

Initial Latin hypercube sampling

Fig. 11
figure 11

Sequential sampling

The iteration history of the present method is shown in Table 3. Initially, the optimal solution based on the approximate model agrees with the solution of the real function, but the approximate responses \( {\tilde{G}}_1 \) and \( {\tilde{G}}_3 \) are far from the accurate responses G1 and G3 at the same design variables. The difference between the approximate and accurate responses decreases as the iterations proceed. This result shows that the initial approximation model has globally low accuracy, but the approximations in the boundary regions achieve high accuracy after several sequential sampling steps.

Table 3 Iteration history of the sequential ERI sampling strategy

As shown in Table 4, in terms of efficiency, the present method requires 12 iterations and a total of 82 sample evaluations, whereas the nested optimization method requires 264 evaluations. Thus, for this problem, our method again exhibits high precision and efficiency.

Table 4 Summary of optimization results for example 2

We repeated the Kriging-based RBDO 10 times and summarized the various results of relative error. Figure 12 clearly shows that all relative errors are small, revealing that the effect of randomness on the optimization results is negligible.

Fig. 12
figure 12

The Effect of the randomness of the Latin hypercube sampling technique for generating initial samples on the optimization results for Numerical Example 2

5.3 Numerical example 3

This example is modified from Refs. (Hu and Du 2015; Jiang et al. 2011b; Wang et al. 2016). As shown in Fig. 13, a ten-bar truss is subjected to forces F1, F2 and F3. The vertical displacement at joint 2, dy, must be less than the allowable value dymax = 0.004 m. The Young’s modulus E of the material is treated as a normal random variable, whereas F1, F2 and F3 are assumed to be interval variables. The details of the uncertainty properties are listed in Table 5.

Fig. 13
figure 13

Ten-bar truss structure

Table 5 Uncertainty properties for ten-truss structure

In this problem, the truss members’ cross-sectional areas Ai (i = 1, 2, ..., 10) are taken as the design variables, and the design objective is to minimize the total cross-sectional area of the structure. Under the above displacement constraint, the target reliability index βt is 2.5; that is, the target failure probability is 6.21 × 10−3. This reliability-based design optimization problem can be expressed as

$$ {\displaystyle \begin{array}{l}\underset{\mathbf{d}}{\min}\kern0.5em f\left(\mathbf{d}\right)=\sum \limits_{i=1}^{10}{d}_i\\ {}s.t.\kern0.75em {\beta}_{d_y}\left(A,{F}_1,{F}_2,{F}_3,E\right)\ge {\beta}_t\\ {}\kern1.75em 0.0001{\mathrm{m}}^2\le {d}_i\le 0.00{2\mathrm{m}}^2\kern0.5em \left(i=1,2,...,10\right)\end{array}} $$
(45)

in which the displacement dy is computed as described previously (Au et al. 2003)

$$ {d}_y=\left(\sum \limits_{i=1}^6\frac{N_i^0{N}_i}{A_i}+\sqrt{2}\sum \limits_{i=7}^{10}\frac{N_i^0{N}_i}{A_i}\right)\frac{L}{E} $$

where

$$ {\displaystyle \begin{array}{l}{N}_1={F}_2-\frac{\sqrt{2}}{2}{N}_8,{N}_2=-\frac{\sqrt{2}}{2}{N}_{10}\\ {}{N}_3=-{F}_1-2{F}_2+{F}_3-\frac{\sqrt{2}}{2}{N}_8\\ {}{N}_4=-{F}_2+{F}_3-\frac{\sqrt{2}}{2}{N}_{10}\\ {}{N}_5=-{F}_2-\frac{\sqrt{2}}{2}{N}_8-\frac{\sqrt{2}}{2}{N}_{10},{N}_6=-\frac{\sqrt{2}}{2}{N}_{10}\\ {}{N}_7=\sqrt{2}\left({F}_1+{F}_2\right)+{N}_8,{N}_8=\frac{a_{22}{b}_1-{a}_{12}{b}_2}{a_{11}{a}_{22}-{a}_{12}{a}_{21}}\\ {}{N}_9=\sqrt{2}{F}_2+{N}_{10},{N}_{10}=\frac{a_{11}{b}_2-{a}_{21}{b}_1}{a_{11}{a}_{22}-{a}_{12}{a}_{21}}\end{array}} $$
(46)

where aij (i, j = 1, 2) and bi (i = 1, 2) are defined as

$$ {\displaystyle \begin{array}{l}{a}_{11}=\left(\frac{1}{A_1}+\frac{1}{A_3}+\frac{1}{A_5}+\frac{2\sqrt{2}}{A_7}+\frac{2\sqrt{2}}{A_8}\right)\frac{L}{2E},{a}_{12}={a}_{21}=\frac{L}{2{A}_5E}\\ {}{a}_{22}=\left(\frac{1}{A_2}+\frac{1}{A_4}+\frac{1}{A_6}+\frac{2\sqrt{2}}{A_9}+\frac{2\sqrt{2}}{A_{10}}\right)\frac{L}{2E}\\ {}{b}_1=\left(\frac{F_2}{A_1}-\frac{F_1+2{F}_2-{F}_3}{A_3}-\frac{F_2}{A_5}-\frac{2\sqrt{2}\left({F}_1+{F}_2\right)}{A_7}\right)\frac{\sqrt{2}L}{2E}\\ {}{b}_2=\left(\frac{\sqrt{2}\left({F}_3-{F}_2\right)}{A_4}-\frac{\sqrt{2}{F}_2}{A_5}-\frac{4{F}_2}{A_9}\right)\frac{L}{2E}\end{array}} $$
(47)
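In Eq. (46), the redundant forces N8 and N10 are simply the Cramer’s-rule solution of the 2 × 2 linear system a·N = b assembled from the coefficients of Eq. (47). A minimal sketch of that solve (illustrative coefficient values in the usage check are arbitrary, not taken from Table 5):

```python
def redundant_forces(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [N8, N10] = [b1, b2] by
    Cramer's rule, exactly as written in Eq. (46)."""
    det = a11 * a22 - a12 * a21
    if det == 0.0:
        raise ValueError("singular flexibility matrix")
    n8 = (a22 * b1 - a12 * b2) / det
    n10 = (a11 * b2 - a21 * b1) / det
    return n8, n10
```

With N8 and N10 in hand, the remaining member forces N1–N7 and N9 follow directly from the expressions in Eq. (46), and the displacement dy is then assembled from the unit-load forces \( N_i^0 \).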

The results obtained from the two methods are given in Table 6. This example involves ten random design variables, one random parameter and three non-probabilistic parameters, which increases the number of calculations compared with the previous examples. The total number of evaluations of the actual constraint function is 544, comprising 350 initial sample points and 194 additional samples, whereas nested optimization with the true function requires 3706 evaluations. Thus, the proposed method is highly efficient compared with conventional nested optimization. In this high-dimensional hybrid space, high-accuracy approximation models are very difficult to construct from limited, uniformly distributed samples; nevertheless, accurate results can be obtained by supplementing the approach with the sequential sampling method. This example shows that the proposed method is highly efficient and accurate for reliability-based optimization problems with many variables.

Table 6 Design optimization results

As shown in Fig. 14, the randomness of the Latin hypercube sampling technique used to generate the initial samples has some effect on the optimization results; nevertheless, the relative errors are generally smaller than 0.7%, so it is safe to say that the accuracy of the Kriging approximation model is guaranteed by the sequential sampling method.

Fig. 14
figure 14

The Effect of the randomness of the Latin hypercube sampling technique for generating initial samples on the optimization results for Numerical Example 3

6 Engineering application

This section focuses on a family of practical engineering applications, namely, vehicle crashworthiness designs. Thin-walled beam structures are typically used to absorb energy during vehicle frontal and rear collisions. Therefore, it is important to investigate the design optimization of thin-walled beam structures with regard to vehicle crashworthiness (Xiang et al. 2006; Zhao et al. 2010).

The thin-walled beam design problem described herein is taken from numerical studies in the literature (Xiang et al. 2006; Zhao et al. 2010). Here, the crashworthiness of the closed-hat beam shown in Fig. 15 is investigated. The closed-hat beam impacts a rigid wall with an initial velocity of 13.8 m/s. The beam is formed by a hat section and a web plate connected by uniformly distributed spot-welding points along the two rims of the hat section.

Fig. 15
figure 15

Closed-hat beam impacting a rigid wall and its cross section (mm)

The FE model comprises 5760 shell elements. A 300-kg mass is attached to the free end of the beam during the crash analysis to supply crushing energy. The impact duration time is 30 ms. The FEM simulation is performed using the explicit non-linear finite element software package LS-DYNA. Typical deformation behaviours of the numerical model are shown in Fig. 16.

Fig. 16
figure 16

Typical deformation behaviour of the finite element model

Based on previous research (Xiang et al. 2006; Zhao et al. 2010), the closed-hat beam will be optimized to maximize the absorbed energy when subjected to an average normal impact force on the rigid wall. The plate thickness t, height h, width w, and spacing d between each spot-welding point have prominent effects on the crashworthiness of a closed-hat beam, and hence these four parameters are used as design variables in this application. Due to manufacturing variability and measurement errors, Young’s modulus and yield stress values are treated as bounded uncertainties described by a two-dimensional ellipsoid model. The plate thickness t, height h, width w, and spacing d between each spot-welding point are assumed to be normally distributed random variables. The uncertainty properties are given in Table 7.

Table 7 Uncertainty properties for the crashworthiness problem

Thus, the hybrid RBDO problem can be formulated as

$$ {\displaystyle \begin{array}{l}\underset{d_1,{d}_2,{d}_3,{d}_4}{\min\ }\ f\left({d}_1,{d}_2,{d}_3,{d}_4,\widehat{E},\widehat{\sigma}\right)=-E\\ {}\kern0.75em s.t.\kern0.75em {\beta}_F\left(F\left(t,d,w,h,E,\sigma \right)\le 120\ \mathrm{kN}\right)\ge {\beta}_t\\ {}\kern2.5em 0.8\ \mathrm{mm}\le {d}_1\le \kern0.5em 2.5\ \mathrm{mm}\\ {}\kern2.5em 10\ \mathrm{mm}\le {d}_2\le 40\ \mathrm{mm}\\ {}\kern2.5em 80\ \mathrm{mm}\le {d}_3\le 120\ \mathrm{mm}\\ {}\kern2.5em 80\ \mathrm{mm}\le {d}_4\le 120\ \mathrm{mm}\end{array}} $$
(48)

where the objective function f and constraint F represent the absorbed energy of the closed-hat beam and the axial impact force, respectively. Both parameters are obtained via FEM simulation. The target reliability index βt is set to 3.0 for the average force constraints. The starting point of the optimization is set at (\( {d}_1^0,{d}_2^0,{d}_3^0,{d}_4^0 \))T = (1.5 mm, 30 mm, 70 mm, 70 mm)T.

Initially, 50 sampling points in the design and uncertainty spaces are generated through LHS to construct Kriging model approximations of the objective and constraint functions. The stopping criterion of the sequential ERI sampling strategy is that the maximum ERI falls below 0.005. The results are given in Table 8. The optimal design vector is found to be (0.81 mm, 19.07 mm, 92.99 mm, 101.94 mm)T, with 62 additional sample points selected by sequential ERI and 112 FEM evaluations.

Table 8 Optimization results for the closed-hat beam

7 Conclusion

In this article, a new sequential sampling method is introduced and integrated into the SORA framework to solve Kriging-based RBDO problems with both stochastic and uncertain-but-bounded uncertainties. Based on the Kriging approximation model, an ERI criterion for selecting additional samples and improving RBDO solutions is proposed. The sampling strategy focuses on the neighbourhood of the current RBDO solution and maximally improves the MPTP and WCP estimation while ignoring regions of the constraint function that are unimportant to the RBDO solution.

Several examples were tested to verify the accuracy and efficiency of the proposed method. The optimization results of the sequential sampling method are almost the same as those of the nested optimization, confirming that the proposed method is very accurate. In addition, the sequential sampling method uses the smallest number of samples, indicating that the computational cost can be significantly reduced. The present method was also applied to a practical engineering problem, namely, thin-walled beam crashworthiness analysis; a stable solution was obtained using only a small number of iterations and FEM evaluations, illustrating the practicality of the proposed method. Additional work is required to improve the accuracy of RBDO methods. For example, different approximation models (e.g., response surface methods and radial basis functions) may be used to fit implicit constraint functions. In addition, the method may be extended to more complex problems, such as multi-objective reliability optimization.