1 Introduction

Uncertainties widely exist in practical engineering systems, arising from sources such as manufacturing tolerances, changing environmental and operating conditions, and incomplete information. Reliability-based design optimization (RBDO) has therefore attracted great attention in the field of design optimization under uncertainty. Generally, uncertainty can be classified into two types: aleatory and epistemic (Oberkampf et al. 2001; Kiureghian and Ditlevsen 2009; Zhang and Huang 2010). Aleatory uncertainty is objective and irreducible, while epistemic uncertainty is subjective and reducible. Probability theory is effective and usually employed to quantify aleatory uncertainty when sufficient data are available, and many well-developed probabilistic methods can be found in this research field (Du and Chen 2004; Shan and Wang 2008; Youn et al. 2008; Wang and Wang 2014; Meng et al. 2015a; Huang et al. 2016). For epistemic uncertainty, owing to insufficient data, incomplete information, and lack of knowledge about the design variables and parameters, it is inappropriate to simply treat it as aleatory uncertainty with assumed and possibly inaccurate probability distributions (Yao et al. 2013). This paper focuses on RBDO under epistemic uncertainty.

In RBDO, three essential tasks need to be handled: uncertainty quantification, reliability analysis, and optimization. Uncertainty quantification is the basis of reliability analysis and optimization. Many theories have been adopted to quantify epistemic uncertainty, such as evidence theory (Shafer 1976; Yang et al. 2016), fuzzy sets (Zadeh 1965; Huang and Zhang 2009; Li et al. 2015), convex models (Jiang et al. 2013a; Yang et al. 2015), and interval methods (Moore 1966; Wu et al. 2013; Li et al. 2013). Among these theories, evidence theory appears to be more flexible and more widely applicable than the others for quantifying epistemic uncertainty (Oberkampf and Helton 2002). It employs belief (Bel) and plausibility (Pl) to measure the likelihood of events; specifically, Bel and Pl are the lower and upper bounds of the probability of an event. When they are equal, evidence theory provides a description equivalent to that of conventional probability theory. In fact, probability theory can be considered a special case of evidence theory, and evidence theory becomes equivalent to fuzzy sets, convex models, and possibility theory under certain other special circumstances (Bae et al. 2004a).

Due to the above merits, evidence theory has been extensively used for epistemic uncertainty analysis. Oberkampf and Helton (2002) investigate reliability analysis using evidence theory for epistemic uncertainty. Bae et al. (2004a, 2004b) adopt evidence theory for epistemic uncertainty analysis of engineering structures and propose a computationally efficient method based on multi-point approximation (MPA) to alleviate the computational effort. Agarwal et al. (2004) utilize evidence theory to quantify epistemic uncertainty in multidisciplinary systems analysis. Du (2006, 2008) investigates a unified uncertainty analysis framework that handles both aleatory and epistemic uncertainties with probability and evidence theories. Additionally, if limit-state functions are implicit or computationally expensive, e.g., requiring computer-based simulations to obtain system responses, metamodels can be constructed to replace the actual limit-state functions in reliability analysis using evidence theory. Bai et al. (2012) utilize quadratic polynomials without cross terms, radial basis functions (RBF), and high-dimensional model representation combined with moving least squares to replace the implicit limit-state function in reliability analysis using evidence theory. Jiang et al. (2013b) adopt a uniformity approach to deal with the evidence variables, and then search for the most probable focal element to construct an approximate model of the limit-state function. Zhang et al. (2014) develop a sequential method to establish the response surface of the limit-state function based on control points in the uncertainty domain of evidence variables, which contribute significantly to the accuracy of the response surface. Zhang et al. (2015) propose first- and second-order approximate reliability analysis methods based on the most probable focal element using evidence theory. Xiao et al. (2015) propose an efficient method for reliability analysis under epistemic uncertainty based on evidence theory and support vector regression. Yang et al. (2017) perform structural reliability analysis under evidence theory using an active learning kriging model.

Meanwhile, evidence theory is also used in reliability optimization under epistemic uncertainty, and evidence-based design optimization (EBDO) has attracted increasing attention. Alyanak et al. (2008) propose an EBDO method based on the gradient information of the limit-state function. The method assumes that the gradient information can be determined by the finite difference method and realizes the optimization by calculating the sensitivity of the limit-state function. Salehghaffari et al. (2013) utilize a response surface model to decrease the computational cost in the EBDO of a circular tube structure under axial impact load. Srivastava et al. (2013) propose a bi-objective evolutionary algorithm based approach to perform EBDO, making the first attempt to combine evidence theory with an evolutionary algorithm. Huang et al. (2013) review general topics on possibility- and evidence-based reliability analysis and design optimization. Based on the first-order approximate reliability analysis method, Huang et al. (2017) propose a decoupling approach for EBDO.

Besides, Mourelatos and Zhou (2006) propose a computationally efficient two-stage method for EBDO under epistemic uncertainty. In the first stage of this method, RBDO is implemented to quickly identify the vicinity of the EBDO optimum under the assumption that the evidence variables and parameters obey normal distributions. In the second stage, a derivative-free optimizer, namely DIRECT, is used to calculate the EBDO optimum, starting from the RBDO optimum obtained in the first stage. If the basic probability assignments (BPAs) of the evidence variables and parameters are similar to the hypothetical normal distributions, an RBDO optimum in the vicinity of the final EBDO optimum can easily be obtained in the first stage, and the EBDO optimum can then be found with few iterations in the second stage. However, if the BPAs differ markedly from normal distributions, the RBDO optimum obtained under the first-stage assumption may be far away from the final EBDO optimum, and the second stage still requires many iterations to search for it. It is noteworthy that the Pl of constraint violation must be calculated every time the optimizer evaluates a constraint in the second stage, so the additional iterations incur considerable extra computational cost. In practical engineering problems, the BPAs are complex and diverse and often differ from normal distributions. Therefore, the assumption in the first stage may cause the two-stage EBDO method to lose its expected computational efficiency.

To ensure a design point in the vicinity of the final EBDO optimum for all types of BPAs, it is necessary to abandon the assumption that the unknown evidence variables and parameters obey normal distributions. Thus, the first stage of the EBDO method proposed by Mourelatos and Zhou (2006) is modified in this work. The equal areas method (Xiao et al. 2015) is utilized to transform evidence variables into random variables. Then, an RBDO problem is defined and solved by the well-known sequential optimization and reliability assessment (SORA) method in the first stage.

On the other hand, as mentioned above, the Pl of constraint violation must be calculated every time a constraint is evaluated in the second stage of the method in Mourelatos and Zhou (2006). This work therefore also focuses on the efficiency of the corresponding algorithm used in their method. An improved algorithm is presented, which calculates the Pl of constraint violation more efficiently in the second stage by continuously recording the minimum and maximum values of the limit-state functions.

Thus, based on the two-stage EBDO method in Mourelatos and Zhou (2006), the two modifications above are made in this paper, yielding an improved two-stage framework of EBDO. Numerical and engineering examples are provided to test the improved framework. Results show that the improved framework eliminates the limitation of the aforementioned hypothetical normal distribution and obtains a good EBDO optimum. Meanwhile, the improved framework is computationally more efficient, especially in terms of the reduced number of limit-state function evaluations when calculating the Pl of failure.

2 Common concepts in evidence theory and the EBDO model

In this section, evidence theory and reliability analysis using evidence theory are introduced in Sections 2.1 and 2.2, respectively. Section 2.3 describes the EBDO mathematical model.

2.1 Evidence theory

Evidence theory employs two measures, Bel and Pl, to quantify the uncertainty of a proposition. They can be regarded as the lower and upper bounds of a probability measure and together bracket the true probability instead of assigning a precise probability to the proposition (Yager et al. 1994). Some important concepts of evidence theory are given as follows:

(1) Frame of discernment (FD): It is a set of mutually exclusive elementary propositions. If an FD X = {x1, x2}, all the possible subsets of X form a power set Ω(X) = 2^X = {Φ, {x1}, {x2}, {x1, x2}}, where Φ denotes the empty set. When X has n elements, Ω(X) contains 2^n elements.

(2) Basic probability assignment (BPA): It is a mapping function m : Ω(X) → [0, 1]. Evidence theory assigns a belief mass to each element in Ω(X) by the BPA, which has to satisfy the following three conditions:

$$ m(A)\ge 0\kern2em \mathrm{for}\ \mathrm{any}\ A\in \Omega (X) $$
(1)
$$ m\left(\Phi \right)=0 $$
(2)
$$ \sum \limits_{A\in \Omega (X)}m(A)=1 $$
(3)

where A is a focal element in evidence theory and m(A) is the BPA of the focal element A.

(3) Belief (Bel) and plausibility (Pl): The Bel of an event is the sum of the belief masses of all the propositions that totally support the event; it is evaluated as the sum of the BPAs of all the subsets of the event. The Pl of an event is the sum of the belief masses of the propositions that agree with the event totally or partially; it is calculated as the sum of the BPAs of all the sets that intersect the event. For an event A, Bel(A) and Pl(A) are obtained as

$$ Bel(A)=\sum \limits_{B\subseteq A}m(B) $$
(4)
$$ Pl(A)=\sum \limits_{B\cap A\ne \Phi}m(B) $$
(5)
(4) Combination rules: If there are multiple evidence sources for evaluating Bel(A) and Pl(A), such as experts' opinions and experimental data, these sources should be combined by certain rules. Dempster's rule is the most popular rule for the combination (Sentz and Ferson 2002). For two events B and C with BPAs m1(B) and m2(C), the BPA of the combined evidence can be computed as

$$ m(A)=\left\{\begin{array}{ll}\frac{1}{1-k}\sum \limits_{B\cap C=A}{m}_1(B)\,{m}_2(C), & \mathrm{for}\ A\ne \Phi \\ 0, & \mathrm{for}\ A=\Phi \end{array}\right. $$
(6)
$$ k=\sum \limits_{B\cap C=\Phi}{m}_1(B)\,{m}_2(C) $$
(7)

where k quantifies the conflict between the two independent evidence sources. Other combination rules can be found in Yager et al. (1994) and Sentz and Ferson (2002). A small computational illustration of these concepts is given in the sketch below.
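To make these definitions concrete, the following minimal Python sketch (not part of the original formulation) evaluates Bel and Pl according to (4) and (5) and performs Dempster's combination according to (6) and (7) for a one-dimensional evidence variable whose focal elements are closed intervals. The interval bounds, BPA values, and function names are illustrative assumptions only.

```python
# Minimal illustration of Bel/Pl (eqs. 4-5) and Dempster's rule (eqs. 6-7)
# for interval-valued focal elements. All numerical values are hypothetical.

def bel(event, bpa):
    """Belief of an interval event [a, b]: sum of the BPAs of the focal
    elements fully contained in the event (eq. 4)."""
    a, b = event
    return sum(m for (lo, hi), m in bpa.items() if a <= lo and hi <= b)

def pl(event, bpa):
    """Plausibility of an interval event [a, b]: sum of the BPAs of the
    focal elements that intersect the event (eq. 5)."""
    a, b = event
    return sum(m for (lo, hi), m in bpa.items() if hi >= a and lo <= b)

def dempster_combine(bpa1, bpa2):
    """Combine two BPAs over interval focal elements with Dempster's rule
    (eq. 6); the intersection of two intervals is again an interval or empty."""
    combined, k = {}, 0.0
    for (lo1, hi1), m1 in bpa1.items():
        for (lo2, hi2), m2 in bpa2.items():
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo <= hi:                       # non-empty intersection
                combined[(lo, hi)] = combined.get((lo, hi), 0.0) + m1 * m2
            else:                              # conflicting mass k (eq. 7)
                k += m1 * m2
    return {fe: m / (1.0 - k) for fe, m in combined.items()}, k

if __name__ == "__main__":
    # Two hypothetical evidence sources for the same variable
    source1 = {(0.0, 2.0): 0.4, (2.0, 4.0): 0.6}
    source2 = {(1.0, 3.0): 0.7, (3.0, 5.0): 0.3}
    bpa, conflict = dempster_combine(source1, source2)
    event = (0.0, 2.5)
    print("combined BPA:", bpa, "conflict k =", conflict)
    print("Bel =", bel(event, bpa), "Pl =", pl(event, bpa))
```

Representing each focal element as an interval keyed to its mass keeps the conditions (1)-(3) easy to verify by inspection.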

2.2 Reliability analysis using evidence theory

Consider the reliability analysis of the following limit-state function:

$$ y=g\left({x}_1,{x}_2\right),\kern2.5em {x}_1\in {X}_1,{x}_2\in {X}_2 $$
(8)

where x1 and x2 are two independent uncertain variables and g is the limit-state function. Assume the failure region F is defined as

$$ F=\left\{g:g\left({x}_1,{x}_2\right)<0\right\} $$
(9)

According to evidence theory, the failure probability pf = P(g < 0) is bracketed by Bel(F) and Pl(F):

$$ Bel(F)\le {p}_f=P\left(\mathrm{g}<0\right)\le Pl(F) $$
(10)

In this study, we assume that a combined BPA is provided for each variable and parameter, and that each focal element is a closed interval. For two independent uncertain variables x1 and x2, a joint BPA in evidence theory can be obtained by (11), in a way similar to the joint probability in probability theory.

$$ m(C)=m(A)\times m(B)\kern3em \mathrm{when}\ C=A\times B $$
(11)

where A∈Ω(X1) and B∈Ω(X2), and C is the joint focal element given by the Cartesian product A × B, which is defined by:

$$ A\times B=\left\{x=\left[{x}_1,{x}_2\right],{x}_1\in A,{x}_2\in B\right\} $$
(12)

Based on the joint BPA m(C), Bel(F) and Pl(F) can be calculated by determining whether C ⊆ F or C ∩ F ≠ Φ. For the failure region F, if the focal element C lies wholly in the domain g(x1, x2) < 0, then C ⊆ F; if the focal element C lies entirely or partially within the domain g(x1, x2) < 0, then C ∩ F ≠ Φ. These judgments can be made by calculating the extreme values of the limit-state function g over each focal element, namely

$$ \left[{g}_{\mathrm{min}},{g}_{\mathrm{max}}\right]=\left[\underset{X\in {C}_k}{\min}\kern0.5em g(X),\underset{X\in {C}_k}{\max}\kern0.5em g(X)\right],\kern2.5em k=1,\dots, n $$
(13)

where Ck represents a focal element and n is the total number of focal elements. gmin and gmax can be obtained by the vertex method or a gradient-based optimization method. Figure 1 schematically shows the three types of focal elements for the calculation of Bel(F) and Pl(F) (Mourelatos and Zhou 2006). For type 1, gmin is negative and gmax is positive, so the focal element contributes only to Pl(F). For type 2, both gmin and gmax are positive, so the focal element contributes to neither Bel(F) nor Pl(F). For type 3, both gmin and gmax are negative, so the focal element contributes to both Bel(F) and Pl(F).

Fig. 1 Contribution of a focal element to Bel and Pl
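As an illustration of (11)-(13) and of the classification in Fig. 1, the sketch below builds a joint BPA from two marginal BPAs and accumulates Bel(F) and Pl(F) with the vertex method. It is a minimal sketch that assumes the limit-state function is monotonic over each focal element (so the vertex method is exact) and uses hypothetical BPA values; it does not reproduce the actual examples of this paper.

```python
from itertools import product

def joint_bpa(bpa_x1, bpa_x2):
    """Joint BPA (eq. 11): joint focal elements are Cartesian products of
    the marginal interval focal elements."""
    return {(fe1, fe2): m1 * m2
            for fe1, m1 in bpa_x1.items()
            for fe2, m2 in bpa_x2.items()}

def interval_extrema(g, focal):
    """Min/max of g over a 2-D focal element by the vertex method (eq. 13);
    exact only when g is monotonic over the focal element."""
    (l1, u1), (l2, u2) = focal
    vals = [g(x1, x2) for x1, x2 in product((l1, u1), (l2, u2))]
    return min(vals), max(vals)

def bel_pl_failure(g, joint):
    """Bel(F) and Pl(F) for the failure region F = {g < 0} obtained by
    classifying every joint focal element into the types of Fig. 1."""
    bel = pl = 0.0
    for focal, m in joint.items():
        g_min, g_max = interval_extrema(g, focal)
        if g_min < 0 and g_max <= 0:   # type 3: entirely in the failure region
            bel += m
            pl += m
        elif g_min < 0:                # type 1: intersects the limit state g = 0
            pl += m
        # type 2 (g_min >= 0) contributes to neither Bel(F) nor Pl(F)
    return bel, pl

if __name__ == "__main__":
    # Hypothetical marginal BPAs and limit-state function
    bpa_x1 = {(1.0, 2.0): 0.4, (2.0, 3.0): 0.6}
    bpa_x2 = {(1.0, 2.0): 0.5, (2.0, 3.0): 0.5}
    g = lambda x1, x2: x1 ** 2 * x2 / 20.0 - 1.0
    bel, pl = bel_pl_failure(g, joint_bpa(bpa_x1, bpa_x2))
    print("Bel(F) =", bel, "Pl(F) =", pl)
```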

2.3 EBDO model

The mathematical model of the EBDO problem can be typically formulated as follows:

$$ {\displaystyle \begin{array}{l} find:\kern1em \boldsymbol{d},{\boldsymbol{e}}_{\boldsymbol{X}}\\ {}\min :\kern0.5em f\left(\boldsymbol{d},{\boldsymbol{e}}_{\boldsymbol{X}},{\boldsymbol{e}}_{\boldsymbol{P}}\right)\\ {}s.t.\kern1em Pl\left({g}_i\left(\boldsymbol{d},\boldsymbol{X},\boldsymbol{P}\right)<0\right)\le {p}_{f_i},\kern1.25em i=1,2,\cdots, N\\ {}\kern3em {\boldsymbol{d}}^L\le \boldsymbol{d}\le {\boldsymbol{d}}^U,{\boldsymbol{e}}_{\boldsymbol{X}}^L\le {\boldsymbol{e}}_{\boldsymbol{X}}\le {\boldsymbol{e}}_{\boldsymbol{X}}^U\end{array}} $$
(14)

where f is the objective function, gi is the ith constraint, and N is the number of constraints. d is the vector of deterministic variables, X is the vector of evidence variables, P is the vector of evidence parameters, eX and eP are the nominal value vectors of evidence variables and parameters, respectively. Pl denotes the plausibility of constraint violation. \( {p}_{f_i} \) is an acceptable value of the plausibility of the ith constraint violation. dL, dU, \( {\boldsymbol{e}}_{\boldsymbol{X}}^L \), and \( {\boldsymbol{e}}_{\boldsymbol{X}}^U \) are the lower and upper bounds of deterministic variables and evidence variables, respectively.

3 The improved two-stage framework of EBDO

To solve the EBDO problem with epistemic uncertainty in (14), Mourelatos and Zhou (2006) propose a two-stage method. In this method, an initial design point is moved to the vicinity of the EBDO optimum in the first stage before the Pl of constraint violation is calculated in the second stage. To alleviate the computational effort in the first stage, a hyperellipse that contains the FD is moved by solving an RBDO problem, with the center of the hyperellipse serving as an approximate design point. Their method requires the assumption that each dimension of the FD is equal to a certain multiple (such as six times) of the standard deviation of a hypothetical normal distribution. To avoid this assumption, the equal areas method (Xiao et al. 2015) is introduced here to transform evidence variables into random variables, and an RBDO problem is then defined and solved by SORA in the first stage. On the other hand, an improved algorithm for calculating the Pl of constraint violation more efficiently in the second stage is presented to decrease the number of limit-state function evaluations and thereby save computational cost. The improved two-stage framework of EBDO is elaborated in the following sections.

3.1 The first stage of the improved EBDO framework

In the first stage of the improved two-stage framework, the equal areas method is used to transform evidence variables into random variables. Using this method, the assumption that each dimension of the FD is equal to a certain multiple of the standard deviation of a hypothetical normal distribution is avoided. In the remainder of this section, we briefly introduce the key points involved in the first stage of the improved framework.

3.1.1 Reliability analysis based on hybrid mean value

At the beginning, evidence variables are transformed into random variables by the method based on equal areas (Xiao et al. 2015). As illustrated in Fig. 2, the probability density f(y = L1) is set as follows:

$$ f\left(y={L}_1\right)=\frac{m\left({A}_1\right)}{2\left({U}_1-{L}_1\right)} $$
(15)

where y denotes the non-normal random variable resulting from the transformation of the evidence variable x, and L1 and U1 are the lower and upper bounds of the interval A1. In Fig. 2, the area \( {S}_{A_{11}} \) of the domain A11 is equal to the area \( {S}_{A_{12}} \) of the domain A12. Then, the probability density f(y = U1) can be computed. Equation (16) can be used to calculate the probability densities f(y = Ui), i = 1, 2, …, 6. Thus, the random variable and its probability density function (PDF) can be obtained.

$$ f\left(y={U}_i\right)=\frac{2m\left({A}_i\right)}{U_i-{L}_i}-f\left(y={L}_i\right),\kern2em i=1,2,\dots, 6 $$
(16)
Fig. 2 Transformation of an evidence variable into a random variable
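The evaluation of (15) and (16) can be sketched as follows for a BPA structure with contiguous interval focal elements. The intervals and masses used here are hypothetical, and the piecewise linear PDF is returned as breakpoints together with the densities at those breakpoints.

```python
def equal_areas_pdf(intervals, masses):
    """Piecewise linear PDF from an evidence variable's BPA structure
    (eqs. 15-16). 'intervals' are contiguous closed intervals [Li, Ui] in
    ascending order and 'masses' are their BPA values. Returns the
    breakpoints and the densities at those breakpoints."""
    (l1, u1), m1 = intervals[0], masses[0]
    xs = [l1]
    fs = [m1 / (2.0 * (u1 - l1))]            # eq. (15)
    for (li, ui), mi in zip(intervals, masses):
        f_li = fs[-1]                        # continuity: f(L_i) = f(U_{i-1})
        f_ui = 2.0 * mi / (ui - li) - f_li   # eq. (16): trapezoid area equals m(A_i)
        xs.append(ui)
        fs.append(f_ui)
    return xs, fs

def pdf_value(x, xs, fs):
    """Evaluate the piecewise linear PDF by interpolating between breakpoints."""
    if x < xs[0] or x > xs[-1]:
        return 0.0
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return fs[i] + t * (fs[i + 1] - fs[i])
    return 0.0

if __name__ == "__main__":
    # Hypothetical BPA structure with three contiguous focal elements
    intervals = [(4.0, 4.5), (4.5, 5.5), (5.5, 6.0)]
    masses = [0.2, 0.6, 0.2]
    xs, fs = equal_areas_pdf(intervals, masses)
    print("breakpoints:", xs)
    print("densities  :", fs)
    print("f(5.0) =", pdf_value(5.0, xs, fs))
```

Because each trapezoidal segment has area m(Ai), the resulting piecewise linear function integrates to one and is a valid PDF.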

For ease of calculating the probability density and cumulative distribution of a random variable during reliability analysis, a metamodel is created to replace the piecewise linear PDF of the random variable. Each interval bound of the evidence variable and its corresponding probability density can be viewed as sample data, i.e., the sample points Bi (i = 1, 2, …, 7) in Fig. 2. Based on the sample data, the metamodel can be constructed. In this study, the RBF metamodeling technique is employed. RBF was originally developed for the interpolation of scattered multivariate data using a linear combination of radially symmetric functions (Hardy 1971). The RBF metamodel used in this paper is described as

$$ y={w}_0+\sum \limits_{i=1}^n{w}_i\varphi \left(\left\Vert x-{x}_i\right\Vert \right) $$
(17)

where w0 is a polynomial term and the wi are coefficients determined by least squares estimation. xi is a center point obtained from the sample data, and φ is a basis function that can take multiple forms, such as the linear function, multiquadric, inverse multiquadric, thin plate spline, and Gaussian. In this study, the linear basis function is employed; it takes the following form:

$$ \varphi =\left\Vert x-{x}_i\right\Vert $$
(18)
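As a small illustration of (17) and (18), the sketch below fits an RBF metamodel with the linear basis and a constant term to the (bound, density) sample points of Fig. 2. The augmented interpolation system used to determine the weights is one common RBF formulation and goes beyond the brief description above (the text only states that the coefficients are obtained by least squares), so it should be read as an assumption; the sample values are hypothetical.

```python
import numpy as np

def fit_rbf_linear(x_samples, y_samples):
    """RBF metamodel with linear basis phi(r) = r (eqs. 17-18) plus a
    constant term w0; the weights are obtained from the standard augmented
    interpolation system (an assumption made for this sketch)."""
    x = np.asarray(x_samples, dtype=float)
    y = np.asarray(y_samples, dtype=float)
    n = len(x)
    phi = np.abs(x[:, None] - x[None, :])    # phi(||x - x_i||) = |x - x_i|
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = phi
    a[:n, n] = 1.0                           # constant-term column
    a[n, :n] = 1.0                           # side condition on the RBF weights
    b = np.append(y, 0.0)
    sol = np.linalg.solve(a, b)
    w, w0 = sol[:n], sol[n]

    def predict(x_new):
        r = np.abs(np.atleast_1d(x_new)[:, None] - x[None, :])
        return w0 + r @ w
    return predict

if __name__ == "__main__":
    # Hypothetical sample points B_i: interval bounds and their densities
    bounds = [4.0, 4.5, 5.5, 6.0]
    densities = [0.2, 0.6, 0.6, 0.2]
    pdf_hat = fit_rbf_linear(bounds, densities)
    print(pdf_hat([4.25, 5.0, 5.75]))
```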

After the aforementioned transformation, the reliability analysis can be conducted. Many methods have been developed for reliability analysis with random variables, such as the hybrid mean value (HMV; Youn et al. 2003), modified chaos control (Meng et al. 2015b), step length adjustment (Yi and Zhu 2016), and enhanced chaos control (ECC; Hao et al. 2017) methods. In this research, the traditional HMV method is used. With this method, the minimum performance target point (MPTP) of the reliability analysis with random variables is obtained by solving the following optimization problem:

$$ {\displaystyle \begin{array}{l}\min :\kern1em G\left(\boldsymbol{U}\right)\\ {}s.t.\kern2.5em \left\Vert \boldsymbol{U}\right\Vert ={\beta}^t\end{array}} $$
(19)

where G(U) denotes the limit-state function, U is the vector of standard normal random variables, and βt is the target reliability index. In this paper, the non-normal random variables are converted into standard normal random variables by the Rosenblatt transformation (Rosenblatt 1952).
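The iteration at the core of the MPTP search in (19) can be sketched as follows. Only a simplified advanced-mean-value-type update is shown; the HMV criterion that switches to the conjugate mean value search for concave limit-state functions is omitted, and the finite-difference gradient and example function are assumptions of this sketch.

```python
import numpy as np

def grad_fd(g, u, h=1e-6):
    """Central finite-difference gradient of g at u (an assumption; analytic
    gradients could be used instead)."""
    u = np.asarray(u, dtype=float)
    grad = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        grad[i] = (g(u + e) - g(u - e)) / (2.0 * h)
    return grad

def mptp_search_amv(g, n_dim, beta_t, max_iter=50, tol=1e-6):
    """Simplified AMV-type iteration underlying HMV: search for the minimum
    performance target point on the sphere ||U|| = beta_t (eq. 19)."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        grad = grad_fd(g, u)
        direction = -grad / np.linalg.norm(grad)   # steepest-descent direction of G
        u_new = beta_t * direction                 # point on the target-beta sphere
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

if __name__ == "__main__":
    # Hypothetical limit-state function in standard normal space
    g = lambda u: 3.0 + 0.5 * u[0] - u[1] - 0.1 * u[0] ** 2
    u_mptp = mptp_search_amv(g, n_dim=2, beta_t=2.0)
    print("MPTP:", u_mptp, "G at MPTP:", g(u_mptp))
```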

3.1.2 Searching for the RBDO optimum based on SORA

After the reliability analysis with the transformed random variables, the SORA method (Du and Chen 2004) is employed in this paper to solve the RBDO problem. In SORA, the RBDO problem is solved as a sequence of deterministic optimizations and reliability analyses. As seen in (20), the probabilistic optimization is transformed into a deterministic optimization by shifting the limit-state function toward the feasible region based on the offset vector and the minimum performance target point (MPTP) obtained by the reliability analysis in Section 3.1.1.

$$ {\displaystyle \begin{array}{l} find:\boldsymbol{d},{\boldsymbol{e}}_{\boldsymbol{X}}\\ {}\min :\kern0.5em f\left(\boldsymbol{d},{\boldsymbol{e}}_{\boldsymbol{X}},{\boldsymbol{e}}_{\boldsymbol{P}}\right)\\ {}s.t.\kern1.5em {g}_i\left(\boldsymbol{d},{\boldsymbol{e}}_{\boldsymbol{X}}-{\boldsymbol{s}}_i^{\left(k+1\right)},{\boldsymbol{P}}_{i\boldsymbol{MPTP}}^{(k)}\right)\ge 0,\kern1.25em i=1,2,\cdots, N\\ {}\kern-8.25em {\boldsymbol{s}}_i^{\left(k+1\right)}={\boldsymbol{e}}_{\boldsymbol{X}}^{(k)}-{\boldsymbol{X}}_{i\boldsymbol{MPTP}}^{(k)}\\ {}\kern2.25em {\boldsymbol{d}}^L\le \boldsymbol{d}\le {\boldsymbol{d}}^U,\kern1em {\boldsymbol{e}}_{\boldsymbol{X}}^{\boldsymbol{L}}\le {\boldsymbol{e}}_{\boldsymbol{X}}\le {\boldsymbol{e}}_{\boldsymbol{X}}^{\boldsymbol{U}}\end{array}} $$
(20)

where d is the vector of deterministic variables, X and P are the vectors of random variables and parameters after the transformation. eX and eP are the nominal value vectors of random variables and parameters, respectively. \( {\boldsymbol{s}}_i^{\left(k+1\right)} \) is the offset vector of the ith constraint at the (k + 1)th iteration of SORA. \( {\boldsymbol{e}}_{\boldsymbol{X}}^{(k)} \) is the optimum of the deterministic optimization at the kth iteration. \( {\boldsymbol{X}}_{i\boldsymbol{MPTP}}^{(k)} \) and \( {\boldsymbol{P}}_{i\boldsymbol{MPTP}}^{(k)} \) are the MPTPs of the reliability analysis with random variables and parameters for the ith constraint at the kth iteration of SORA, respectively.
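A conceptual sketch of the SORA loop in (20) is given below: a deterministic optimization with shifted constraints alternates with a reliability analysis that returns the MPTP at the current design, and the offset vectors are updated as s_i = e_X - X_iMPTP. The SLSQP inner optimizer, the greatly simplified single-step reliability analysis, and the numerical settings (target reliability index, standard deviation, bounds) are assumptions introduced only to keep the sketch self-contained; in the actual framework the HMV search of Section 3.1.1 supplies the MPTPs.

```python
import numpy as np
from scipy.optimize import minimize

BETA_T = 2.0    # target reliability index (assumption for this sketch)
SIGMA = 0.3     # std of the equivalent normal variables (assumption)

def grad_fd(g, x, h=1e-6):
    """Finite-difference gradient used by the simplified reliability step."""
    x = np.asarray(x, dtype=float)
    return np.array([(g(x + h * e) - g(x - h * e)) / (2.0 * h) for e in np.eye(x.size)])

def inverse_reliability(g, e_x):
    """Very simplified PMA step for independent normal variables: move
    BETA_T * SIGMA along the steepest-descent direction of g. The actual
    framework would use the HMV search of Section 3.1.1 instead."""
    grad = grad_fd(g, e_x)
    d = -grad / np.linalg.norm(grad)
    return np.asarray(e_x, dtype=float) + BETA_T * SIGMA * d

def deterministic_opt(obj, shifted_constraints, e_x):
    """Deterministic optimization with shifted constraints (inner problem of
    eq. 20); the bounds follow example 1 and are an assumption of this sketch."""
    cons = [{"type": "ineq", "fun": g} for g in shifted_constraints]
    res = minimize(obj, e_x, method="SLSQP", bounds=[(0, 10), (0, 10)], constraints=cons)
    return res.x

def sora(obj, constraints, e_x0, max_cycles=10, tol=1e-4):
    """Conceptual SORA loop (eq. 20): alternate deterministic optimization and
    reliability analysis, shifting each constraint by s_i = e_X - X_iMPTP."""
    e_x = np.asarray(e_x0, dtype=float)
    shifts = [np.zeros_like(e_x) for _ in constraints]
    for _ in range(max_cycles):
        shifted = [(lambda x, g=g, s=s: g(x - s)) for g, s in zip(constraints, shifts)]
        e_x_new = deterministic_opt(obj, shifted, e_x)
        mptps = [inverse_reliability(g, e_x_new) for g in constraints]
        shifts = [e_x_new - x_mptp for x_mptp in mptps]
        if np.linalg.norm(e_x_new - e_x) < tol:
            return e_x_new
        e_x = e_x_new
    return e_x

if __name__ == "__main__":
    obj = lambda x: x[0] + x[1]                      # hypothetical objective
    g1 = lambda x: x[0] ** 2 * x[1] / 20.0 - 1.0     # first constraint of eq. (22)
    print("first-stage design:", sora(obj, [g1], e_x0=[5.0, 5.0]))
```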

3.2 An improved algorithm for calculating the Pl of failure in the second stage

In the second stage of the improved two-stage framework, the Pl of failure needs to be calculated every time the optimizer evaluates a constraint. Therefore, the efficiency of calculating the Pl of failure has a crucial effect on the efficiency of EBDO. Based on the algorithm in Mourelatos and Zhou (2006) for calculating the Pl of failure, an improved algorithm that records the minimum and maximum values of limit-state functions during the optimization iterations is presented in this paper. Using this improved algorithm, the number of limit-state (constraint) function evaluations is reduced and computational cost is saved. The pseudocode of the improved algorithm is shown in Fig. 3.

Fig. 3 The pseudocode of the improved algorithm

As shown in Fig. 3, a set PF is initially equal to the entire frame of discernment (FD). The minimum and maximum values of the limit-state function g(X) over the set PF are evaluated and recorded as \( {g}_{\mathrm{min}}^r \) and \( {g}_{\mathrm{max}}^r \), and their corresponding locations in the set PF are recorded as \( {\boldsymbol{X}}_{\mathrm{min}}^r \) and \( {\boldsymbol{X}}_{\mathrm{max}}^r \). At the first iteration, PF is partitioned into sets Dt (t = 1 and 2). If \( {\boldsymbol{X}}_{\mathrm{min}}^r \) lies in the set Dt, the recorded value \( {g}_{\mathrm{min}}^r \) is assigned to the minimum value of g(X) in this set, i.e., \( {g}_{\mathrm{min}}\left({D}^t\right)={g}_{\mathrm{min}}^r \). Similarly, if \( {\boldsymbol{X}}_{\mathrm{max}}^r \) lies in Dt, \( {g}_{\mathrm{max}}\left({D}^t\right)={g}_{\mathrm{max}}^r \). If \( {\boldsymbol{X}}_{\mathrm{min}}^r \) or \( {\boldsymbol{X}}_{\mathrm{max}}^r \) lies outside the set Dt, gmin(Dt) or gmax(Dt) is calculated. Then, \( {g}_{\mathrm{min}}^r \), \( {g}_{\mathrm{max}}^r \) and their corresponding locations \( {\boldsymbol{X}}_{\mathrm{min}}^r \) and \( {\boldsymbol{X}}_{\mathrm{max}}^r \) in the set Dt are updated and recorded. If gmin(Dt) < 0 and gmax(Dt) > 0, Dt is placed in the set PF. If gmin(Dt) < 0 and gmax(Dt) ≤ 0, Dt is placed in the set TF. Otherwise, Dt is eliminated and not considered further. Then, the second iteration begins. The above iterative process does not stop until the sets in PF can no longer be partitioned. Finally, the Pl of failure is computed by (21).

$$ Pl\left(g<0\right)=\sum \limits_{E_1\in PF}m\left({E}_1\right)+\sum \limits_{E_2\in TF}m\left({E}_2\right) $$
(21)

where the sets PF and TF are used to store the focal elements of types 1 and 3 in Fig. 1, respectively, and E1 and E2 denote the focal elements in the sets PF and TF, respectively.

In the above algorithm, if gmin(Dt) or gmax(Dt) cannot be directly assigned the recorded values, they can be calculated by the vertex method or a gradient-based optimization method. Compared with the gradient-based optimization method, the vertex method is more efficient because it does not require solving an optimization problem to search for the minimum and maximum values of a limit-state function. Over the whole design space, the limit-state function may be nonlinear and non-monotonic; however, in practical engineering problems the range of the FD is usually small, the nonlinearity of the limit-state function is low over this small range, and the limit-state function can be considered monotonic, so the vertex method can be employed. Additionally, for an FD composed of n evidence variables and parameters, an n-dimensional hyperrectangle can be used to represent the FD, and the following partition scheme for the n-dimensional hyperrectangle is employed in the improved algorithm. At the jth iteration (j = 1, …, n), the hyperrectangle is partitioned into two parts by an (n-1)-dimensional hyperplane perpendicular to the jth dimension. Assuming that the evidence variable or parameter in the jth dimension has p focal elements, if p is an even number, the (n-1)-dimensional hyperplane is located between the (p/2)th and (p/2 + 1)th focal elements; if p is an odd number, the hyperplane is located between the (p/2 - 0.5)th and (p/2 + 0.5)th focal elements. If the hyperrectangle cannot be partitioned by the hyperplane perpendicular to the jth dimension, the (j + 1)th dimension is chosen if j < n, and the 1st dimension is selected if j = n. After the nth dimension has been chosen, the dimension cycle begins again from the first dimension. The partition process terminates when the types of all focal elements are identified. A minimal computational sketch of this algorithm is given below.
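The sketch below outlines the improved algorithm under simplifying assumptions: contiguous interval focal elements in each dimension, vertex-method extrema, and a bisection-style partition that cycles through the dimensions. It records the extrema and their locations so that a child set containing a recorded location reuses the value instead of re-evaluating the limit-state function; the data structures and the example FD are assumptions of this sketch, and the published pseudocode remains the one in Fig. 3.

```python
import numpy as np
from itertools import product

def vertex_extrema(g, box):
    """Vertex method: extrema of g over the corners of a hyperrectangle,
    valid when g is monotonic over the box (see the discussion above)."""
    corners = [np.array(c) for c in product(*box)]
    vals = [g(c) for c in corners]
    i_min, i_max = int(np.argmin(vals)), int(np.argmax(vals))
    return vals[i_min], corners[i_min], vals[i_max], corners[i_max]

def pl_of_failure(g, intervals, masses):
    """Sketch of the improved algorithm of Section 3.2: partition the FD,
    reuse recorded extrema where possible, and sum the BPAs of the sets of
    types 1 and 3 (eq. 21). 'intervals[d]' lists the contiguous focal
    elements of dimension d and 'masses[d]' their BPA values."""
    n_dim = len(intervals)

    def box_of(slices):
        return [(intervals[d][a][0], intervals[d][b - 1][1])
                for d, (a, b) in enumerate(slices)]

    def mass_of(slices):
        m = 1.0
        for d, (a, b) in enumerate(slices):
            m *= sum(masses[d][a:b])        # joint BPA of a block of focal elements
        return m

    def inside(x, box):
        return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box))

    root = tuple((0, len(iv)) for iv in intervals)
    g_min, x_min, g_max, x_max = vertex_extrema(g, box_of(root))
    stack, pl, split_dim = [(root, g_min, x_min, g_max, x_max)], 0.0, 0

    while stack:
        slices, g_min, x_min, g_max, x_max = stack.pop()
        if g_min >= 0:                      # fully safe set (type 2): discard
            continue
        if g_max <= 0:                      # fully failed set (type 3): goes to TF
            pl += mass_of(slices)
            continue
        splittable = [d for d in range(n_dim) if slices[d][1] - slices[d][0] > 1]
        if not splittable:                  # single intersecting focal element (type 1): PF
            pl += mass_of(slices)
            continue
        d = min(splittable, key=lambda k: (k - split_dim) % n_dim)
        split_dim = (d + 1) % n_dim
        a, b = slices[d]
        mid = (a + b) // 2
        for lo, hi in ((a, mid), (mid, b)):
            child = slices[:d] + ((lo, hi),) + slices[d + 1:]
            cbox = box_of(child)
            # Reuse a recorded extremum if its location falls inside the child set
            c_min, c_xmin = (g_min, x_min) if inside(x_min, cbox) else (None, None)
            c_max, c_xmax = (g_max, x_max) if inside(x_max, cbox) else (None, None)
            if c_min is None or c_max is None:
                v_min, v_xmin, v_max, v_xmax = vertex_extrema(g, cbox)
                if c_min is None:
                    c_min, c_xmin = v_min, v_xmin
                if c_max is None:
                    c_max, c_xmax = v_max, v_xmax
            stack.append((child, c_min, c_xmin, c_max, c_xmax))
    return pl

if __name__ == "__main__":
    # Hypothetical FD: 4 x 5 focal elements with uniform BPAs
    intervals = [[(i, i + 1.0) for i in range(4)], [(j, j + 1.0) for j in range(5)]]
    masses = [[0.25] * 4, [0.2] * 5]
    g = lambda x: x[0] + x[1] - 4.0          # hypothetical limit-state function
    print("Pl(g < 0) =", pl_of_failure(g, intervals, masses))
```

For this toy FD the result can be checked by hand: exactly the cells whose lower-left corner lies below the line x1 + x2 = 4 contribute, giving Pl = 0.5, which is what (21) prescribes for the union of the PF and TF sets.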

A limit-state function with two disjoint failure domains is used to explain the developed algorithm. The FD and the limit-state curve g = 0 are shown in Fig. 4. Each rectangular block in the FD denotes a focal element, and the FD contains a total of 20 focal elements denoted by Fi, i = 1, 2, …, 20. The red dots and blue triangles in Fig. 4 denote the minimum and maximum values (i.e., \( {g}_{\mathrm{min}}^r \) and \( {g}_{\mathrm{max}}^r \)) in each set, respectively. At the first iteration, \( {g}_{\mathrm{min}}^r \) and \( {g}_{\mathrm{max}}^r \) in the FD are calculated and recorded, and their locations \( {\boldsymbol{X}}_{\mathrm{min}}^r \) and \( {\boldsymbol{X}}_{\mathrm{max}}^r \) are recorded simultaneously. Then, the FD is partitioned into two sets by the red vertical line: \( {D}_1^1=\cup {F}_i,\kern0.5em i=1,2,6,7,11,12,16,17 \), and \( {D}_1^2=\cup {F}_i,\kern0.5em i=3,4,5,8,9,10,13,14,15,18,19,20 \). It can be observed that \( {\boldsymbol{X}}_{\mathrm{min}}^r \) and \( {\boldsymbol{X}}_{\mathrm{max}}^r \) lie in \( {D}_1^1 \) and \( {D}_1^2 \), respectively. Thus, for \( {D}_1^1 \), it can be determined that \( {g}_{\mathrm{min}}\left({D}_1^1\right)={g}_{\mathrm{min}}^r \), while \( {g}_{\mathrm{max}}\left({D}_1^1\right) \) needs to be calculated, i.e., \( {g}_{\mathrm{max}}\left({D}_1^1\right)=\max g\left(\boldsymbol{X}\right),\boldsymbol{X}\in {D}_1^1 \). Similarly, for \( {D}_1^2 \), it can be determined that \( {g}_{\mathrm{max}}\left({D}_1^2\right)={g}_{\mathrm{max}}^r \), while \( {g}_{\mathrm{min}}\left({D}_1^2\right) \) needs to be calculated, i.e., \( {g}_{\mathrm{min}}\left({D}_1^2\right)=\min g\left(\boldsymbol{X}\right),\boldsymbol{X}\in {D}_1^2 \). Because \( {g}_{\mathrm{min}}\left({D}_1^1\right)<0 \) and \( {g}_{\mathrm{max}}\left({D}_1^1\right)>0 \), \( {D}_1^1 \) is placed in the PF. Similarly, since \( {g}_{\mathrm{min}}\left({D}_1^2\right)<0 \) and \( {g}_{\mathrm{max}}\left({D}_1^2\right)>0 \), \( {D}_1^2 \) is also placed in the PF. Then, the first iteration is finished.

Fig. 4 Illustration of the improved algorithm for calculating the Pl of failure

At the second iteration, \( {D}_1^1 \) is partitioned into \( {D}_1^{11}=\cup {F}_i,i=11,12,16,17 \) and \( {D}_1^{12}=\cup {F}_i,i=1,2,6,7 \). \( {D}_1^2 \) is partitioned into \( {D}_1^{21}=\cup {F}_i,i=13,14,15,18,19,20 \) and \( {D}_1^{22}=\cup {F}_i,i=3,4,5,8,9,10 \). After the judgment, \( {D}_1^{11} \), \( {D}_1^{21} \) and \( {D}_1^{22} \) are placed in PF and \( {D}_1^{12} \) is discarded. At the third iteration, \( {D}_1^{11} \) is partitioned into \( {D}_1^{111}=\cup {F}_i,i=11,16 \) and \( {D}_1^{112}=\cup {F}_i,i=12,17 \). \( {D}_1^{21} \) is partitioned into \( {D}_1^{211}=\cup {F}_i,i=13,18 \), \( {D}_1^{212}=\cup {F}_i,i=14,15,19,20 \). \( {D}_1^{22} \) is partitioned into \( {D}_1^{221}=\cup {F}_i,i=3,8 \), \( {D}_1^{222}=\cup {F}_i,i=4,5,9,10 \). After the judgment, \( {D}_1^{111} \), \( {D}_1^{112} \), \( {D}_1^{212} \) and \( {D}_1^{222} \) are placed in PF, and \( {D}_1^{211} \) and \( {D}_1^{221} \) are discarded. After five iterations, the partition of the whole FD is finished. Finally, TF = {F5, F10, F16} and PF = {F4, F9, F11, F12, F14, F15, F17}. The sum of BPA of all focal elements in PF and TF can be calculated and considered as the Pl of failure.

3.3 Flowchart of the improved two-stage framework

Figure 5 shows the flowchart of the improved two-stage framework of EBDO. The explanations of the variables and parameters in this figure have been given after (14) and (20). In the first stage, evidence variables are transformed into random variables for the reliability analysis, the reliability analysis with random variables is conducted by the HMV method, and the RBDO problem with random variables is solved by SORA. The obtained RBDO optimum is regarded as lying in the vicinity of the EBDO optimum. In the second stage, the EBDO problem is solved: the Pl of constraint violation is calculated and the RBDO optimum is taken as the initial design point. Generally, a derivative-free optimizer is required to solve the EBDO problem because of the discontinuous nature of the combined BPA structure (Mourelatos and Zhou 2006). In this research, the derivative-free global optimizer DIRECT (DIviding RECTangles) is used. DIRECT is a modification of the standard Lipschitzian approach that eliminates the need to specify a Lipschitz constant (Jones et al. 1993; Mourelatos and Zhou 2006). Evolutionary algorithms can also be utilized as the global optimizer, but a large number of initial sample points is required.
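To indicate how the second-stage search might be driven in practice, the sketch below wraps a penalized version of (14) around the DIRECT implementation available in recent SciPy versions (scipy.optimize.direct). The penalty formulation, the brute-force Pl evaluation used for self-containment, and the assumption that the BPA intervals are given as offsets around the nominal design are simplifications introduced for illustration; they are not the constrained DIRECT strategy of Mourelatos and Zhou (2006), and in the actual framework the improved algorithm of Section 3.2 would replace the brute-force Pl calculation.

```python
import numpy as np
from itertools import product
from scipy.optimize import direct   # DIRECT optimizer in recent SciPy versions

def pl_bruteforce(g, intervals, masses):
    """Brute-force Pl(g < 0): classify every joint focal element with the
    vertex method. The improved algorithm of Section 3.2 would replace this
    exhaustive loop in the actual framework."""
    pl = 0.0
    for cells in product(*[range(len(iv)) for iv in intervals]):
        box = [intervals[d][k] for d, k in enumerate(cells)]
        vals = [g(np.array(c)) for c in product(*box)]
        if min(vals) < 0:                                  # type 1 or type 3 element
            pl += float(np.prod([masses[d][k] for d, k in enumerate(cells)]))
    return pl

def ebdo_objective(e_x, f, constraints, offsets, masses, p_f, penalty=1e3):
    """Penalized second-stage objective: nominal objective plus a penalty on
    any violated Pl constraint (the penalty scheme is an assumption)."""
    # The BPA intervals are assumed to be given as offsets around the nominal design
    intervals = [[(e_x[d] + lo, e_x[d] + hi) for lo, hi in iv]
                 for d, iv in enumerate(offsets)]
    value = f(e_x)
    for g in constraints:
        value += penalty * max(0.0, pl_bruteforce(g, intervals, masses) - p_f)
    return value

if __name__ == "__main__":
    # Hypothetical second-stage set-up based on the first constraint of eq. (22)
    f = lambda x: -((x[0] + x[1] - 10) ** 2) / 30 - ((x[0] - x[1] + 10) ** 2) / 120
    g1 = lambda x: x[0] ** 2 * x[1] / 20.0 - 1.0
    offsets = [[(-0.6, -0.2), (-0.2, 0.2), (0.2, 0.6)]] * 2   # hypothetical BPA offsets
    masses = [[0.3, 0.4, 0.3]] * 2
    res = direct(ebdo_objective, bounds=[(0.6, 9.4), (0.6, 9.4)],
                 args=(f, [g1], offsets, masses, 0.1), maxfun=3000)
    print("second-stage design:", res.x, "objective value:", res.fun)
```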

Fig. 5 Flowchart of the improved two-stage framework of EBDO

The improved two-stage framework of EBDO is illustrated by a two-dimensional example in Fig. 6. The grid represents the FD, in which each block denotes one focal element. The red dot in the grid is an evidence design point.

Fig. 6 Movements of FD under the improved two-stage framework of EBDO

4 Test examples

In this section, three numerical examples and an engineering example are presented to test the improved two-stage framework of EBDO.

4.1 A mathematical example

This example is modified from Youn et al. (2005). The mathematical model of the EBDO problem is shown in (22), where \( {e}_{x_1} \) and \( {e}_{x_2} \) denote the nominal values of the evidence variables x1 and x2, respectively. To test the advantage of the improved method, which does not assume that the unknown evidence variables and parameters obey normal distributions, the BPA structures for x1 and x2 are deliberately set to differ from those of a normal distribution (see Table 1).

$$ {\displaystyle \begin{array}{l} find:\kern1em {e}_{x_1},{e}_{x_2}\\ {}\min :\kern1em f\left({e}_{x_1},{e}_{x_2}\right)=-\frac{{\left({e}_{x_1}+{e}_{x_2}-10\right)}^2}{30}-\frac{{\left({e}_{x_1}-{e}_{x_2}+10\right)}^2}{120}\\ {}\kern0.75em s.t.\kern1em Pl\left\{{g}_i\left({x}_1,\kern0.5em {x}_2\right)<0\right\}\le {p}_f,\kern1.5em i=1,2,3\\ {}\kern3em {g}_1=\frac{x_1^2{x}_2}{20}-1\\ {}\kern3.5em {g}_2=1-{\left(Y-6\right)}^2+{\left(Y-6\right)}^3-0.6{\left(Y-6\right)}^4+Z\kern6em \\ {}\kern3.5em {g}_3=\frac{80}{x_1^2+8{x}_2+5}-1\\ {}\kern4.5em Y=0.9063{x}_1+0.4226{x}_2\\ {}\kern3.25em Z=0.4226{x}_1-0.9063{x}_2\\ {}\kern4.5em 0\le \kern0.5em {e}_{x_1}\le 10,\kern0.5em 0\le {e}_{x_2}\le 10\end{array}} $$
(22)
Table 1 BPA structures for x1 and x2

According to the improved two-stage framework of EBDO, in the first stage, the evidence variables are first converted into random variables using the equal areas method. Taking the design point (\( {e}_{x_1} \) = 5, \( {e}_{x_2} \) = 5) as an example, Fig. 7 illustrates the BPA structures and their corresponding PDFs after transformation. For convenience in calculating the probability density and cumulative distribution of a random variable, the RBF metamodel is constructed and used to replace the piecewise linear PDF of the random variable. Then, the reliability analysis is conducted by HMV and the RBDO problem with random variables is solved by SORA. In the second stage, the improved algorithm in Section 3.2 is used for calculating the Pl of constraint violation. In this work, the vertex method is employed to calculate the values of gmin(Dt) and gmax(Dt) when they cannot be assigned the recorded values directly. Finally, the EBDO optimum of (22) is obtained by using DIRECT.

Fig. 7 BPA structures and the corresponding PDFs after transformation

For comparison, the EBDO problem in this example is also solved directly by DIRECT without searching for the RBDO optimum, which is termed the one-stage EBDO in this study. In addition, the two-stage EBDO method with the hypothetical normal distribution (ND; Mourelatos and Zhou 2006), referred to as the two-stage EBDO with ND in this study, is adopted to solve the problem. For both two-stage methods, the maximum number of iterations is set to 5 for the optimization in the second stage. To make a fair comparison, the maximum number of iterations for the one-stage method is also set to 5. Besides, the improved algorithm for calculating the Pl of failure is employed in the one-stage method. Comparative results obtained by the three methods under different values of pf are provided in Table 2. To assess the accuracy of these methods, a genetic algorithm (GA) is applied to search for relatively accurate EBDO optimums. In the GA, the population size is set to 100, and the algorithm stops when the average relative change of the best fitness value is less than \( {10}^{-6} \). For the cases with pf = 0.01, 0.1, and 0.2, the final EBDO optimums obtained by GA are −1.6426, −1.8199, and −1.95267 after 116, 87, and 85 generations, respectively.

Table 2 Comparative results under different cases in example 1

As shown in Table 2, compared with the two-stage method with ND, the RBDO optimum obtained by the improved two-stage method is closer to its EBDO optimum in each case. Therefore, the improvement in the first stage of the two-stage EBDO method is beneficial. As an example, Fig. 8a, b show the RBDO and EBDO optimums obtained by the improved two-stage method under pf = 0.01, respectively. The failure focal elements in the FDs are represented by green rectangular blocks in Fig. 8a, b. The relative location of the RBDO and EBDO optimums is illustrated in Fig. 8c. It can be observed that they are very close, which is helpful for obtaining the EBDO optimum in the second stage.

Fig. 8 Movements of FD under the improved two-stage framework of EBDO with pf = 0.01 in example 1

To check the efficiency of the improved two-stage method, the total number of objective and constraint function calls in each stage during the whole EBDO solving process is summarized in Table 2. As shown in Table 2, compared with the two-stage methods, the total number of function calls of the one-stage method is the lowest when the maximum number of iterations is 5; however, the EBDO optimums obtained by the one-stage method in these cases are the most conservative. When the maximum number of iterations of the one-stage method is increased to 50, a less conservative optimum can be obtained, but many more function calls are required. Compared with the two-stage method with ND, the improved method needs fewer function calls except in the case of pf = 0.01. On the other hand, the EBDO optimums obtained by these two methods are close. Therefore, the improved two-stage framework of EBDO is computationally more efficient to some extent.

On the other hand, the efficiency of the improved algorithm for calculating the Pl of failure is checked. As mentioned in Section 3.2, gmin(Dt) and gmax(Dt) can be calculated by the vertex method or the gradient-based method. In this example, for the vertex method, the Pl of failure at the EBDO optimum is calculated by the improved algorithm and by the traditional algorithm involving all focal elements, respectively. Table 3 lists the number of calls of each constraint function during the calculation. Compared with the traditional algorithm, the improved algorithm needs far fewer calls of each constraint function under each case of pf. For the gradient-based method, we compare the improved algorithm with the original algorithm without records (Mourelatos and Zhou 2006). From Table 3, it can be found that the improved algorithm needs fewer optimizer calls than the algorithm without records for constraints g1 and g2. For constraint g3, the two algorithms need the same number of optimizer calls, because no type 1 focal elements (Fig. 1) exist in the FD. Thus, it is demonstrated that the improved algorithm calculates the Pl of failure with high efficiency.

Table 3 Comparison of efficiency for calculating Pl of failure in example 1

4.2 A cantilever beam example

A cantilever beam under vertical and lateral bending (Mourelatos and Zhou 2006) is used to test the improved two-stage framework of EBDO. Figure 9 shows the cantilever beam. The vertical and lateral loads Y and Z are applied at the tip of the beam. The length L of the beam is 100 in. The width w and thickness t of the cross section of the beam are deterministic design variables. Assuming the material density and the beam length are constant, the objective is to minimize the weight of the beam, which can be transformed into the minimization of f = w × t. Two nonlinear failure modes are considered: yielding at the fixed end of the beam and excessive tip displacement. The allowable value of the tip displacement is D0 = 2.5 in. The EBDO problem is formulated as:

$$ {\displaystyle \begin{array}{l}\mathrm{find}:w,t\\ {}\min :f=w\times t\\ {}\kern0.5em s.t.\kern1em Pl\left\{{g}_i\left(\boldsymbol{d},\boldsymbol{P}\right)<0\right\}\le {p}_f,i=1,2\\ {}\kern2.25em {g}_1\left(y,Z,Y,w,t\right)=y-\left(\frac{600}{wt^2}\times Y+\frac{600}{w^2t}\times Z\right)\\ {}\kern2.25em {g}_2\left(E,Z,Y,w,t\right)={D}_0-\frac{4{L}^3}{Ewt}\sqrt{{\left(\frac{Y}{t^2}\right)}^2+{\left(\frac{\mathrm{Z}}{w^2}\right)}^2}\\ {}\kern2.25em 0\le w,t\le 5\end{array}} $$
(23)

where g1 and g2 are the limit-state functions and the deterministic design variables are d = [w, t]. In this example, in order to test the advantage of the improved method, which does not assume that the unknown evidence variables and parameters obey normal distributions, the BPA structures of the evidence parameters P = [Z, y, Y, E] are modified so that they differ from those of a normal distribution. Here E denotes Young's modulus and y is the yield strength. The BPA structures are listed in Table 4.

Fig. 9 Cantilever beam example

Table 4 BPA structures for Z, y, Y and E

Similar to example 1, the EBDO problem of the cantilever beam is solved by the three EBDO methods. For both two-stage methods, the maximum number of iterations is set to 10 for the optimization in the second stage; the maximum number of iterations for the one-stage method is also set to 10, and the improved algorithm for calculating the Pl of failure is employed in the one-stage method. Comparative results under different values of pf are presented in Table 5. In order to validate the accuracy of these methods, GA is employed to search for relatively accurate EBDO optimums. For the cases with pf = 0.01, 0.1, and 0.2, the final EBDO optimums obtained by GA are 9.6852, 8.8444, and 8.5267 after 169, 130, and 121 generations, respectively.

Table 5 Comparative results under different cases in example 2

As shown in Table 5, compared with the two-stage method with ND, the RBDO optimum obtained by the improved two-stage method is closer to its EBDO optimum in almost every case. This confirms that the improvement in the first stage of the two-stage EBDO method is beneficial. As an example, Fig. 10a, b illustrate the limit-state surfaces of g1 at the RBDO and EBDO optimums obtained by the improved two-stage method under pf = 0.01. Because no evidence variables exist in this example, the RBDO and EBDO optimums of the deterministic design variables w and t can be adopted to define different limit-state surfaces with respect to the evidence parameters. In Fig. 10a, the green plane denotes the limit-state surface g1(y, Z, Y, 2.46, 4.013) = 0 at the RBDO optimum. In Fig. 10b, the yellow plane denotes the limit-state surface g1(y, Z, Y, 2.526, 3.835) = 0 at the EBDO optimum. The red cubes in Fig. 10a, b represent the failure focal elements. The relative location of the two limit-state surfaces at the RBDO and EBDO optimums is displayed in Fig. 10c. It can be observed that the two planes are very close.

Fig. 10 Limit-state surfaces of g1 = 0 at the RBDO and EBDO optimums under pf = 0.01 in example 2

From Table 5, with a maximum of 10 iterations, the total number of function calls of the one-stage method is the lowest under pf = 0.1 and 0.2. When its maximum number of iterations is increased to 30, less conservative optimums are obtained, but many more function calls are required. Compared with the two-stage method with ND, the improved method needs fewer function calls in all cases. On the other hand, the EBDO optimums obtained by these two methods are close. Therefore, it is verified that the improved two-stage framework of EBDO is more efficient.

After calculating the Pl of failure at the EBDO optimum by the different algorithms, Table 6 lists the number of calls of each constraint function during the calculation for the vertex method and the number of optimizer calls for each constraint function for the gradient-based method. As seen in Table 6, compared with the traditional algorithm, the improved algorithm needs far fewer calls of each constraint function in all cases for the vertex method. For the gradient-based method, the number of optimizer calls for each constraint in the improved algorithm is no more than that in the algorithm without records in all cases. Accordingly, it is verified that the improved algorithm calculates the Pl of failure with high efficiency.

Table 6 Comparison of efficiency for calculating Pl of failure in example 2

4.3 A pressure vessel example

A thin-walled pressure vessel (Mourelatos and Zhou 2006) is employed to test the improved two-stage framework of EBDO. As shown in Fig. 11, the pressure vessel has hemispherical ends, and the design variables are the radius R, the length L, and the thickness t. The objective is to maximize the volume of the vessel. The vessel is to withstand a specified internal pressure P, and yielding of the material in both the circumferential and radial directions should be avoided. Some geometric constraints are also taken into consideration. The material yield strength is Y, and the safety factor SF is set to 2. The EBDO problem in this example is formulated as:

$$ {\displaystyle \begin{array}{l} find:{e}_R,{e}_L,{e}_t\\ {}\max :\kern1em f=\frac{4}{3}\pi {e}_R^3+\pi {e}_R^2{e}_L\\ {}s.t.\kern1.75em Pl\left\{{g}_i\left(R,L,t,P,Y\right)<0\right\}\le {p}_f,\kern1.5em i=1,\dots, 5\\ {}\kern2.75em {g}_1=1.0-\frac{P\left(R+0.5t\right) SF}{2 tY}\\ {}\kern2.75em {g}_2=1.0-\frac{P\left(2{R}^2+2 Rt+{t}^2\right) SF}{\left(2 Rt+{t}^2\right)Y}\\ {}\kern2.75em {g}_3=1.0-\frac{L+2R+2t}{60}\\ {}\kern2.75em {g}_4=1.0-\frac{R+t}{12}\\ {}\kern2.75em {g}_5=1.0-\frac{5t}{R}\\ {}\kern2.75em 0.4\le {e}_t\le 2.0\\ {}\kern2.75em 6.0\le {e}_R\le 24\\ {}\kern2.75em 12\le {e}_L\le 48\end{array}} $$
(24)
Fig. 11 Thin-walled pressure vessel

Unlike the former two examples, this example has both evidence variables (i.e., R, L, and t) and evidence parameters (i.e., P and Y). To test the improved method, which does not assume that the unknown evidence variables and parameters obey normal distributions, the BPA structures of the evidence variables and parameters are modified so that they differ from those of a normal distribution. The BPA structures are provided in Tables 7 and 8.

Table 7 BPA structures for R, L and t
Table 8 BPA structures for P and Y

Similar to example 1, the EBDO problem of the pressure vessel is solved by the three EBDO methods. For both two-stage methods, the maximum number of iterations is set to 5 for the optimization in the second stage; the maximum number of iterations for the one-stage method is also set to 5, and the improved algorithm for calculating the Pl of failure is employed in the one-stage method. Comparative results under different values of pf are presented in Table 9. Moreover, GA is used to search for relatively accurate EBDO optimums to check the accuracy of the three methods. The final EBDO optimums from GA are 6238.04 and 9665.26 for the cases with pf = 0.15 and 0.3, after 174 and 138 generations, respectively.

Table 9 Comparative results under different cases in example 3

As shown in Table 9, with a maximum of 5 iterations, the two-stage method with ND cannot obtain an EBDO optimum, whereas the RBDO optimum obtained by the improved two-stage method is very close to its EBDO optimum in each case. Thus, the improvement in the first stage of the two-stage EBDO method is important and helpful. Moreover, under pf = 0.3, the improved two-stage method requires fewer function calls than the two-stage method with ND. Therefore, the improved method is more efficient for quickly obtaining an EBDO optimum. Considering the maximization problem in (24), the EBDO optimums of the improved two-stage method are less conservative than those of the one-stage method.

For the calculation of the Pl of failure at the EBDO optimum, Table 10 gives the number of calls of each constraint function for the vertex method and the number of optimizer calls for each constraint function for the gradient-based method. As seen in Table 10, for the vertex method, the number of calls of each constraint function in the improved algorithm is much smaller than that in the traditional algorithm in all cases. For the gradient-based method, compared with the algorithm without records, the improved algorithm needs fewer optimizer calls in all cases. Hence, it is demonstrated that the improved algorithm can efficiently calculate the Pl of failure.

Table 10 Comparison of efficiency for calculating Pl of failure in example 3

4.4 Engineering example: Electronic packaging design for a smart watch

In this section, the improved two-stage framework is applied to the electronic packaging design of a smart watch, as shown in Fig. 12 (Huang et al. 2016). The optimization objective is to obtain an optimal thickness of the watch that satisfies wearing comfort. Extreme conditions involving hard impact and high temperature should be considered to keep the watch working reliably. For this purpose, three points on the screen are selected as experiment points to be hit by three identical steel balls. The material stress at each point, \( {\varGamma}_i^N \) for i = 1, 2, 3, should be less than the corresponding yield strength ΓDisplay. In order to ensure the normal operation of the watch, the maximum temperatures of the two chips, T1 and T2, are required to be less than the allowable value TChip when the operating temperature of the device is set to 50 °C. Moreover, the maximum stress ΓH of the solder should not be higher than the allowable value ΓSolder. The thicknesses of the device, main board, bracket, display, and lens are chosen as design variables, denoted by Xi for i = 1, 2, …, 5. The Young's moduli of the main board, display, and lens are selected as evidence parameters, denoted by Pi for i = 1, 2, 3. The power dissipations of the two chips are also treated as evidence parameters, denoted by P4 and P5. The BPA structures of the evidence parameters are provided in Table 11. There are six constraints, and the target failure probability of each is 0.1. The EBDO problem is formulated as follows:

$$ {\displaystyle \begin{array}{l} find:{X}_i,\kern0.5em i=1,\kern0.5em 2,\kern0.5em \dots, \kern0.5em 5\\ {}\min :\kern1em f={X}_1+{X}_2+{X}_3+{X}_4+{X}_5\\ {}s.t.\kern1.75em Pl\left\{{g}_i\left(\boldsymbol{X},\boldsymbol{P}\right)<0\right\}\le {p}_f,\kern1.5em i=1,\dots, 6\\ {}\kern2.75em {g}_1={\varGamma}^{\mathrm{Display}}-{\varGamma}_1^{\mathrm{N}}\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}\kern2.75em {g}_2={\varGamma}^{\mathrm{Display}}-{\varGamma}_2^{\mathrm{N}}\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}\kern2.75em {g}_3={\varGamma}^{\mathrm{Display}}-{\varGamma}_3^{\mathrm{N}}\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}\kern2.75em {g}_4={\varGamma}^{\mathrm{Solder}}-{\varGamma}^{\mathrm{H}}\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}\kern2.75em {g}_5={T}^{\mathrm{Chip}}-{T}_1\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}\kern2.75em {g}_6={T}^{\mathrm{Chip}}-{T}_2\left(\boldsymbol{X},\boldsymbol{P}\right)\\ {}{\varGamma}^{\mathrm{Display}}=82.0\;\mathrm{MPa},\kern0.75em {\varGamma}^{\mathrm{Solder}}=62.8\;\mathrm{MPa},\kern0.75em {T}^{\mathrm{Chip}}={95}^{\circ}\mathrm{C}\\ {}1.0\;\mathrm{mm}\le {X}_1\le 2.0\;\mathrm{mm},\kern0.5em 0.8\;\mathrm{mm}\le {X}_2\le 1.6\;\mathrm{mm},\kern0.5em 0.6\;\mathrm{mm}\le {X}_3\le 2.2\;\mathrm{mm}\\ {}1.2\;\mathrm{mm}\le {X}_4\le 2.4\;\mathrm{mm},\kern0.5em 1.2\;\mathrm{mm}\le {X}_5\le 2.4\;\mathrm{mm}\end{array}} $$
(25)
Fig. 12 A wearable smart watch

Table 11 BPA structures for P1, P2, P3, P4 and P5

In order to improve the optimization efficiency, a quadratic response surface function is established for each constraint using 65 FEM samples; the expressions of the constructed response surface functions can be found in Huang et al. (2016). Similar to the mathematical examples, the EBDO problem of the smart watch is solved by the three EBDO methods. For both two-stage methods, the maximum number of iterations is set to 10 for the optimization in the second stage; the maximum number of iterations for the one-stage method is also set to 10, and the improved algorithm for calculating the Pl of failure is employed in the one-stage method. Comparative results under pf = 0.1 are presented in Table 12. In addition, the EBDO optimum obtained by GA is 6.8046 after 164 generations.

Table 12 Comparative results under pf = 0.1 in example 4

As shown in Table 12, with a maximum of 10 iterations, neither the two-stage method with ND nor the one-stage method can obtain an EBDO optimum, whereas the RBDO optimum obtained by the improved two-stage method is close to its EBDO optimum. This demonstrates that the improvement in the first stage of the two-stage method is helpful and beneficial. Furthermore, compared with the two-stage method with ND, fewer function calls are required by the improved two-stage method. Hence, the improved method is more efficient for quickly finding an EBDO optimum.

For the calculation of the Pl of failure at the EBDO optimum, Table 13 gives the number of calls of each constraint function for the vertex method and the number of optimizer calls for each constraint function for the gradient-based method. As seen in Table 13, for the vertex method, the number of calls of each constraint function in the improved algorithm is much smaller than that in the traditional algorithm. For the gradient-based method, compared with the algorithm without records, the improved algorithm needs the same number of optimizer calls, because no focal elements of types 1 and 3 exist in the FDs of the constraints.

Table 13 Comparison of efficiency for calculating Pl of failure in example 4

4.5 Analysis and discussion

As illustrated by the improved algorithm for calculating the Pl of failure in Fig. 4, the focal elements of type 1, which intersect the limit-state curves or surfaces, have a large influence on the computational efficiency. As the number of type 1 focal elements increases, the number of limit-state function calls and the computational cost rise. Generally, the EBDO optimum is close to the limit-state curves or surfaces. For the one-stage EBDO method, the initial search region is often far from the limit-state curves or surfaces; during its initial optimization iterations, few type 1 focal elements exist and the number of limit-state function calls is small, but as the iterations proceed, the number of limit-state function calls increases rapidly. On the other hand, for the two-stage EBDO method, the RBDO optimum obtained in the first stage is in the vicinity of the EBDO optimum and is used as the initial design point in the second stage, so usually only a few iterations (such as 5–10) are required in the second stage to obtain an EBDO optimum. Because its initial design point is close to the limit-state curves or surfaces, however, the second stage needs more function calls than the one-stage method when the same small number of iterations is set (such as a maximum of 5 or 10 iterations). This can be verified by the statistics in Tables 2, 5, 9 and 12.

Additionally, to obtain a relatively accurate EBDO optimum, the maximum number of iterations needs to be increased for the one-stage method. From the statistics in Tables 2, 5, 9 and 12, it can be found that many more function calls are then required by the one-stage method, although relatively accurate EBDO optimums can be obtained.

For the two-stage method with ND, it is assumed that each dimension of the FD is equal to a certain multiple of the standard deviation of a hypothetical normal distribution. Therefore, the RBDO optimums obtained by the two-stage method with ND may be farther from the limit-state curves or surfaces than those obtained by the improved method. As shown in Tables 2, 5, 9 and 12, compared with the two-stage method with ND, the RBDO optimum obtained by the improved method is closer to its EBDO optimum in each case, which proves that the improvement in the first stage is beneficial. On the other hand, as shown in Tables 3, 6, 10 and 13, the improved algorithm calculates the Pl of constraint failure more efficiently than the existing algorithm in Mourelatos and Zhou (2006). Also, the improved method requires fewer function calls in the second stage than the two-stage method with ND in all cases, which proves that the improvement in the second stage is also beneficial.

Meanwhile, as shown in Tables 2, 5, 9 and 12, with the same number of iterations in the second stage, the improved two-stage method can find the EBDO optimums in all examples, while the two-stage method with ND fails in the pressure vessel and smart watch examples. Moreover, in the mathematical and beam examples, compared with the two-stage method with ND, the improved method needs fewer total function calls except in the single case of the mathematical example with pf = 0.01. In these two examples, the improved method obtains less conservative EBDO optimums than the two-stage method with ND. Nevertheless, the EBDO optimums obtained by the improved method are the closest to those obtained by GA, which are used as the reference values to check the accuracy of all the methods. Overall, the improved two-stage method is more efficient and the accuracy of its solutions can be ensured.

5 Conclusions and further work

This paper makes two improvements on the original two-stage framework of EBDO in Mourelatos and Zhou (2006): (1) the first stage is improved to remove the assumption that the unknown evidence variables and parameters obey normal distributions; specifically, the equal areas method is employed to convert evidence variables into random variables, and an RBDO problem with random variables is then defined and solved by SORA; (2) in the second stage, the original algorithm for calculating the Pl of constraint violation is improved by continuously recording the minimum and maximum values of the limit-state functions to achieve higher computational efficiency. Numerical and engineering examples are given to test the advantages of the improved two-stage framework of EBDO. Results show that the RBDO optimum obtained in the first stage by the improved framework is very close to the EBDO optimum of the second stage, so the improvement in the first stage is beneficial for quickly obtaining the EBDO optimum. Meanwhile, the higher efficiency of the improved algorithm for calculating the Pl of constraint failure is demonstrated. Overall, the improved two-stage framework is more efficient for obtaining an accurate EBDO optimum.

For multimodal optimization problems, the RBDO solution of the first stage may not be in the vicinity of the actual EBDO optimum. As a global optimizer, DIRECT has the potential to find the actual EBDO optimum even if the initial search region is not in its vicinity, but a large computational cost may then be needed. On the other hand, the SORA method and the DIRECT optimizer used in the improved two-stage method cannot solve multi-objective problems. Thus, the improved two-stage framework of EBDO is not suitable for multimodal or multi-objective optimization problems under epistemic uncertainty.

Additionally, although the computational efficiency of the two-stage framework is improved significantly, the number of objective and constraint function calls is still large. As discussed in Section 4.5, the number of focal elements that intersect the limit-state curves or surfaces has a large influence on the number of limit-state function calls. Generally, the total number of function calls increases with the number of evidence variables and parameters. For high-dimensional problems involving many evidence variables and parameters (such as \( {10}^2 \) to \( {10}^3 \)), a huge computational cost would be incurred. In practical engineering applications, metamodeling techniques need to be used to replace computer-based simulations in the improved two-stage framework.

As part of further work, some other approaches for design under uncertainty can be taken into account in the two-stage framework, such as partially converged simulations, fusion with experimental data, and epistemic uncertainty in computer models.