1 Introduction

Traditional deterministic optimization methods have been widely applied to improve the performance of structures in engineering design. However, without considering any uncertainties, the optimal design obtained from deterministic optimization may be sensitive to variations. The structure may be either risky, when the design has a low probability of constraint satisfaction, or uneconomic, when a high safety factor is used (Li et al. 2013). Reliability-based design optimization (RBDO) addresses this problem by quantitatively accounting for the uncertainties.

Many kinds of RBDO methods have been proposed; they can be divided into double-loop methods, single-loop methods and decoupling methods (Aoues and Chateauneuf 2010; Valdebenito and Schueller 2010). Earlier RBDO methods employ a double-loop structure in which a design optimization loop and a reliability analysis loop are nested (Yu et al. 1997; Youn et al. 2003; Youn et al. 2005). The outer loop is the design optimization loop, where the design space is explored to obtain the updated design. The inner loop is the reliability analysis loop, where the failure probability is calculated. For each iteration of the design optimization loop, a reliability analysis involving a number of performance function evaluations is needed; therefore the computational cost of double-loop methods is usually very high. Single-loop methods (Kharmanda et al. 2002; Li et al. 2013) have only one loop, in which the reliability analysis is replaced by its first-order KKT necessary optimality conditions. Single-loop methods are usually very efficient for linear and moderately nonlinear performance functions, but for highly nonlinear problems they may diverge. Decoupled methods (Du et al. 2008; Chen et al. 2013) separate the reliability analysis loop from the design optimization loop, converting the RBDO problem into a sequence of deterministic optimization problems; they can thus achieve a good balance between accuracy and efficiency. Decoupled methods fall into two categories: methods based on shifting vectors (Du and Chen 2004; Chen et al. 2013; Huang et al. 2016) and methods based on sequential approximate programming (SAP) (Cheng et al. 2006; Yi et al. 2008; Chen et al. 2014; Li et al. 2016). The former uses shifting vectors to convert the probabilistic constraints into equivalent deterministic constraints.
The latter uses the Taylor expansion of the failure probability \( {P}_f\left({\boldsymbol{\mu}}^k\right) \) at the current design to decompose the RBDO problem into a sequence of deterministic sub-optimization problems.

In modern engineering practice, the implicit performance functions that are often encountered further increase the difficulty of solving the RBDO problem. To reduce the high computational time, cost, and/or risk of real-life experiments, computer simulations such as finite element analysis (FEA) and computational fluid dynamics (CFD) have been widely used. However, directly calling these simulations is still very time-consuming. Therefore, surrogate models have been introduced to approximate the original simulator in RBDO (Crombecq et al. 2011). Youn and Choi (2004) adopted the response surface method (RSM) to solve the RBDO problem. Cheng et al. (2006) used a sequential approximate programming strategy to solve reliability-based structural optimization problems. Cheng and Li (2008) proposed a reliability analysis method using artificial neural network (ANN)-based genetic algorithms. Bichon et al. (2008) proposed the efficient global reliability analysis (EGRA) method, which accurately characterizes nonlinear implicit limit state functions throughout the random variable space. Zhao et al. (2011) proposed a dynamic Kriging method for design optimization. Echard et al. (2011) combined the Kriging model with Monte Carlo simulation to calculate the failure probability. Wang and Wang (2014) developed the maximum confidence enhancement (MCE)-based sequential sampling approach for RBDO using surrogate models. Chen et al. used a local sampling method with Kriging to improve the accuracy and efficiency of RBDO (Chen et al. 2014; Li et al. 2015).

Though surrogate models have obvious advantages in solving the RBDO problem, a number of computationally expensive high-fidelity (HF) analyses are still needed (Sun et al. 2010). To alleviate the high cost of HF analyses, an efficient alternative termed the variable fidelity (VF) method has been proposed. In such an approach, the computationally cheaper low fidelity (LF) model is adopted to capture the behavior of the HF model over the entire design domain, and some HF samples are used to ensure the accuracy of the approximation in important regions (Haftka 1991). Thus, the accuracy of the implicit constraints is ensured using a large amount of LF data and a small number of expensive HF data (Forrester and Keane 2009).

The VF method originated with Haftka, who proposed the global–local approximation (GLA) method to combine an inexact LF model with a refined HF model (Haftka 1991). Gano et al. applied variable fidelity methods in conjunction with the double-loop method to reduce the computational cost of RBDO (Gano et al. 2006). Kandasamy et al. used variable fidelity methods for the resistance optimization of a waterjet-propelled Delft catamaran (Kandasamy et al. 2013). Simpson et al. presented a review of variable fidelity methods (Simpson et al. 2008).

Though the VF method has been widely used in deterministic design optimization, its application in RBDO is still limited. In Gano's method, the performance measure approach (PMA) is used to solve the RBDO problem, which is inefficient because of its double-loop structure. Moreover, the adaptive hybrid scaling method in Gano's method fails to make full use of the existing HF samples, which may lead to an inexact RBDO solution.

In this paper, the variable fidelity method is applied to solve the RBDO problem. A hybrid scaling method based on least squares is proposed, which uses the HF function values and gradients at the design points. Monte Carlo simulation is used to calculate the failure probability and its gradient at the current design. Sequential linear programming (SLP) is adopted to compute the next design, and the step size of each sub-optimization problem is determined by the target reliability index and the influence domain at the current design.

The remainder of this contribution is organized as follows: a brief survey of VF-RBDO methods is presented in Section 2, followed by a detailed explanation of the proposed approach in Section 3. Two mathematical examples and the shape optimization of a curved beam are used to demonstrate the performance of the proposed method in Section 4. Finally, conclusions are drawn in Section 5.

2 Commonly used methods in VF-RBDO

In engineering applications, the HF model is obtained through computationally intensive numerical simulation or physical experiments, and the LF model can be obtained through empirical equations, simplified theories or coarser models (Simpson et al. 2008). Thus, compared to the HF model, the computational cost of the LF model is much lower. By combining the modeling efficiency of the LF model with the modeling accuracy of the HF model, a VF model can be constructed.

2.1 Description of VF-RBDO method

In VF-RBDO, the implicit performance function is replaced by the VF model and the formulation is as follows:

$$ \begin{array}{l}\mathrm{find}:\kern1.5em {\boldsymbol{\mu}}_{\boldsymbol{X}}\\ {} \min :\kern1.5em f\left({\boldsymbol{\mu}}_{\boldsymbol{X}}\right)\\ {}\mathrm{s}.\mathrm{t}.:\kern2em P\left({g}_{VF}\left(\boldsymbol{X}\right)\le 0\right)-\varPhi \left(-{\beta}^t\right)\le 0\\ {}\kern3.5em {\boldsymbol{\mu}}_{\boldsymbol{X}}^L\le {\boldsymbol{\mu}}_{\boldsymbol{X}}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}^U\end{array} $$
(1)

where \( \boldsymbol{X} \) is the vector of random design variables and \( {\boldsymbol{\mu}}_{\boldsymbol{X}} \) denotes the mean of \( \boldsymbol{X} \); \( f\left(\cdot \right) \) is the objective function; \( {g}_{VF}\left(\boldsymbol{X}\right) \) is the VF performance function obtained from the LF model and some HF samples; \( P\left(\cdot \right) \) is the probability operator and \( {\beta}^t \) denotes the target reliability index; \( \varPhi \left(\cdot \right) \) is the cumulative distribution function of the standard normal distribution. Superscripts "L" and "U" denote the lower and upper bounds.

2.2 Failure probability and its gradient calculation using MCS

Failure probability calculation is critical in RBDO; both most probable point (MPP) based methods and numerical simulation methods can be used. Though MPP-based methods are very efficient, their lack of accuracy limits their application in VF-RBDO. Conversely, numerical simulation methods are usually very accurate, but their efficiency is low because many computationally expensive HF evaluations must be called directly to conduct the reliability analysis. If the computationally affordable VF model is adopted to replace the HF model, the cost of numerical simulation methods can be significantly reduced.

As the most commonly used numerical simulation method, Monte Carlo simulation (MCS) proceeds by realizing the random variables and determining whether a particular event occurs for each simulation instance (Li et al. 2010). The ratio of the number of failures to the total number of samples is taken as the failure probability \( {P}_f\left({\boldsymbol{\mu}}^k\right) \). In practical engineering the failure probability is usually very low, so a reasonable estimate can only be achieved after many simulations, which makes MCS computationally expensive. However, when combined with a VF model, the computational cost of MCS becomes affordable. The gradient \( \frac{\partial {P}_f\left({\boldsymbol{\mu}}^k\right)}{\partial \boldsymbol{\mu}} \) of the failure probability \( {P}_f\left({\boldsymbol{\mu}}^k\right) \) can be calculated as follows:

$$ \begin{array}{l}\frac{\partial {P}_f\left({\boldsymbol{\mu}}^k\right)}{\partial \boldsymbol{\mu}}=\frac{\partial }{\partial \boldsymbol{\mu}}{\displaystyle {\int}_{g\left(\boldsymbol{X}\right)\le 0}{f}_{\boldsymbol{X}}\left(\boldsymbol{X}\right)}d\boldsymbol{X}\\ {}\kern4.55em ={\displaystyle {\int}_{g\left(\boldsymbol{X}\right)\le 0}\frac{\partial {f}_{\boldsymbol{X}}\left(\boldsymbol{X}\right)}{\partial \boldsymbol{\mu}}}d\boldsymbol{X}\\ {}\kern4.55em ={\displaystyle {\int}_{g\left(\boldsymbol{X}\right)\le 0}\frac{\partial {f}_{\boldsymbol{X}}\left(\boldsymbol{X}\right)}{\partial \boldsymbol{\mu}}\frac{1}{f_{\boldsymbol{X}}\left(\boldsymbol{X}\right)}}{f}_{\boldsymbol{X}}\left(\boldsymbol{X}\right)d\boldsymbol{X}\\ {}\kern4.55em =\frac{1}{N}{\displaystyle \sum_{i=1}^N\frac{I_F\left({\boldsymbol{X}}^i\right)}{f_{\boldsymbol{X}}\left({\boldsymbol{X}}^i\right)}}\frac{\partial {f}_{\boldsymbol{X}}\left({\boldsymbol{X}}^i\right)}{\partial \boldsymbol{\mu}}\end{array} $$
(2)

where \( \boldsymbol{X} \) denotes the random design variables, \( {f}_{\boldsymbol{X}}\left(\boldsymbol{X}\right) \) represents the joint probability density function of \( \boldsymbol{X} \), and N is the number of test points in MCS. \( {I}_F\left(\boldsymbol{X}\right)=\left\{\begin{array}{ll}1, & g\left(\boldsymbol{X}\right)\le 0\\ {}0, & g\left(\boldsymbol{X}\right)>0\end{array}\right. \) is the indicator function of the failure event.

The test points \( {\boldsymbol{X}}^i,\ i=1,\cdots, N \) in Eq. (2) are the same as those used for estimating the failure probability \( {P}_f\left({\boldsymbol{\mu}}^k\right) \). In other words, calculating the gradient \( \frac{\partial {P}_f\left({\boldsymbol{\mu}}^k\right)}{\partial \boldsymbol{\mu}} \) requires no additional MCS runs; it can be obtained while calculating the failure probability. Details about Eq. (2) can be found in (Song et al. 2009).
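As a concrete illustration, the estimator in Eq. (2) can be sketched for independent Gaussian design variables, for which \( \partial {f}_{\boldsymbol{X}}/\partial {\mu}_j={f}_{\boldsymbol{X}}\cdot \left({x}_j-{\mu}_j\right)/{\sigma}_j^2 \), so the likelihood-ratio weight simplifies and the same sample set yields both the failure probability and its gradient. This is a minimal sketch; the function name and interface are illustrative, not from the original work.

```python
import numpy as np

def mcs_pf_and_gradient(g, mu, sigma, n_samples=100_000, seed=0):
    """Estimate P_f and dP_f/dmu with one shared MCS sample set.

    Assumes independent Gaussian design variables X_j ~ N(mu_j, sigma_j^2),
    for which d f_X / d mu_j = f_X * (x_j - mu_j) / sigma_j^2, so the
    likelihood-ratio weight in Eq. (2) reduces to (x - mu) / sigma^2.
    `g` is any performance function; failure is g(X) <= 0.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    fail = g(X) <= 0.0                      # indicator I_F for each sample
    pf = fail.mean()                        # failure probability estimate
    # Score function (x - mu)/sigma^2, averaged with the indicator weight
    grad = (fail[:, None] * (X - mu) / sigma**2).mean(axis=0)
    return pf, grad
```

For a linear limit state such as g(X) = X1 + X2 with zero means and unit variances, the estimate approaches P_f = 0.5 and the analytic gradient component \( -\varphi(0)/\sqrt{2}\approx -0.28 \), which matches the sampled values.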

2.3 Commonly used scaling method in variable-fidelity model

The key of the VF technique is to use the difference between the LF model and the HF model at a few points to correct the LF model at other points (Sun et al. 2010). To approximate this difference accurately, many scaling methods have been proposed, among which multiplicative scaling (Haftka 1991; Chang et al. 1993) and additive scaling (Lewis and Nash 2000; Kandasamy et al. 2013) are the most commonly used.

2.3.1 Multiplicative scaling method

To exploit the advantages of the LF model in global prediction and of the HF samples in local correction, Haftka proposed the multiplicative scaling method (Haftka 1991). In this method, the ratio of the HF function value to the LF function value at a given point \( {\boldsymbol{x}}_n \) is termed the scaling factor. To approximate the scaling factor at other points, a first-order Taylor expansion is used:

$$ \widehat{\alpha \left(\boldsymbol{x}\right)}=\alpha \left({\boldsymbol{x}}_n\right)+\nabla \alpha {\left({\boldsymbol{x}}_n\right)}^T\left(\boldsymbol{x}-{\boldsymbol{x}}_n\right) $$
(3)

where \( \alpha \left({\boldsymbol{x}}_n\right) \) is the multiplicative scaling factor at \( {\boldsymbol{x}}_n \). In Eq. (3), the gradient at the current design \( {\boldsymbol{x}}_n \) is obtained as follows:

$$ \nabla \alpha \left({\boldsymbol{x}}_n\right)=\left[\begin{array}{l}\frac{f_l\left({\boldsymbol{x}}_n\right)\frac{\partial {f}_h}{\partial {x}_1}-{f}_h\left({\boldsymbol{x}}_n\right)\frac{\partial {f}_l}{\partial {x}_1}}{f_l{\left({\boldsymbol{x}}_n\right)}^2}\\ {}\kern3.75em .\\ {}\kern3.75em .\\ {}\kern3.75em .\\ {}\frac{f_l\left({\boldsymbol{x}}_n\right)\frac{\partial {f}_h}{\partial {x}_m}-{f}_h\left({\boldsymbol{x}}_n\right)\frac{\partial {f}_l}{\partial {x}_m}}{f_l{\left({\boldsymbol{x}}_n\right)}^2}\end{array}\right] $$
(4)

Using Eq. (3) and Eq. (4), consistency in both function value and gradient between the VF model and the HF model is guaranteed at the current design \( {\boldsymbol{x}}_n \).

In the multiplicative scaling method, the VF model \( {f}_{VF}\left(\boldsymbol{x}\right) \) adopted to approximate the real HF model is formulated as

$$ {f}_{VF}\left(\boldsymbol{x}\right)=\widehat{\alpha \left(\boldsymbol{x}\right)}{f}_l\left(\boldsymbol{x}\right) $$
(5)
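Eqs. (3)–(5) can be sketched in a few lines; the helper below takes the LF model as callables and the already-evaluated HF value and gradient at \( {\boldsymbol{x}}_n \). The interface is illustrative, assuming a nonzero LF value at \( {\boldsymbol{x}}_n \).

```python
import numpy as np

def multiplicative_vf(f_l, grad_f_l, f_h_xn, grad_f_h_xn, x_n):
    """Build f_VF(x) = alpha_hat(x) * f_l(x) per Eqs. (3)-(5).

    f_l / grad_f_l are callables for the LF model; f_h_xn and grad_f_h_xn
    are the (expensive) HF value and gradient already evaluated at x_n.
    """
    fl_n = f_l(x_n)
    alpha_n = f_h_xn / fl_n                          # scaling factor at x_n
    # Quotient rule, Eq. (4): (f_l*grad_f_h - f_h*grad_f_l) / f_l^2
    grad_alpha = (fl_n * grad_f_h_xn - f_h_xn * grad_f_l(x_n)) / fl_n**2

    def f_vf(x):
        alpha_hat = alpha_n + grad_alpha @ (x - x_n)  # first-order Taylor, Eq. (3)
        return alpha_hat * f_l(x)
    return f_vf
```

By construction the VF model reproduces the HF value and gradient at \( {\boldsymbol{x}}_n \); when the HF/LF ratio happens to be constant (e.g. one model is a scalar multiple of the other), the correction is exact everywhere.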

2.3.2 Additive scaling method

In the additive scaling method, the VF model is expressed as the sum of the LF model and an additive scaling function:

$$ {f}_{VF}\left(\boldsymbol{x}\right)={f}_l\left(\boldsymbol{x}\right)+\gamma \left(\boldsymbol{x}\right) $$
(6)

where \( \gamma \left(\boldsymbol{x}\right) \) is the additive scaling function, which approximates the difference between the LF model and the HF model. Similar to the multiplicative scaling method, to ensure consistency in function value and gradient between the VF model and the HF model at the current design \( {\boldsymbol{x}}_n \), \( \gamma \left(\boldsymbol{x}\right) \) is approximated as

$$ \widehat{\gamma \left(\boldsymbol{x}\right)}=\gamma \left({\boldsymbol{x}}_n\right)+\nabla \gamma {\left({\boldsymbol{x}}_n\right)}^T\left(\boldsymbol{x}-{\boldsymbol{x}}_n\right) $$
(7)

where the gradient at the current design \( {\boldsymbol{x}}_n \) is obtained using the following formulation:

$$ \nabla \gamma \left({\boldsymbol{x}}_n\right)=\nabla {f}_h\left({\boldsymbol{x}}_n\right)-\nabla {f}_l\left({\boldsymbol{x}}_n\right) $$
(8)
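The additive counterpart of Eqs. (6)–(8) is even simpler, since the discrepancy and its gradient are plain differences. Again the interface is a sketch, mirroring the multiplicative helper.

```python
import numpy as np

def additive_vf(f_l, grad_f_l, f_h_xn, grad_f_h_xn, x_n):
    """Build f_VF(x) = f_l(x) + gamma_hat(x) per Eqs. (6)-(8)."""
    gamma_n = f_h_xn - f_l(x_n)                   # HF-LF discrepancy at x_n
    grad_gamma = grad_f_h_xn - grad_f_l(x_n)      # Eq. (8)

    def f_vf(x):
        gamma_hat = gamma_n + grad_gamma @ (x - x_n)   # Eq. (7)
        return f_l(x) + gamma_hat
    return f_vf
```

When the HF model differs from the LF model by a constant shift, the additive correction is exact over the whole domain, which is exactly the regime where additive scaling outperforms multiplicative scaling.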

3 Proposed VF-RBDO method using least squares hybrid scaling

In this paper, a VF-SLP framework is proposed to solve the RBDO problem. A VF model is introduced to approximate the computationally expensive experiments or computer simulations, and SLP is adopted to decouple the complicated double-loop structure of RBDO.

The key of the VF technique is to use the difference between the LF model and the HF model at a few points to correct the LF model at other points. Reasonable choices of the HF samples and the scaling method are therefore very important. The design points, which are critical in RBDO, are selected for HF evaluation, and a new scaling method using all evaluated HF points around the current design is proposed to enhance the accuracy and efficiency of the VF model.

3.1 Least squares hybrid scaling method

In some cases the multiplicative scaling method may perform better than the additive scaling method, while in other cases the additive scaling method may be preferable. To exploit the advantages of both, Gano et al. proposed the adaptive hybrid scaling (AHS) method (Gano et al. 2006). By using a weight coefficient to adaptively combine the multiplicative and additive scaling methods, the AHS method maintains the Taylor series matching and thus retains the convergence properties. In the AHS method, the VF model \( {f}_{VF}\left(\boldsymbol{x}\right) \) is formulated as follows:

$$ {f}_{VF}(x)=w\alpha (x){f}_l(x)+\left(1-w\right)\left({f}_l(x)+\gamma (x)\right) $$
(9)

In Eq. (9), the multiplicative scaling factor α(x) and the additive scaling factor γ(x) can be calculated using formulations in Section 2.3.

The weight coefficient w is calculated using the previously evaluated HF point as follows:

$$ w=\frac{\widehat{f_h\left(x\right)}-\left({f}_l(x)+\gamma (x)\right)}{\alpha (x){f}_l(x)-\left({f}_l(x)+\gamma (x)\right)} $$
(10)

Using Eq. (9) and Eq. (10), the VF model passes through the current and previous designs, and therefore has higher accuracy in the regions around these two points. However, in Gano's method only these two points are used to construct the VF model. If the optimal design lies far from them, a larger modeling error arises and an inexact optimum may be obtained (see Fig. 1). In other words, the AHS method cannot guarantee the accuracy of the VF model in the region around the optimal design. This disadvantage can be overcome if more of the evaluated HF samples around the current design are utilized. Based on this idea, a new scaling method using all the evaluated HF points in a small region around the current design (the size of this region is determined in Section 3.3) is proposed here.

Fig. 1 The comparison between AHS and LSHS method

Different from the weight coefficient formulation in Eq. (10), in the proposed method w is calculated by solving a least-squares problem:

$$ \min \kern0.5em {\displaystyle \sum_{i=1}^n{\left[{f}_h\left({x}_i\right)-\left(w\alpha \left({x}_i\right){f}_l\left({x}_i\right)+\left(1-w\right)\left({f}_l\left({x}_i\right)+\gamma \left({x}_i\right)\right)\right)\right]}^2} $$
(11)

where n is the number of evaluated HF points around the current design (e.g., n = 3 in Fig. 1).
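Because Eq. (11) is quadratic in the single unknown w, it has a closed-form solution: writing the residual as \( r_i-w\,d_i \) with \( d_i=\alpha({x}_i){f}_l({x}_i)-({f}_l({x}_i)+\gamma({x}_i)) \) and \( r_i={f}_h({x}_i)-({f}_l({x}_i)+\gamma({x}_i)) \), the minimizer is \( w=\sum d_i r_i/\sum d_i^2 \). A sketch (illustrative interface) is:

```python
import numpy as np

def lshs_weight(f_h_vals, mult_vals, add_vals):
    """Solve the 1-D least-squares problem of Eq. (11) in closed form.

    mult_vals[i] = alpha(x_i)*f_l(x_i) and add_vals[i] = f_l(x_i)+gamma(x_i)
    are the multiplicative and additive VF predictions at the n evaluated
    HF points; f_h_vals are the HF values there.
    """
    d = np.asarray(mult_vals) - np.asarray(add_vals)
    r = np.asarray(f_h_vals) - np.asarray(add_vals)
    # Residual f_h - (w*mult + (1-w)*add) = r - w*d, so w = <d,r>/<d,d>
    return float(d @ r / (d @ d))
```

Note that with n = 1 this reduces to the single-point ratio in Eq. (10); with several points it averages the information from all evaluated HF samples in the subspace.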

In Fig. 1, curves HF and LF denote the HF model and LF model, respectively. Curve VF1 is the VF model constructed using the AHS method, and curve VF2 is the VF model constructed using the LSHS method. \( {x}^{*} \) is the optimal design and \( {x}_i \) is the design at the ith iteration of RBDO.

In the 6th iteration, the AHS VF model is constructed using the current design \( {x}_6 \) and the previous design \( {x}_5 \), and is therefore highly accurate in the region around \( {x}_5 \) and \( {x}_6 \). However, without considering the earlier iteration point \( {x}_4 \), the VF model VF1 is not accurate enough at the optimal design \( {x}^{*} \), which lies far from \( {x}_5 \) and \( {x}_6 \). Unlike the AHS method, the proposed method uses more samples (such as \( {x}_4 \)) to construct the VF model. Therefore, the VF model based on LSHS has a much smaller error in the region around the optimal design, which guarantees an accurate RBDO solution (see Fig. 1).

3.2 VF-RBDO using sequential linear programming

After the VF model using the LSHS method is constructed to approximate the implicit performance function in RBDO, MCS is used to calculate the failure probability and its gradient. Then sequential linear programming (SLP) is adopted to calculate the next design. SLP is selected here because it is efficient and easy to implement, requiring only the first derivatives for a Taylor expansion (Okamoto et al. 2015). In this paper, the first derivatives of the failure probability are obtained using MCS.

In the SLP approach, the original VF-RBDO problem is decomposed into a sequence of sub-optimization problems. Each sub-optimization problem, which consists of approximated probabilistic constraints, is solved in a reduced design space. The formulation of VF-RBDO using sequential linear programming is:

$$ \begin{array}{l}\mathrm{f}\mathrm{o}\mathrm{r}\kern0.5em k=1,2,\dots \\ {}\mathrm{f}\mathrm{ind}:\kern1.5em {\boldsymbol{\mu}}_{\boldsymbol{X}}\\ {} \min :\kern1.5em f\left({\boldsymbol{\mu}}_{\boldsymbol{X}}\right)\\ {}\mathrm{s}.\mathrm{t}.:\kern2em P\left({\widehat{g}}^k\left(\boldsymbol{X}\right)\le 0\right)-\varPhi \left(-{\beta}^t\right)\le 0\\ {}\kern3.5em {\boldsymbol{\mu}}_{\boldsymbol{X}}^L\le {\boldsymbol{\mu}}_{\boldsymbol{X}}^{Lk}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}^{Uk}\le {\boldsymbol{\mu}}_{\boldsymbol{X}}^U\end{array} $$
(12)

3.3 Size of sub-optimization problem

To determine the reduced design space (termed the subspace in this paper) for each sub-optimization problem, a new strategy using the target reliability index \( {\beta}^t \) and the influence domain at the current design is proposed. \( {\beta}^t \) is the required reliability level in U-space; therefore, defining the subspace in SLP involves a transformation from U-space to the original design space. For a random variable following a Gaussian distribution, the transformation is

$$ -{\beta}^t\le \boldsymbol{u}\le {\beta}^t\Rightarrow -{\beta}^t\le \frac{\boldsymbol{\mu} -{\boldsymbol{\mu}}_c}{\sigma}\le {\beta}^t\Rightarrow -{\beta}^t*\sigma +{\boldsymbol{\mu}}_c\le \boldsymbol{\mu} \le {\beta}^t*\sigma +{\boldsymbol{\mu}}_c $$
(13)

where \( {\boldsymbol{\mu}}_c \) is the current design point.

For random variables with other distributions, the transformation can be found in Ref. (Song 2013).

The update formulas for the subspace bounds are as follows:

$$ \begin{array}{l}{\boldsymbol{\mu}}_{\boldsymbol{X}}^{Lk}= \max \left({\boldsymbol{\mu}}_{\boldsymbol{X}}^L,{\boldsymbol{\mu}}_c-c* \max \left({\beta}^t\right)*\sigma \right)\\ {}{\boldsymbol{\mu}}_{\boldsymbol{X}}^{Uk}= \min \left({\boldsymbol{\mu}}_{\boldsymbol{X}}^U,{\boldsymbol{\mu}}_c+c* \max \left({\beta}^t\right)*\sigma \right)\end{array} $$
(14)

In Eq. (14), the adjustment coefficient c guarantees that the \( {\beta}^t \) circle, which is crucial in reliability analysis, is contained in the sub-optimization process. Since different target reliability indices may be involved in multi-constraint problems, the maximum reliability index \( {\beta}^t \) is used here.
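The bound update of Eqs. (13)–(14) amounts to clipping a symmetric move limit around the current design to the global bounds; a minimal sketch for the Gaussian case (illustrative interface) is:

```python
import numpy as np

def subspace_bounds(mu_c, sigma, mu_L, mu_U, beta_t_max, c):
    """Reduced design space for one SLP sub-problem, per Eqs. (13)-(14).

    The move limit c*beta_t*sigma keeps the beta_t circle around the
    current design mu_c inside the subspace (Gaussian variables assumed);
    the result is clipped to the global bounds [mu_L, mu_U].
    """
    half = c * beta_t_max * sigma
    lower = np.maximum(mu_L, mu_c - half)
    upper = np.minimum(mu_U, mu_c + half)
    return lower, upper
```

For example, with \( {\mu}_c=0.5 \), \( \sigma =0.1 \), \( {\beta}^t=2 \) and c = 1.5, the subspace is [0.2, 0.8] inside global bounds [0, 1].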

To determine the coefficient c in Eq. (14), the possible modeling error in the VF model construction should be considered. The concept of an influence domain at the current design is therefore proposed. Three situations are considered:

(1) If only one HF sample exists in the subspace of SLP, the size of the influence domain is determined only by the current design. Thus c is a constant, which is set to 1.5 here.

(2) As the number of HF samples in the subspace of SLP increases, the approximation accuracy of the VF model around the current design increases. In this situation, the size of the influence domain should be determined by all the HF samples in this region, and a larger c should be used.

(3) As the difference between the VF and HF function values at the HF samples in the subspace of SLP increases, the approximation accuracy around the current design decreases. Therefore the size of the influence domain should be reduced, and a smaller c should be used.

Taking all three situations into consideration, the adjustment coefficient c is calculated as

$$ c=1.5*\left(1+1/\left(\frac{1}{n}+{\displaystyle \sum_{i=1}^n\frac{\left\Vert {f}_{VF}\left({x}_i\right)-{f}_h\left({x}_i\right)\right\Vert }{f_h\left({x}_i\right)}}\right)\right) $$
(15)

Note that 1.5 is selected as the amplification coefficient for c for the following reasons:

(1) A set of different coefficients for c was tested on several examples, and good solutions were always achieved with 1.5.

(2) In Zhao's paper "Response surface method using sequential sampling for reliability-based design optimization" (Zhao et al. 2009), the coefficient for the sub-design space of sequential sampling and optimization is 1.2~1.5; 1.5 is used here to give a conservative solution.

Using Eq. (15), as the number of HF samples and the modeling accuracy in the region around the current design increase, the size of the influence domain at the current design increases, and a larger design space is used to solve the sub-optimization problem in SLP.
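Eq. (15) can be sketched directly; the interface is illustrative, and the absolute value of the HF response is used in the denominator here as a safeguard (an assumption on our part) so that c stays positive for negative responses.

```python
import numpy as np

def adjustment_coefficient(f_vf_vals, f_h_vals):
    """Adjustment coefficient c of Eq. (15).

    f_vf_vals / f_h_vals are the VF predictions and HF values at the n
    evaluated HF samples in the current subspace.  Small summed relative
    error and many samples -> larger c -> larger influence domain.
    """
    f_vf = np.asarray(f_vf_vals, dtype=float)
    f_h = np.asarray(f_h_vals, dtype=float)
    n = len(f_h)
    rel_err = np.sum(np.abs(f_vf - f_h) / np.abs(f_h))  # summed relative error
    return 1.5 * (1.0 + 1.0 / (1.0 / n + rel_err))
```

For instance, two samples with a summed relative error of 1 give c = 1.5·(1 + 1/(0.5 + 1)) = 2.5, whereas a larger error shrinks c toward the baseline 1.5.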

3.4 Flowchart and procedures of the proposed method

The flowchart of the proposed VF-SLP framework using the least squares hybrid scaling method is shown in Fig. 2. The procedure is as follows:

Fig. 2 Flowchart of the OSV-RI method

(1) Initialize the design variables \( {\boldsymbol{\mu}}_{\boldsymbol{X}}^0 \).

(2) Calculate the HF function value and gradient at the current design. In this step, computationally expensive numerical simulations or costly physical experiments are performed to obtain the HF information.

(3) Determine the size of the design space for the sub-optimization problem in the VF-SLP framework, using the method developed in Section 3.3.

(4) Scale the LF model to obtain the VF model, and replace the implicit performance function with the VF model. In this step, the proposed LSHS scaling method is adopted.

(5) Calculate the failure probability and its gradient using MCS, and substitute the results into Eq. (12). Solve the sub-optimization problem to obtain the next design.

(6) If converged, stop. Otherwise, set k = k + 1 and go back to step (2).
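To make the flow of steps (1)–(6) concrete, the loop can be sketched end-to-end on a deliberately trivial 1-D problem: minimize μ subject to \( P\left({g}_h(X)\le 0\right)\le \varPhi \left(-{\beta}^t\right) \) with \( {g}_h(x)=x-1 \) (treated as the expensive HF model), \( {g}_l(x)=x-0.7 \), and \( X\sim N\left(\mu, {0.1}^2\right) \). Additive scaling is used in place of LSHS so the VF model is exact and the sketch isolates the SLP/MCS machinery; all names and the toy problem are illustrative, not from the original work. The analytical optimum is μ* = 1 + β^t·σ = 1.2.

```python
import numpy as np

def vf_slp_demo(mu0=1.05, sigma=0.1, beta_t=2.0, n_mcs=200_000, seed=1):
    """Toy VF-SLP loop: steps (1)-(6) on a 1-D linear limit state."""
    from math import erf
    rng = np.random.default_rng(seed)
    eps = 0.5 * (1.0 - erf(beta_t / np.sqrt(2.0)))   # Phi(-beta_t)
    g_h = lambda x: x - 1.0                          # "expensive" HF model
    g_l = lambda x: x - 0.7                          # cheap LF model
    mu = mu0
    for _ in range(30):
        # Step (2): HF value at current design; step (4): additive VF model
        gamma = g_h(mu) - g_l(mu)                    # HF-LF discrepancy
        g_vf = lambda x: g_l(x) + gamma              # gradients match too
        # Step (5): MCS failure probability and its gradient (Eq. (2))
        X = rng.normal(mu, sigma, n_mcs)
        fail = g_vf(X) <= 0.0
        pf = fail.mean()
        dpf = (fail * (X - mu) / sigma**2).mean()
        if abs(dpf) < 1e-12:
            break                                    # no failures sampled
        # Step (3): move limits; linearized sub-problem solved in closed form
        lb, ub = mu - 1.5 * beta_t * sigma, mu + 1.5 * beta_t * sigma
        mu_new = float(np.clip(mu + (eps - pf) / dpf, lb, ub))
        if abs(mu_new - mu) < 2e-3:                  # step (6): convergence
            return mu_new
        mu = mu_new
    return mu
```

Starting from μ = 1.05 the iterates climb toward the reliable optimum μ* = 1.2 within a handful of linearized sub-problems, up to MCS sampling noise.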

4 Application

In order to verify the accuracy and efficiency of the proposed method, two numerical examples and the shape optimization problem of a curved beam are tested and compared. The comparison methods are the additive scaling method (ADD), the multiplicative scaling method (MULTI) and the adaptive hybrid scaling (AHS) method. The design results are assessed through the relative error \( \left\Vert \left({d}^{*}-{d}_A^{*}\right)/{d}_A^{*}\right\Vert \), where \( {d}_A^{*} \) is the standard solution (STA) obtained by calling the actual performance functions. All comparison methods are performed within the proposed VF-SLP framework; the detailed implementation is as follows:

First, the various scaling methods (ADD, MULTI, AHS and LSHS) are used to construct the VF model. Then the VF model is combined with MCS to calculate the failure probability and its gradient. Last, SLP is used to calculate the next design, where the method proposed in Section 3.3 determines the size of the sub-optimization problem.

4.1 Mathematical example 1

The 1-D test function shown in Fig. 3 (Forrester and Keane 2009; Han et al. 2013) is commonly used to demonstrate the performance of VF methods. In this paper, this test function is further applied to replace the performance function in RBDO. The problem is given as

Fig. 3 Graph and optimal design of 1-D problem

$$ \begin{array}{l}\mathrm{find}:\kern1em \mu \\ {} \min :\kern1em f\left(\mu \right)={\mu}^2\\ {}\mathrm{s}.\mathrm{t}.:\kern1.62em P\left(g\left(\boldsymbol{X}\right)<0\right)\le \varPhi \left(-{\beta}^t\right)\\ {}{g}_H(x)=-{\left(6x-2\right)}^2 \sin \left(12x-4\right)\\ {}{g}_{L1}(x)=-\left(0.5{\left(6x-2\right)}^2 \sin \left(12x-4\right)+10x\right)\\ {}{g}_{L2}(x)= \sin \left(12x-4\right)+0.4\mathrm{x}-10\\ {}x\sim N\left({\mu}_i,\;{0.05}^2\right),{\beta}^t=2.0\kern0.5em \\ {}0\le \mu \le 1,{\mu}^0=0.6\end{array} $$
(16)

Fig. 3 shows the graph and the optimal design of the 1-D problem. The blue curve is the HF model; the red curves LF1 and LF2 denote the LF models g L1(x) = − (0.5(6x − 2)2 sin(12x − 4) + 10x) and g L2(x) = sin(12x − 4) + 0.4x − 10 in Eq. (16), respectively.

Table 1 lists the optimization results for the 1-D problem using the first LF model g L1(x) = − (0.5(6x − 2)2 sin(12x − 4) + 10x). The four methods mentioned above are used for comparison. "Obj. Value" is the objective function value at the optimal design, "Optimum" is the optimal design, and "Iteration" denotes the number of design iterations. "HF" is the number of high fidelity function evaluations and "HF Gradient" is the total number of high fidelity gradient calls. β is the reliability index evaluated by MCS with a ten-million sample size. "LSHS" denotes the proposed least squares hybrid scaling method.

Table 1 Summary of the optimization results for example 1 using the first LF model

It is clear from Table 1 that all five methods require the same number of iterations, HF function calls and HF gradient calls. The multiplicative scaling method and the proposed LSHS method both converged close to the optimal design, because both approximate the HF model well in the region around the optimum. Using only the current and previous designs to construct the scaling function, the VF model from the AHS method may have a larger error in the critical region near the optimal design, which leads to an inexact optimum.

Figure 4 shows the predicted graphs and HF points of the four scaling methods using the first LF model g L1(x) = − (0.5(6x − 2)2 sin(12x − 4) + 10x). As seen from the figure, the multiplicative scaling method and the proposed least squares hybrid scaling method approximate the HF model well in the region around the optimal design; therefore, both yield good RBDO solutions.

Fig. 4 Predicted graph and HF points of example 1 using the first LF model

However, if the second LF model g L2(x) = sin(12x − 4) + 0.4x − 10 is used to construct the VF model in RBDO, different comparison results arise.

As seen from Table 2, the additive scaling method and the proposed LSHS method obtain better solutions than the multiplicative scaling method and the adaptive hybrid scaling method when using g L2(x) = sin(12x − 4) + 0.4x − 10. Without considering all the existing samples around the current design, the AHS approximation around the optimal design is not accurate enough, which may result in an inexact optimum.

Table 2 Summary of the optimization results for example 1 using the second LF model

It is worth noting that the LSHS method uses multiple points to construct the VF model, which may slow down convergence. For example, the proposed LSHS method needs more iterations than the multiplicative and additive scaling methods in Table 2. However, because using multiple points improves the stability of the optimization, the proposed LSHS method is more accurate.

When the second low fidelity model is used, the predicted graph and HF points of the 1-D test function using the four methods are shown in Fig. 5.

Fig. 5 Predicted graph and HF points of example 1 using the second LF model

Taking the optimization results with the two different LF models into account, the RBDO solutions from the multiplicative and additive scaling methods are sensitive to the LF model: for some LF models the multiplicative scaling method performs better, while for others the additive scaling method does. Without considering all the HF samples around the current design, the RBDO solution of AHS is not very accurate. By using all the existing samples around the current design to approximate the difference between the LF and HF models, the proposed LSHS method achieves good accuracy at the optimal design. Therefore, the proposed LSHS method is a promising approach for solving RBDO problems, yielding accurate and robust solutions in this example.

4.2 Mathematical example 2

Another mathematical problem (Lee and Jung 2008; Chen et al. 2014; Li et al. 2015) with a highly nonlinear constraint is tested to compare the performance of the four VF-RBDO methods.

The problem is formulated as

$$ \begin{array}{l}\mathrm{find}:\kern1em \boldsymbol{\mu} ={\left[{\mu}_1,{\mu}_2\right]}^T\\ {} \min :\kern1em f\left(\boldsymbol{\mu} \right)={\left({\mu}_1-3.7\right)}^2+{\left({\mu}_2-4\right)}^2\\ {}\mathrm{s}.\mathrm{t}.:\kern1.62em P\left({g}_i\left(\boldsymbol{X}\right)<0\right)\le \varPhi \left(-{\beta}_i^t\right),\kern0.5em i=1,2\\ {}{g}_1\left(\boldsymbol{X}\right)=-{X}_1 \sin \left(4{X}_1\right)-1.1{X}_2 \sin \left(2{X}_2\right)\\ {}{g}_2\left(\boldsymbol{X}\right)={X}_1+{X}_2-3\\ {}0.0\le {\mu}_1\le 3.7,\kern1em 0.0\le {\mu}_2\le 4.0\\ {}{X}_j\sim N\left({\mu}_j,\;{0.1}^2\right),\kern0.5em j=1,2\\ {}{\beta}_1^t={\beta}_2^t=2.0,\kern0.5em {\boldsymbol{\mu}}^0=\left[2.97,3.40\right]\end{array} $$
(17)
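Before any VF approximation is applied, the probabilistic constraints of (17) can be checked directly by Monte Carlo simulation. The following is a minimal sketch of that check at the initial design μ⁰ = [2.97, 3.40]; the function and variable names are illustrative, and a crude sampling estimator is used rather than the paper's full reliability-analysis procedure:

```python
import math
import numpy as np

def g1(x1, x2):
    # First (highly nonlinear) performance function from Eq. (17)
    return -x1 * np.sin(4 * x1) - 1.1 * x2 * np.sin(2 * x2)

def g2(x1, x2):
    # Second (linear) performance function from Eq. (17)
    return x1 + x2 - 3.0

def failure_probability(g, mu, sigma=0.1, n=1_000_000, seed=0):
    # Crude Monte Carlo estimate of P(g(X) < 0) with X_j ~ N(mu_j, sigma^2)
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu[0], sigma, n)
    x2 = rng.normal(mu[1], sigma, n)
    return np.mean(g(x1, x2) < 0)

mu0 = np.array([2.97, 3.40])                    # initial design from Eq. (17)
p_target = 0.5 * math.erfc(2.0 / math.sqrt(2))  # Phi(-beta_t) with beta_t = 2
for g in (g1, g2):
    pf = failure_probability(g, mu0)
    print(f"{g.__name__}: Pf = {pf:.4f}, target = {p_target:.4f}, "
          f"feasible = {pf <= p_target}")
```

At μ⁰ the linear constraint g2 is far inside the feasible region, while g1 is close to its limit state, which is what makes this example a demanding test for the scaling methods.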

There are two probabilistic constraints in this example, and the first one is highly nonlinear. As shown in Fig. 6a, the objective function is a simple quadratic function denoted by the dotted lines, and the optimal design is marked with a small “x”. The shaded area is the feasible design domain. The contour lines in Fig. 6b and the 3D graph in Fig. 6c show the high nonlinearity of the first probabilistic constraint. For simplicity, the linear probabilistic constraint g2(X) is removed in the VF-RBDO model.

Fig. 6
figure 6

Feasible region and the first constraint in Mathematical example 2

A LF version of this problem is created by modifying the first probabilistic constraint as follows:

$$ {g}_1^l\left(\boldsymbol{X}\right)=-2 \sin \left(4{X}_1\right)-2.2 \sin \left(2{X}_2\right)-5 $$
(18)

Four different scaling methods integrated with the VF-SLP framework are adopted to solve this problem, and the comparison results are summarized in Table 3. It is clear from the table that the additive scaling method is the most efficient one, using the smallest number of HF function and gradient calls. However, it has a larger error than the proposed LSHS method. The multiplicative scaling method is not suitable for this example, as it cannot find the optimal design. Without considering all the HF samples around the optimal design, the AHS method diverges.

Table 3 Summary of the optimization results for example 2

To further demonstrate the performance of the proposed LSHS method, the HF points and the final VF models of the four methods are compared in Fig. 7. As seen from this figure, the VF models of the additive scaling method and the proposed LSHS method are more accurate than those of the other scaling methods, so both yield good RBDO solutions. Compared with the LSHS method, the additive scaling method is more accurate over the whole design space. However, the accuracy of the LSHS method is slightly higher around the optimal design, which leads to a better RBDO solution (see Table 3).

Fig. 7
figure 7

Predicted graph and HF points of example 2

4.3 Shape optimization problem of a curved beam

As shown in Fig. 8, the shape optimization problem of a curved beam subjected to an applied force P = 8000 N is considered. Unlike the curved beam with a rectangular cross section that is widely used to test the performance of VF methods (Balabanov and Venter 2004), a circular cross section is used here.

Fig. 8
figure 8

The crane hook and the divided sections

The lower left portion of the hook (termed the curved beam in this paper), which contains the points with the highest stress, is optimized here. It is divided into three sections, and the radius r_i of each section is selected as a random design variable. The optimization goal is to minimize the volume of the curved beam, subject to probabilistic constraints on the maximum stresses.

The problem is analyzed using a simple finite element model of the hook (HF model) and a straight-beam substitution (LF model). The simplified model built in UG and the mesh generated in HyperWorks are shown in Fig. 9.

Fig. 9
figure 9

3D model and mesh of the crane hook

The LF model is built using a straight beam that is divided into the same number of sections as the curved beam, as shown in Fig. 10. Its length is chosen to be equal to the middle-layer radius of the curved beam.

Fig. 10
figure 10

The straight beam and divided sections

The maximum stress of the low-fidelity analysis in each section is calculated using the following formula (shear stresses are not considered):

$$ {\sigma}_A=\frac{32M}{\pi {d}^3} $$
(19)

In the above formula, d is the diameter of the circular cross section, and the bending moment M can be calculated from the applied force P and its distance to the root of each section.
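Under these assumptions, the LF stress evaluation reduces to a few lines of code. The sketch below illustrates Eq. (19) at the initial design; the section lengths are hypothetical placeholders, since the paper does not list the actual distances:

```python
import math

P = 8000.0  # applied force in N (from the problem statement)

def max_bending_stress(moment, d):
    # Eq. (19): sigma = 32*M / (pi * d^3) for a solid circular section
    return 32.0 * moment / (math.pi * d ** 3)

# Hypothetical distances (mm) from the load to the root of each section
section_lengths = [20.0, 40.0, 60.0]
radii = [10.0, 10.0, 10.0]            # initial design, mu0 = [10, 10, 10]

for L, r in zip(section_lengths, radii):
    M = P * L                              # bending moment in N*mm
    sigma = max_bending_stress(M, 2 * r)   # stress in MPa (N/mm^2)
    print(f"L = {L:5.1f} mm, sigma = {sigma:7.1f} MPa")
```

With forces in N and lengths in mm, the stress comes out directly in MPa, which is the natural unit for comparison with the 140 MPa limit in the probabilistic constraint.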

The RBDO problem of the curved beam design is described as follows:

$$ \begin{array}{l}\mathrm{find}:\kern1.5em {\boldsymbol{\mu}}_{r_i}=\left[{\mu}_{r_1},{\mu}_{r_2},{\mu}_{r_3}\right]\\ {} \min :\kern1.5em V\\ {}\mathrm{s}.\mathrm{t}.:\kern2em P\left({\sigma}_{\max}\ge 140\right)-\varPhi \left(-{\beta}^t\right)\le 0\\ {}\kern3.5em {r}_i\sim N\left({\boldsymbol{\mu}}_{r_i},0.1\right),8\le {\boldsymbol{\mu}}_{r_i}\le 12\\ {}\kern3em {\boldsymbol{\mu}}_{r_i}^0=\left[10,10,10\right],{\beta}^t=3.0\end{array} $$
(20)

where σ_max is the maximum stress in the structure.

Table 4 lists the optimization results for the four different scaling methods. At the optimum, the probabilistic constraints are evaluated by MCS with a ten-million sample size. It is clear from Table 4 that using the additive or multiplicative scaling method alone to construct the VF model leads to relatively large errors. By combining the two methods, the adaptive hybrid scaling method and the proposed LSHS method perform much better in terms of accuracy, and both converge to the optimal design. However, the number of HF function and gradient calls in the proposed method is much smaller than that of the AHS method, which verifies the high efficiency of LSHS. It is worth noting that although AHS reaches a higher reliability than the LSHS method, it has a larger objective function value (1365.5) and relative error (0.0017). Therefore, the proposed LSHS method can provide a sufficiently accurate RBDO solution with a small number of HF calls for the shape optimization of the curved beam.

Table 4 Summary of the optimization results for the curved beam design

5 Conclusion

In this paper, a VF-SLP framework is proposed to solve RBDO problems. A VF model is introduced to approximate the implicit performance function, and SLP is adopted to decouple the complicated double-loop structure of RBDO.

By reasonably combining a computationally cheap LF model with a more accurate but expensive high-fidelity model, the variable fidelity method has been widely adopted as a substitute for the actual black-box model in engineering applications. To further extend its application in RBDO, a hybrid scaling method based on least squares is developed in this paper. Using the HF function and gradient values at the design points, the VF model is constructed by solving a least-squares problem. Monte Carlo simulation is used to calculate the failure probability and its gradient. SLP is then adopted to compute the next design, and the step size in each sub-optimization problem is determined by the target reliability index and the influence domain at the current design.
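The least-squares idea summarized above can be illustrated with a much-simplified sketch: fit a hybrid correction g_v = w·g_l + d, where the constants w and d are obtained by least squares over the HF samples near the current design. This is only an illustration under stated assumptions; the paper's actual LSHS formulation also uses HF gradient information, which this sketch omits, and the sample data below are hypothetical:

```python
import numpy as np

def fit_hybrid_scaling(g_lf_vals, g_hf_vals):
    """Fit constants (w, d) so that w * g_l(x_i) + d approximates
    g_h(x_i) in the least-squares sense over the available HF samples.
    Simplified: the paper's LSHS also uses HF gradient values."""
    A = np.column_stack([g_lf_vals, np.ones_like(g_lf_vals)])
    (w, d), *_ = np.linalg.lstsq(A, g_hf_vals, rcond=None)
    return w, d

# Hypothetical LF and HF evaluations at samples near the current design
x = np.linspace(2.8, 3.1, 5)
g_lf = -2 * np.sin(4 * x) - 5          # illustrative LF responses
g_hf = -x * np.sin(4 * x)              # illustrative HF responses

w, d = fit_hybrid_scaling(g_lf, g_hf)
g_vf = w * g_lf + d                    # corrected VF predictions at the samples
print("w =", w, "d =", d)
```

Because both a multiplicative factor w and an additive shift d are fitted, the least-squares correction can do no worse, on the fitting samples, than a purely additive shift, which is consistent with the hybrid character of the method.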

The VF-SLP framework developed in this paper is demonstrated on three examples: a commonly used 1-D problem, a highly nonlinear 2-D problem and the shape optimization problem of a curved beam. Four different scaling methods (the additive scaling method, the multiplicative scaling method, the adaptive hybrid scaling method and the proposed LSHS method) are compared in this framework. The comparison results show the high accuracy and efficiency of the proposed method.

In future work, to take full advantage of surrogate modelling techniques and the VF approach, using surrogate models to construct the VF model and searching for appropriate sampling strategies will be the next stage of our research.