1 Introduction

Design optimization plays a vital role in industrial technologies, aiming to find the set of design variables that provides optimal objective performance under given constraints. Conventionally, design optimization is performed with deterministic inputs and parameters. However, real-life engineering problems are almost all non-deterministic. Uncertainties in structural material properties, geometrical dimensions, loading conditions, etc. may cause significant degradation in system performance. Problems that are sensitive to slight perturbations may yield suboptimal or even infeasible solutions when optimized without incorporating uncertainties (Ur Rehman and Langelaar 2015; Du et al. 2009).

Therefore, explicitly accounting for uncertainties is of great importance for establishing robust-to-variations solutions (Zhang et al. 2017; Beyer and Sendhoff 2007). Since Taguchi's pioneering work (Taguchi 1978), this need has driven the development of robust design techniques in different scientific and engineering fields, which aim to reduce the sensitivity of system performance to uncertainties. Within the field of robust design, two categories of uncertainty description are mainly considered: probabilistic (random) models and non-probabilistic models, the latter including convex models and interval models. The probabilistic description of uncertain variations has been investigated thoroughly over the past several decades. In this setting, robust optimization is often formulated as a bi-criteria problem in which both the expected value and the standard deviation of the performance functions are to be minimized. Using this concept, Doltsinis and Kang (Doltsinis et al. 2005; Doltsinis and Kang 2004) investigated the robust design of linear and non-linear structures with perturbation-based stochastic finite element analysis. Surrogate models have also been used in the robust design of engineering systems (Sun et al. 2014; Wiebenga et al. 2012). Robust topology optimization techniques with random models (including random field uncertainty) have also been developed (Asadpoure et al. 2011; Richardson et al. 2015; Zhao and Wang 2014).

Generally, reliable results can be obtained by probabilistic approaches only when sufficient statistical data are available. To handle uncertainties without sufficient information, the convex model (Ben-Haim and Elishakoff 1990) and the interval model (a special instance of the convex model) (Moore et al. 2009) were introduced to describe uncertain parameters in engineering problems. In non-probabilistic models, only the bounds of the uncertainties are required, which are easier to obtain in practical engineering. Several strategies have been developed for optimization under non-probabilistic uncertainty. Zhou et al. (2012) introduced a sequential quadratic programming method for robust optimization under interval uncertainties that addresses both objective robustness and feasibility robustness. Cheng et al. (2015) proposed a hybrid multi-objective differential evolution robust optimization strategy incorporating sequential quadratic programming. For robust design under convex models, Kang and Bai (2013) defined a robustness measure as the minimal distance from the origin to the limit-state surface, with the design objective of maximizing the minimum of the robustness indices. The authors (Hu et al. 2017) developed a robustness index to handle bounded constraints on performance variation, in which the traditional inner optimization loop is omitted to improve computational efficiency.

As mentioned above, either a probabilistic or a non-probabilistic model is adopted to describe the uncertainties. However, these two kinds of variation may exist simultaneously in real uncertain engineering problems (Wang and Huang 2016). Among the uncertainties of concern, some can be described as probabilistic factors when sufficient data are available, while others must be modeled as interval or convex ones due to the lack of samples. Hybrid uncertainty methods have been applied to structural reliability analysis (Elishakoff and Colombi 1993; Jiang et al. 2012; Luo et al. 2011; Han et al. 2014) and structural response prediction (Wang and Huang 2016; Gao et al. 2011; Wu and Gao 2017; Xia et al. 2015; Feng et al. 2017).

For robust optimization when random and interval variables co-exist, to the best of the authors' knowledge, there have been some prospective studies (Du et al. 2009; Li et al. 2015; Liu and Lin 2006; Wu et al. 2017). Du et al. (2009) proposed a double-loop Monte Carlo (DMC) simulation procedure to assess system robustness under a mixture of random and interval variables. The outer loop enumerates the interval combinations, and for each outer iteration an inner loop draws samples of the random variables according to their distributions; a weighted-sum method over several objectives is then developed. Li et al. (2015) used this double-loop Monte Carlo procedure in robust crashworthiness design. In Liu and Lin (2006), a percentile-based robust optimization with metamodeling is introduced to handle design problems with mixed random and interval factors, in which the system objective and constraint values form a family of probability distributions treated through the marginal distribution concept. Wu et al. (2017) proposed a level set-based robust topology optimization method for the computational design of metamaterials, where the Young's modulus of the solid is described as a random variable while the Poisson's ratio is regarded as an interval variable. The Polynomial Chaos-Chebyshev inclusion method in Wu et al. (2017) approximates the objective function with a weighted summation of polynomial basis functions whose coefficients are functions of the interval factors. In their work, the robust objective function is also formulated as a combination of the interval mean and interval variance of the deterministic objective function, similar to Du et al. (2009); the Polynomial Chaos expansion and the Chebyshev inclusion function handle the random and interval variables, respectively. These techniques for evaluating the intervals of the expectation and standard deviation were also used in the uncertainty analysis of structural-acoustic systems with random and interval parameters (Wang and Huang 2016).

These strategies provide meaningful insights into design problems with mixed uncertainties. Nevertheless, from the authors' viewpoint, the existing methods are similar in two aspects.

1) The existing methods express the system performance as a family of probability distributions as the interval factors vary within their domains. Then, bi-criteria approaches (i.e., minimizing the mean value and the standard deviation of the performance simultaneously), originally developed for systems with only random factors, are used to evaluate robustness. The difference is that when hybrid uncertainties exist, the design objective and constraints are modeled with the upper/lower bounds of the means or deviations of the performances. Nevertheless, these approaches cannot show whether the robustness constraints are satisfied for a given design.

2) In order to evaluate the intervals of the means and deviations at every design candidate, the procedures all require double-level sampling of the random and interval factors, which leads to a considerable increase in computational cost.

Therefore, there are several issues that can be improved in this field:

1) A new quantitative measure of robustness should be developed, since the existing bi-criteria techniques cannot give a clear quantitative robustness index for a given solution.

2) A more efficient method is needed to avoid the heavy sampling work involved in assessing the mean and standard deviation intervals.

Starting from these considerations, this paper introduces a robust optimization method built on a new robustness assessment technique for mixed random and interval uncertainties. The new evaluation process provides a distinct robustness index at a lower computational cost.

The rest of the paper is arranged as follows. Section 2 reviews the assessment method for the worst-case sensitivity region (WCSR) of the objective and constraints from the literature. The new robustness index definition for mixed uncertainty robust optimization (MURO) is illustrated in Section 3. Section 4 gives the mathematical formulation of the new robust design procedure. Two numerical examples and two engineering examples are presented in Section 5 to demonstrate the effectiveness of the proposed method, and conclusions are drawn in Section 6.

2 Definition of the WCSR

A robust optimization problem concerning both objective robustness and feasibility robustness under random and interval variables can be formulated as the following equation,

$$ \begin{array}{lll} &\min\quad f({\textbf{X}},{{\textbf{U}}_{0}},{{\textbf{V}}_{0}})\\ & \begin{array}{lll} s.t.&{g_{j}}({\textbf{X}},{\textbf{U}},{\textbf{V}}) \le 0, \quad j = 1,2,...,K\\ & \left| {\frac{{f({\textbf{X}},{\textbf{U}},{\textbf{V}}) - f({\textbf{X}},{{\textbf{U}}_{0}},{{\textbf{V}}_{0}})}}{{\Delta {f_{0}}}}} \right| - 1 \le 0 \\ &{\textbf{lb}} \le {\textbf{X}_{0}} \le {\textbf{ub}} \end{array} \end{array} $$
(1)

where X is the vector of design variables, U and V are the random and interval variable vectors, and U0 and V0 are their nominal (mean) values and mid-point values, respectively. The interval factor V can be described as V ∈ [V0 − ΔV, V0 + ΔV], where ΔV is the variation range of V. \(g_{j}({\textbf{X}},{\textbf{U}},{\textbf{V}})\) represents the j th constraint function under uncertainties and K is the number of constraints. Δf0 is a presumed acceptable variation range for the objective function.

In this paper, the robust design problem in (1) will be transformed into an ordinary optimization problem with robustness index constraints. The main contribution of this paper is the development of such indices for robust design problems with mixed uncertainties. The procedure to obtain the new index can be divided into two sub-steps.

The first step is the WCSR assessment for the objective and constraint functions, in which the random and interval factors are treated equally: both are regarded as interval factors, ignoring the probabilistic characteristics of the random ones. The second step is a numerical integration within the WCSR to define the robustness index of a design candidate under mixed uncertainties; it is in this step that the different properties of the random and interval uncertainties are embodied, as shown in Section 3. The first step is described in the remainder of this section.

2.1 WCSR assessment with interval uncertainty

The sensitivity region concept for objective and constraint functions was developed by Gunawan and Azarm (2004; 2005a, b) for robust design problems with interval uncertainties. This concept is also used in this paper and is briefly introduced as follows.

In the parameter variation space, where the uncertainties can vary, the sensitivity region is formed by the points whose perturbed performance still satisfies the robustness restrictions (for both the objective and the constraint functions). The sensitivity regions of the objective and constraint functions associated with a given design vector X0 are defined as follows,

$$ {\mathrm{SR}}_{\text{obj}}({{\textbf{X}}_{0}},{{\textbf{P}}_{0}}) = \left\{ {\Delta} {\textbf{p}} \in {R^{G}}: {\left( f({{\textbf{X}}_{0}},{{\textbf{P}}_{0}}) - f({{\textbf{X}}_{0}},{{\textbf{P}}_{0}} + {\Delta} {\textbf{p}}) \right)}^{2} \le {\Delta} {f_{0}}^{2} \right\} $$
(2)

and

$$ {\mathrm{S}}{{\mathrm{R}}_{fea}}({{\textbf{X}}_{0}},{{\textbf{P}}_{0}}) = \left\{ {\Delta {\textbf{p}}\in {R^{G}}:\forall j,{g_{j}}({{\textbf{X}}_{0}},{{\textbf{P}}_{0}} + {\Delta} {\textbf{p}}) \le 0} \right\} $$
(3)

where P0 is the vector of nominal values of the uncertainties and Δp is the variation vector. Note that P represents both the interval and random factors, because in the search for the WCSR they are treated in the same way.

However, since the shape of the sensitivity region is in general unknown, an analytical calculation of its size is not possible (Gunawan and Azarm 2004). Therefore, a hyper-sphere constructed along the most sensitive direction is used to approximate the sensitivity region; this is the so-called WCSR. Figure 1 shows the WCSR estimation for the objective and the constraints as developed in the literature. It should be noted that in this paper we assume the objective and constraint functions are both continuous with respect to the uncertainties, so the sensitivity regions defined in (2) and (3) are both connected around the origin of the Δp-space.

Fig. 1 WCSR of a objective robustness and b feasibility robustness in two dimensions

Therefore, the optimization to find the radius \(\xi_{Obj}\) of the WCSR of the objective function is as follows (Gunawan and Azarm 2004):

$$ \begin{array}{lll} &\min\quad \xi_{Obj} = \sqrt{\Delta{\textbf{p}}^{\mathrm{T}}{\Delta{\textbf{p}} }}\\ & \begin{array}{lll} s.t.&{{{\left( {f({{\textbf{X}}_{0}},{{\textbf{P}}_{0}}) - f({{\textbf{X}}_{0}},{{\textbf{P}}_{0}} + {\Delta} {\textbf{p}})} \right)}^{2}}={\Delta} {f_{0}}^{2}} \end{array} \end{array} $$
(4)

Similarly, the search for the radius \(\xi_{Fea}\) of the WCSR of the constraint functions is (Gunawan and Azarm 2005a)

$$ \begin{array}{lll} &\min\quad \xi_{Fea} = \sqrt{\Delta{\textbf{p}}^{\mathrm{T}}{\Delta{\textbf{p}} }}\\ & \begin{array}{lll} s.t.& \max_{j = 1,...K} [{g_{j}}({{\textbf{X}}_{0}},{{\textbf{P}}_{0}} + {\Delta} {\textbf{p}})] = 0 \end{array} \end{array} $$
(5)

2.2 Normalization of uncertain variables

Since the uncertain parameters or variables may have incommensurable units and scales (Gunawan and Azarm 2004), normalization of the uncertainties is necessary to obtain a reasonable WCSR radius. Otherwise the sensitivity region may be stretched and the WCSR radius may be close to zero.

For each interval factor V (i.e., a component of the interval factor vector V), the range is normalized to [− 1,+ 1] by subtracting the mid-point V0 and dividing by the variation range ΔV:

$$ v = \frac{{V - V_{0}}}{\Delta V} $$
(6)

Each random factor U (i.e., a component of the random factor vector U) with a non-standard normal distribution can first be transformed into a standard normal random variable (Kang and Luo 2010):

$$ u = \frac{{U - U_{0}}}{\sigma} $$
(7)

where U0 and σ are the mean value and the standard deviation of U, respectively. Dependent random variables can be transformed into a set of uncorrelated normal random variables via the Rosenblatt transformation (Rosenblatt 1952). By the “6σ” criterion, the domain of each u can be approximated by [− 3,+ 3]. Since the interval factors are normalized to [− 1,+ 1], to make the two domains as consistent as possible, each standard normal u is further scaled to a normal variable with standard deviation 1/3, so that its ± 3σ range also coincides with [− 1,+ 1].

With the two transformations in (6) and (7), the normalized random and interval parameters form a new δ-space. Therefore, the optimizations in (4) and (5) should be carried out in this normalized δ-space.
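As a concrete illustration of (6) and (7), the sketch below maps a sample of the physical uncertainties into the normalized δ-space; the function name and array handling are assumptions for illustration only.

```python
import numpy as np

def to_delta_space(U, U0, sigma, V, V0, dV):
    """Map physical uncertainties into the normalized delta-space.

    U, U0, sigma: random factors, their mean values and standard deviations.
    V, V0, dV:    interval factors, their mid-points and variation ranges.
    """
    u = (np.asarray(U) - U0) / sigma / 3.0  # Eq. (7), then scaled so sigma = 1/3
    v = (np.asarray(V) - V0) / dV           # Eq. (6), maps V into [-1, +1]
    return np.concatenate([np.atleast_1d(u), np.atleast_1d(v)])
```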

3 Definition and calculation of the hybrid index

3.1 The mathematical formulation of the hybrid index

In this work, the interval variables are treated as random variables that obey a uniform distribution between their lower and upper bounds. Therefore, a probability density function (PDF) is associated with every uncertain variable. Based on this treatment, the new index is defined as the probability that the uncertain vector lies within the WCSR. The details, which constitute the main contribution of this paper, are given in this section.

In this circumstance, the probability that the uncertainties lie in the WCSR in δ-space can be formulated as

$$ \eta = \int { {\cdots} {\int}_{\Omega} {\psi ({\boldsymbol{\delta}})}} d{\Omega} $$
(8)

where the integral domain Ω is the WCSR associated with the current solution and ψ(δ) is the joint PDF of the n-dimensional uncertain variables.

From probability theory (DeGroot and Schervish 2012), if the random variables are independent of each other, the joint PDF is the product of the marginal PDFs of the individual variables

$$ \psi ({\boldsymbol{\delta}}) = {\psi_{1}}({\delta_{1}}){\psi_{2}}({\delta_{2}}) {\cdots} {\psi_{n}}({\delta_{n}}) $$
(9)

where \(\psi_{i}(\delta_{i})\) is the marginal PDF of the i th uncertain variable (including the interval variables, which are regarded as random ones with uniform distribution) and n is the total number of uncertain factors, random and interval alike. Since each \(\psi_{i}(\delta_{i})\) is simply the PDF of the corresponding variable, (8) becomes

$$ \eta = \int { {\cdots} {\int}_{\Omega} {\psi_{1}}({\delta_{1}}) \cdot {\psi_{2}}({\delta_{2}}) {\cdots} {\psi_{n}}({\delta_{n}})} d{\Omega} $$
(10)

A 2-dimensional problem (i.e., n = 2) is shown in Fig. 2 to explain (10) in detail. ψ(δ) is the joint PDF of the two uncertain factors δ1 and δ2, which are a normalized random factor and a normalized interval factor, respectively. Figure 2a shows the joint PDF when δ1 and δ2 both vary within [− 1,+ 1]. Panels (b), (c) and (d) show the joint PDF when \({{\delta _{1}^{2}} + {\delta _{2}^{2}} \le {\xi ^{2}}}\) with ξ equal to 0.7, 0.9 and 1.1, respectively. Here ξ is the radius of the WCSR for either the objective or the constraint functions, derived in Section 2.

Fig. 2 The joint PDF ψ(δ) and integral domain Ω for a 2D uncertain problem

From a geometric point of view, the integral in (10) represents the volume under the integrand ψ(δ) over the given domain, which here is the WCSR. As Fig. 2 illustrates, the integral value η therefore varies from 0 to 1 as ξ increases.
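As a small sketch of (9) for this 2D case, under the normalization of Section 2.2 (a normal density with σ = 1/3 for the random factor and a uniform density on [− 1,+ 1] for the interval factor), the joint PDF is simply the product of the two marginals; the function name below is illustrative.

```python
from scipy.stats import norm, uniform

# Joint PDF of the 2D case in Fig. 2, per Eq. (9): delta1 is the
# normalized random factor, N(0, (1/3)^2); delta2 is the normalized
# interval factor, treated as uniform on [-1, +1].
def psi_2d(d1, d2):
    return norm.pdf(d1, scale=1 / 3) * uniform.pdf(d2, loc=-1, scale=2)
```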

3.2 Calculation procedure

After the definition of the hybrid index, the integral in (10) must be calculated. For numerical integration there are plenty of techniques for one-dimensional cases, such as the rectangle rule, the trapezoidal rule, Simpson's rule and the Gauss rule (Leobacher and Pillichshammer 2014). When one-dimensional rules are applied to multidimensional integration, the convergence rate deteriorates rapidly with the dimension, which is unacceptable in high-dimensional cases. Monte Carlo integration (Jarosz 2008; Robert and Casella 2004; Gong 2001; Kang 2015; Ren et al. 2005) is an efficient and accurate alternative whose convergence rate is independent of the dimension. Therefore, in this paper, Monte Carlo integration is used to calculate (10).

The fundamental idea of Monte Carlo integration is to use random sampling of a function to numerically estimate its integral; for details the reader is referred to Leobacher and Pillichshammer (2014) and Robert and Casella (2004). The procedure for our problem can be described as follows (a code sketch is given after the list),

1) In δ-space, sample the uncertain parameters as independent random variables uniformly distributed within the smallest box enclosing the WCSR. This yields N samples uniformly distributed in an n-dimensional box; a 2D example is shown in Fig. 3.

2) Substitute the samples into the hyper-sphere condition \({\sum \limits _{i = 1}^{n} {{\delta _{i}^{2}}} \le {\xi ^{2}}}\) and retain those satisfying it. This gives a set of samples uniformly distributed in the n-dimensional WCSR; let N2 denote their number.

3) Substitute the retained samples into (9) to obtain the function values ψ(δ) and compute their sum \({{F}=\sum \limits _{i = 1}^{N2}{\psi ({\boldsymbol {\delta }}_{i})}}\).

4) Finally, the integral estimate of η is \({ \displaystyle \hat \eta = \frac {V}{{N2}}F}\), where V is the volume of the WCSR. For simplicity, η is used instead of \({\hat \eta }\) in the following sections.
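The sketch below implements these four steps under the normalized marginals of Section 2.2 (normal with σ = 1/3 for random factors, uniform on [− 1,+ 1] for interval factors); the function name, column ordering and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import norm

def hybrid_index(xi, n_random, n_interval, n_box=10**6, seed=0):
    """Monte Carlo estimate of eta, Eq. (10), for a WCSR of radius xi."""
    rng = np.random.default_rng(seed)
    n = n_random + n_interval
    # Step 1: N uniform samples in the smallest box enclosing the WCSR,
    # i.e. the hyper-cube [-xi, +xi]^n.
    d = rng.uniform(-xi, xi, size=(n_box, n))
    # Step 2: keep the N2 samples inside the hyper-sphere ||delta|| <= xi.
    d = d[np.sum(d**2, axis=1) <= xi**2]
    n2 = len(d)
    # Step 3: joint PDF (9) as the product of the marginals; the uniform
    # density is 1/2 on [-1, +1] and zero outside (relevant once xi > 1).
    psi = np.prod(norm.pdf(d[:, :n_random], scale=1 / 3), axis=1)
    psi *= np.prod(np.where(np.abs(d[:, n_random:]) <= 1.0, 0.5, 0.0), axis=1)
    # Step 4: eta_hat = (V / N2) * F, with V the volume of the n-ball.
    vol = np.pi**(n / 2) * xi**n / gamma(n / 2 + 1)
    return vol / n2 * psi.sum()
```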

Fig. 3 Samples for the numerical integration

With the aforementioned method, the hybrid robustness index of the current solution can be obtained, giving the probability that the solution satisfies the objective and feasibility robustness restrictions. Since the WCSRs for the objective and constraint functions are generally different, the integration in this section is performed twice for each solution to obtain \(\eta_{Obj}\) and \(\eta_{Fea}\).

3.3 Interpolation model

As can be seen from (10), once the uncertainty model is determined (i.e., the number of uncertain variables and the distributions or interval bounds associated with them), the integrand ψ(δ) is fixed. Thus, the robustness index η depends only on the integration domain, that is, on the WCSR radius. The radius-index (ξ-η) function is clearly a non-decreasing one-to-one mapping.

In the optimization procedure, (10) would have to be evaluated repeatedly to assess the objective and feasibility robustness indices, which may lead to a considerable computational burden. In this section, a 1D interpolation model is adopted to deal with this problem. That is, for a given MURO design problem, the integral in (10) is carried out for different radii ξ before the optimization procedure. The ξ values should range from 0 to a value slightly larger than \({\sqrt {n}}\), since when \({\xi \ge \sqrt {n}}\) the WCSR covers the whole variation region (as shown in Fig. 2a) and the η value approaches 1.

A radius-index table is then built, and in the subsequent optimization process a table lookup approximates the index at the actual WCSR radius of a particular design. In this work, spline interpolation is applied between adjacent sampled ξ values. Figure 4 demonstrates the relationship between ξ and η for 2D, 3D and 4D problems, respectively. In this way, no integration is needed during the optimization, and compared with the computational cost of function calls in the optimization, the cost of building this lookup table is negligible.
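Building and querying such a table can be sketched as follows, reusing the hybrid_index estimator sketched in Section 3.2; the grid spacing of 0.025 matches the one reported below for the error study, while the remaining details are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Build the radius-index lookup table once, before the optimization.
n_random, n_interval = 1, 1                      # the 2D case of Fig. 2
n = n_random + n_interval
xi_grid = np.arange(0.025, np.sqrt(n) + 0.1, 0.025)
eta_grid = [hybrid_index(xi, n_random, n_interval) for xi in xi_grid]
eta_of_xi = CubicSpline(xi_grid, np.clip(eta_grid, 0.0, 1.0))

# During the optimization, a cheap table lookup replaces the integral:
eta_at_07 = float(eta_of_xi(0.7))  # index for a WCSR radius of 0.7
```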

Fig. 4 The relationship between ξ and η for problems with different numbers of uncertain factors

Since the interpolation model replaces the original integration, it is necessary to evaluate the associated error. We use a 2-dimensional problem (one uncertain parameter random, the other interval) to calculate the η values with both the Monte Carlo integration method and the interpolation model. The relative error of the interpolated η values with respect to the Monte Carlo values is defined as

$$ \text{Relative error} = \frac{{{\eta_{IM}} - {\eta_{MCI}}}}{{{\eta_{MCI}}}} \times 100\% $$
(11)

where the subscripts IM and MCI stand for the interpolation model and Monte Carlo integration, respectively. Since the integral η is zero at ξ = 0, the relative error is calculated from ξ = 0.05 onward with a sampling interval of 0.005. The interpolation model itself is built with a sampling interval of 0.025; therefore, in Fig. 5, the errors oscillate around zero. When ξ is small (e.g., ξ ≤ 0.1), the errors are relatively high because the η values are then close to zero and numerical noise dominates. Globally, the relative errors over the whole ξ domain are almost all smaller than ± 1%. Therefore, we consider the interpolation accurate enough for use in the MURO method.

Fig. 5 The relative errors introduced by the interpolation model in the 2D problem

4 Robust optimization with hybrid index constraint

With the strategies developed in the previous sections, the MURO design problem in (1) can be reformulated as follows

$$ \begin{array}{llll} &\min\quad f({\textbf{X}},{{\textbf{U}}_{0}},{{\textbf{V}}_{0}})\\ & \begin{array}{lll} s.t.&\eta_{Obj} ({\textbf{X}},{\textbf{U}},{\textbf{V}}) \ge {\eta_{Obj}^{*}}\\ &\eta_{Fea} ({\textbf{X}},{\textbf{U}},{\textbf{V}}) \ge {\eta_{Fea}^{*}}\\ &{\textbf{lb}} \le {\textbf{X}_{0}} \le {\textbf{ub}} \end{array} \end{array} $$
(12)

where \({\eta _{Obj}^{*}}\) and \({\eta _{Fea}^{*}}\) are respectively the prescribed target indices for objective and constraint functions.

When the values of these two indices approach 1, no violations of the robustness constraints are allowed. Since the robustness index represents the probability that the uncertain vector lies in the WCSR, and because of the presence of random factors, the index cannot be exactly equal to 1 but can only approach it. When the target indices set by the decision-maker are less than 1, a certain degree of constraint violation is allowed. That is to say, the proposed MURO method allows the decision-makers to determine the robustness level of the system.

In this paper, the inner loop that evaluates the robustness indices \(\eta_{Obj}\) and \(\eta_{Fea}\) is solved by the fmincon function (“SQP” algorithm) of the Matlab Optimization Toolbox (2012b release). To speed up the proposed approach, the gradient-based optimization algorithm NLPQL (Schittkowski 1986) is used in the outer loop rather than population-based algorithms (such as Genetic Algorithms or Simulated Annealing). For both the inner and outer loops, the convergence criterion is \(\|{\textbf{x}}_{k + 1} - {\textbf{x}}_{k}\| \le 10^{-6}\). When this method is applied to other engineering problems, the choice of algorithm should depend on the nonlinearity of the problem. The flowchart of the MURO method is depicted in Fig. 6.
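A minimal sketch of solving (12) is given below, with SciPy's SLSQP standing in for NLPQL; wcsr_radius_obj and wcsr_radius_fea are hypothetical solvers of the inner problems (4) and (5) (cf. the sketch in Section 2), and eta_of_xi is the lookup table of Section 3.3.

```python
from scipy.optimize import minimize

def solve_muro(f_nominal, wcsr_radius_obj, wcsr_radius_fea, eta_of_xi,
               x0, bounds, eta_obj_target=0.99, eta_fea_target=0.99):
    """Sketch of formulation (12): minimize the nominal objective subject
    to hybrid robustness index constraints."""
    constraints = [
        {"type": "ineq",  # eta_Obj(X) - eta_Obj* >= 0
         "fun": lambda x: float(eta_of_xi(wcsr_radius_obj(x))) - eta_obj_target},
        {"type": "ineq",  # eta_Fea(X) - eta_Fea* >= 0
         "fun": lambda x: float(eta_of_xi(wcsr_radius_fea(x))) - eta_fea_target},
    ]
    return minimize(f_nominal, x0, method="SLSQP", bounds=bounds,
                    constraints=constraints, options={"ftol": 1e-6})
```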

Fig. 6 The flowchart of the proposed MURO method

5 Test examples

In this section, two nonlinear numerical examples and two engineering design optimization examples are tested. The robust optimal results are compared with their deterministic counterparts (i.e., obtained without any uncertainty) and with the robust solutions of the DMC method in Du et al. (2009) to demonstrate the computational effectiveness and efficiency of the proposed MURO method. Table 1 summarizes the uncertainty occurrences in each example.

Table 1 Uncertainty occurrences in each example

The design objective of the DMC method is the weighted summation of three factors, as is common in multiobjective design problems,

$$ {f_{DMC}=w_{1} \overline \mu_{f}+w_{2} \overline \sigma_{f}+w_{3} \delta_{f}} $$
(13)

where \({\overline \mu _{f}}\), \({\overline \sigma _{f}}\) and δ f are the average of the mean values of the original objective f, the average of the standard deviations, and the difference between the maximum and minimum standard deviations, respectively. Note that these three values should be normalized before summation. A particular set of weighting factors (i.e., w1, w2 and w3) generally produces only a single Pareto-optimal solution (Hu et al. 2013), so an essential part of the DMC method is the determination of the weighting factors, which is usually not easy. In Du et al. (2009), the three weighting factors are equal. In this paper, we tried equal weighting factors and the resulting solutions were much worse than those of the proposed method; the weighting factors were finally set to w1 = 0.7, w2 = 0.15 and w3 = 0.15.
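For reference, (13) with the weights used in this paper reduces to the following one-liner; the three inputs are assumed to be pre-normalized, as the text requires.

```python
# DMC robust objective of Eq. (13) with w1 = 0.7, w2 = 0.15, w3 = 0.15.
def f_dmc(mu_bar, sigma_bar, delta_sigma, w=(0.7, 0.15, 0.15)):
    return w[0] * mu_bar + w[1] * sigma_bar + w[2] * delta_sigma
```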

5.1 Nonlinear numerical example 1

The first example is a nonlinear problem with uncertainty in the design variables, taken from Zhou and Li (2014) and slightly modified in this paper. The formulation is given in the following equation:

$$ \begin{array}{lll} &\min\quad f={x_{1}^{3}}\sin ({x_{1}} + 4) + 10{x_{1}^{2}} + 22{x_{1}} + 5{x_{1}}{x_{2}} + 2{x_{2}^{2}} + 3{x_{2}} + 12\\ & \begin{array}{lll} s.t.&g_{1}={x_{1}^{2}} + 3{x_{1}} - {x_{1}}\sin {x_{1}} + {x_{2}} - 2.75 \le 0\\ &g_{2}=-\log (0.1{x_{1}} + 0.41) + {x_{2}}{e^{-{x_{1}} + 3{x_{2}} - 4}} + {x_{2}} - 3 \le 0\\ &{\Delta} {x_{1}} = 0.1,\quad {\sigma_{2}} = 0.01\left| {{\mu_{{x_{2}}}}} \right|\\ &{\Delta} f_{0} = 0.15 \end{array} \end{array} $$
(14)

In this example, the design variables x1 and x2 are subject to uncertainty: x1 is an interval variable with a variation range Δx1 = 0.1 around its nominal value, and the standard deviation of x2 is 0.01 times its mean value, consistent with (14). The acceptable objective function variation is Δf0 = 0.15. The optimal solutions of the proposed method, the deterministic design and the DMC method are displayed in Table 2 and Fig. 7. Note that η stands for both \({\eta _{Obj}^{*}}\) and \({\eta _{Fea}^{*}}\). In order to verify the effectiveness of the proposed method, Monte Carlo simulation is executed to compute the probability β that a solution satisfies the objective and feasibility robustness restrictions under uncertainty. In this simulation, 106 samples are taken according to the probability distribution of x2 and the interval of x1.
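A sketch of this verification step for example 1 is shown below; following Section 3.1, the interval variable is sampled uniformly within its bounds, and the variation ranges are taken from (14). The function name is an illustrative assumption.

```python
import numpy as np

def beta_example1(x1, x2, n=10**6, df0=0.15, seed=0):
    """Monte Carlo estimate of beta: the probability that the perturbed
    design satisfies the objective and feasibility robustness restrictions."""
    rng = np.random.default_rng(seed)
    f = lambda a, b: (a**3 * np.sin(a + 4) + 10 * a**2 + 22 * a
                      + 5 * a * b + 2 * b**2 + 3 * b + 12)
    g1 = lambda a, b: a**2 + 3 * a - a * np.sin(a) + b - 2.75
    g2 = lambda a, b: (-np.log(0.1 * a + 0.41)
                       + b * np.exp(-a + 3 * b - 4) + b - 3)
    a = rng.uniform(x1 - 0.1, x1 + 0.1, n)   # interval: Delta x1 = 0.1
    b = rng.normal(x2, 0.01 * abs(x2), n)    # random: sigma2 = 0.01|mu_x2|
    ok = ((np.abs(f(a, b) - f(x1, x2)) <= df0)   # objective robustness
          & (g1(a, b) <= 0) & (g2(a, b) <= 0))   # feasibility robustness
    return ok.mean()  # fraction of samples satisfying all restrictions
```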

Table 2 Results comparison of different designs for example 1
Fig. 7 Comparison of different optimal solutions for example 1

We find that the proposed MURO method produces better objective values with fewer function calls (N.F.C.) for both η = 0.75 and η = 0.99. The constraint values g1 and g2 in the table are the nominal values at the optimal designs. From η = 0.75 to η = 0.99, as the robustness index target increases, the nominal constraint values become smaller.

The β values of the proposed method with η = 0.99 and of the DMC method are both almost 100%, which means that the design constraints in (1) are satisfied. However, it is hard to distinguish these two solutions from the perspective of robustness. Therefore, Monte Carlo simulations with 106 samples were carried out to examine the actual distribution of the performances under uncertainty (i.e., the sensitivity of the objective and constraint functions to the uncertainties). The distribution of g1 is not given because it is not a tight constraint at the optimum. Δf = f − f o p t is the variation of the objective, where f o p t is the nominal objective value. The yellow bars represent the event frequency with which the constraint g2 ≤ 0 is violated. In Fig. 8a, the tail of the distribution is amplified to provide a clearer view; the proportion of yellow bars has been reduced to \({0.22\%= 1-\beta _{\eta ^{*}= 0.99}}\) and the restriction Δf0 = 0.15 is also satisfied. We can see from Fig. 8b that g2 is never violated but Δf ≤ Δf0 is. Therefore, the DMC method cannot guarantee a solution feasible with respect to all the robustness constraints. From another perspective, the DMC solution is too conservative, because the distribution of g2 lies far from the design restriction g2 ≤ 0. Thus, even though the DMC method can produce robust solutions, the proposed MURO technique significantly reduces conservatism at an equal degree of robustness (i.e., a satisfaction level β close to 100%).

Fig. 8 Event frequency in example 1 by Monte Carlo simulations

Note that the difference between η and β in Table 2 is caused by the use of the WCSR to approximate the actual sensitivity region, whose shape the designer cannot know exactly. Both indices stand for the probability that the perturbed performance satisfies the restrictions. The difference is that η can be regarded as a worst-case estimate of β, since β is produced by Monte Carlo simulation and can be considered the actual probability. Generally, the actual sensitivity region is larger than the WCSR, and therefore the values of β are not less than η. One advantage of the new index is thus that it provides a worst-case estimate of the degree of robustness constraint satisfaction for a given design.

5.2 Nonlinear numerical example 2

The second numerical example is originally from the literature (Zhou and Li 2014; Zhou et al. 2012). Here, uncertainties exist in both the design variables and the parameters: the design variable x3 is an interval variable and the two parameters p1 and p2 are random. The optimal solutions are listed in Table 3.

$$ \begin{array}{llll} &\min\quad f={({x_{1}} - 0.6)^{2}} + {({x_{2}} - 0.6)^{2}} - {x_{3}}{x_{4}} + 10\\ & \begin{array}{lll} s.t.&g_{1}={p_{1}} + {x_{1}} + {x_{2}} \le 0\\ &g_{2}={p_{2}} + {x_{3}} + {x_{4}} \le 0\\ &{\Delta} {x_{3}} = 0.1,{\sigma_{1}} = 0.03\left| {{\mu_{{p_{1}}}}} \right|,{\sigma_{2}} = 0.03\left| {{\mu_{{p_{2}}}}} \right|\\ &{\Delta} f_{0} = 0.2 \end{array} \end{array} $$
(15)
Table 3 Results comparison of different designs for example 2

In this example, the optimal objective for η = 0.99 is 9.8824, which is smaller than the DMC solution of 9.8911, and the N.F.C. of the new method is also smaller. The fact that \(\eta_{Fea}\) of the deterministic optimum is 0 means that this design cannot withstand any uncertainty, because it lies on the design boundary. Nevertheless, β for this deterministic design is 50%, not 0: the Monte Carlo sample set used to evaluate β includes those samples for which the constraints happen to be satisfied. The iteration histories for example 2 are shown in Fig. 9, where the MURO method converges to the optimal point within a small number of iterations.

Fig. 9 The iteration histories of the proposed method for example 2

5.3 Engineering example - Two-bar truss

The two-bar truss problem is a typical engineering design problem that has been tested in the literature (Zhou and Li 2014; Zhou et al. 2012). The formulation is

$$ \begin{array}{lll} &\min\quad f(x) = \frac{20{(16 + {x_{3}^{2}})}^{\frac{1}{2}}}{1000{x_{1}}{x_{3}}}\\ & \begin{array}{lll} s.t.&{g_{1}} = f - 100 \le 0\\ &{g_{2}} = 1000\left[ {x_{1}}{\left( 16 + {x_{3}^{2}}\right)}^{\frac{1}{2}} + {x_{2}}{(1 + {x_{3}^{2}})}^{\frac{1}{2}}\right] - 100 \le 0\\ &{g_{3}} = \frac{80{(16 + {x_{3}^{2}})}^{\frac{1}{2}}}{1000{x_{1}}{x_{3}}} - 100 \le 0\\ &0.0001 \le {x_{1}},{x_{2}} \le 0.25,\quad 1.0 \le {x_{3}} \le 3.0\\ &{\Delta} {x_{1}} = 0.0001,\quad {\sigma_{1}} = 0.002\left|{\mu_{{x_{2}}}}\right|,\quad {\sigma_{2}} = 0.002\left|{\mu_{{x_{3}}}}\right|\\ &{\Delta} f_{0} = 1 \end{array} \end{array} $$
(16)

where the design variable x1 is an interval variable and the other two are random. The design solutions are given in Table 4. As in the previous two numerical examples, a better objective is obtained by the proposed method with fewer function calls. Figure 10 presents the distribution of the tight constraint g2 under uncertainty, which shows the clear reduction in conservatism of the new method relative to the DMC method. Note that the deterministic optimum is taken from the literature (Zhou et al. 2012), so no N.F.C. is reported in that column.

Fig. 10 Event frequency of g2 in the two-bar truss by Monte Carlo simulations

5.4 Engineering example - Ten-bar truss

The ten-bar example from Hu et al. (2017) is modified in this paper. The structure, shown in Fig. 11, is subjected to two vertical concentrated forces P1 = P2 = 100 kip at nodes 4 and 6, respectively. The ten bars are classified into two groups, and the bars within each group share the same Young's modulus: bars 1 to 6 have an elastic modulus E1 of 10000 ksi, and the remaining bars have an elastic modulus E2 of 8000 ksi. The design objective is to minimize the total volume of the structure, with the nominal values of the cross-section areas as design variables. The vertical displacements of nodes 3 and 6 are restricted by v3 ≤ 2.0 in and v6 ≤ 5.0 in, respectively.

Fig. 11 A ten-bar truss structure

In this problem, the uncertain parameters are the elastic moduli E1 and E2 and the applied loads P1 and P2. The two loads are independent interval variables with variation range ΔP = 10 kip. The two elastic moduli follow normal distributions with standard deviations equal to 0.03 times their nominal values. Therefore, four independent uncertain parameters exist in this ten-bar truss design problem. Table 4 lists the optimal solutions. Note that in this problem only the feasibility robustness constraint is considered, since the objective function (the volume) does not depend on the uncertainties.

Table 4 Results comparison of different designs for the two engineering examples

The actual distributions of v3 and v6 under uncertainty are shown in Fig. 12. The distributions obtained by the proposed MURO and DMC methods are similar to each other, and the DMC distribution does not show a large degree of conservatism. However, from Table 4, the optimal volume of the DMC method (29878 in3) is nearly 20% larger than that of the proposed method with η = 0.99 (25032 in3). The ten-bar truss robust design problem is highly nonlinear, and the MURO method obtains better solutions than the DMC method with equivalent robustness.

Fig. 12 Event frequency in the ten-bar truss by Monte Carlo simulations

The iteration histories of the ten-bar truss example are shown in Fig. 13. Compared with numerical example 2, more iterations are needed in this design problem. Nevertheless, the optimization algorithm finds design points close to the final optimum within about 20 iterations. Since the original robust design problem is transformed into an optimization with index constraints, the relationship between the design variables and the indices can be highly nonlinear; this may lead to slow convergence, and more attention will be paid to this issue in the authors' future research.

Fig. 13 The iteration histories of the proposed method for the ten-bar truss

6 Conclusions

A new efficient mixed uncertainty robust optimization strategy is developed in this paper to handle design problems with both random and interval uncertainties. With the hybrid robustness index, the original robust design problem is transformed into an ordinary design problem with index restrictions. Four different examples are investigated and the results show the validity of the proposed method.

Based on the numerical results, several conclusions can be drawn as follows.

1) The new hybrid robustness index η provides an effective quantitative measure of system robustness. The Monte Carlo simulations also show that the hybrid index represents a worst-case estimate of the degree of robustness constraint satisfaction for a given design. This is the main contribution of this work.

2) The new method produces better objective values while satisfying the robustness restrictions. Compared with the existing method, the proposed technique reduces the conservatism of the solution significantly.

3) The new method is more efficient than previous techniques because the double sampling procedure of existing tools is omitted in the MURO method.

4) The proposed method still minimizes the same objective as the deterministic design, whereas the objectives of other methods are relatively complex (i.e., weighted summations of the mean interval and the standard deviation interval). The objective of our method therefore has a clear physical or practical meaning and can be readily understood and implemented.

5) The new method allows the decision-makers to choose the system robustness level, a robustness index less than 1 also being acceptable; few other robust design methods offer this flexibility.

The numerical results have shown that the MURO method may suffer from slow convergence. To make the method more reliable and robust, the sensitivity analysis of the robustness index with respect to the design variables should be carried out in the next step. Other powerful optimization techniques besides SQP can also be investigated.