
1 Introduction

The design of structures must satisfy conflicting requirements, such as cost, safety and durability. Unquestionably, uncertainties are naturally present in all the variables and steps that compose the design. Thus, optimizing a structure through a deterministic approach often results in configurations with poor reliability.

In this context, the Reliability-Based Design Optimization (RBDO) approach finds the best compromise between cost and reliability assurance by explicitly including the uncertainties [3]. The challenge of RBDO is the computational effort required to evaluate the reliability constraints, which makes it difficult to deal with more complex problems [8].

Besides the usual double-loop approach, other techniques have been proposed to speed up the RBDO process, such as SORA (Sequential Optimization and Reliability Analysis) [10], SLA (Single-Loop Approach) [17] and SAP (Sequential Approximate Programming) [9].

Truong and Kim [25] point out that, while the literature on deterministic design optimization (DDO) is extensive, few RBDO studies related to steel frames have been performed. In addition, geometric and material nonlinear behaviors often are not considered [23, 26].

This paper presents a double-loop RBDO approach for a steel frame, considering first and second order structural analyses. FORM is used to obtain the reliability index \(\beta \) and Genetic Algorithms perform the optimization. Three target reliability indices are established and the design variables can also assume different values. Reliability indices found by second order analysis tend to be higher than the first order ones. However, the minimum mass found by the second order approach is not always smaller than in the first order case.

1.1 Software

CS-ASA. The CS-ASA - Computational System for Advanced Structural Analysis is a finite element based program [24], able to perform static and dynamic analyses of steel structures [6] and static analyses of composite steel-concrete structures [16], considering geometric imperfections, material nonlinearity and semi-rigid connections.

MATLAB. MATLAB manages all the analysis stages: calling the structural analysis program CS-ASA, running the reliability loop (FORM algorithm) and running the optimization loop (the ‘ga’ function).

1.2 Second Order Analysis of Structures

The CS-ASA offers three formulations for second order analysis. The one used in this work, called SOF-2 [24], was developed by Yang and Kuo in 1994 [27]. The typical frame finite element adopted can be seen in Fig. 1 and its implementation relies on some simplifying assumptions: cross sections remain plane after deformation and are compact; lateral and torsional buckling are prevented; deformations are small, but large rotations/displacements are allowed; axial shortening due to curvature is neglected.

Fig. 1. Adopted finite element.

Achieving structural equilibrium consists of resolving a balance between the applied external forces and the internal forces of the structure [4]. This condition can be expressed as Eq. 1, which, for the second order analysis case, depends on the displacements (\(\textbf{U}\)) and the internal forces in the members (\(\textbf{P}\)).

$$\begin{aligned} \mathbf {F_{int}(U,P)} \simeq \mathbf {F_{ext}} \end{aligned}$$
(1)

\(\mathbf {F_{int}}\) is the vector of internal forces; \(\mathbf {F_{ext}}\) is the vector of external forces, which can be expressed as the product between a load parameter \(\lambda \) and a reference external force vector \(\mathbf {F_{r}}\), which defines the direction of the acting external forces [4]. So, \(\mathbf {F_{ext}}=\lambda \mathbf {F_{r}}\).

The numerical strategy to solve Eq. 1 is an incremental-iterative approach, based on Newton-Raphson method. Thus, it is more convenient to expand Eq. 1, defining the elastic and geometrical matrices involved in the process:

$$\begin{aligned} \mathbf {F_{int}^{t}} + \sum _{e}^{n} [\mathbf {k_{i}^{e}}+\mathbf {k_{g}^{e}}]{\boldsymbol{\Delta }} \mathbf {u^{e}} \simeq \mathbf {F_{ext}^{t}} + \varDelta \lambda \mathbf {F_{r}} \end{aligned}$$
(2)

where the superscript “t” defines the last equilibrium configuration; \({\boldsymbol{\Delta }} \mathbf {u^{e}}\) is the incremental nodal displacement vector of the element “e”; \(\mathbf {k_{i}^{e}}\) is the element’s elastic linear stiffness matrix, defined in Eq. 3; \(\mathbf {k_{g}^{e}}\) is the element’s geometric stiffness matrix, defined in Eq. 4; \(\varDelta \lambda \) is the load parameter increment.

$$\begin{aligned} \mathbf {k_{i}^{e}} = \int _{L^{e}} \mathbf {N^{T}DN}dx \end{aligned}$$
(3)
$$\begin{aligned} \mathbf {k_{g}^{e}} = \textbf{P} \int _{L^{e}} [\mathbf {N_{u}^{T}N_{u}}+\mathbf {N_{v}^{T}N_{v}}]dx \end{aligned}$$
(4)

\(L^{e}\) is the finite element length; \(\textbf{N}\) refers to the interpolation functions vector; \(\textbf{D}\) represents the material constitutive relationship matrix; \(\textbf{P}\) is the axial force acting on the finite element. The interpolation function vectors \(\mathbf {N_{u}}\) and \(\mathbf {N_{v}}\) are associated with the axial and lateral displacements, respectively.
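The incremental-iterative strategy of Eq. 2 can be sketched in a few lines. The snippet below is a minimal, generic Newton-Raphson load-stepping loop in Python; it is an illustration, not the CS-ASA implementation. Here `f_int` and `k_tan` stand for the assembled internal force vector and tangent stiffness \(\mathbf {k_{i}}+\mathbf {k_{g}}\), and all names and values are assumptions:

```python
import numpy as np

def solve_incremental(f_int, k_tan, f_r, n_steps=10, d_lam=0.1,
                      tol=1e-8, max_iter=50):
    """Incremental-iterative Newton-Raphson solution of F_int(u) = lambda * F_r.

    f_int : callable returning the internal force vector at displacements u
    k_tan : callable returning the tangent stiffness (elastic + geometric) at u
    f_r   : reference external force vector (direction of the applied loads)
    """
    u = np.zeros(len(f_r))
    lam = 0.0
    history = []
    for _ in range(n_steps):
        lam += d_lam                                   # load parameter increment
        for _ in range(max_iter):
            residual = lam * f_r - f_int(u)            # unbalanced forces
            if np.linalg.norm(residual) < tol * max(1.0, lam * np.linalg.norm(f_r)):
                break                                  # equilibrium reached
            du = np.linalg.solve(k_tan(u), residual)   # Newton-Raphson correction
            u = u + du
        history.append((lam, u.copy()))
    return history

# Toy check: a linear spring f_int = 2u recovers u = lambda/2 at every step
hist = solve_incremental(lambda u: 2.0 * u,
                         lambda u: np.array([[2.0]]),
                         np.array([1.0]))
```

In a real frame analysis, `f_int` and `k_tan` would come from assembling the element matrices of Eqs. 3 and 4 over the mesh at each iteration.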

2 Structural Optimization

2.1 Optimization Problem

The optimization problem consists of maximizing or minimizing one or more objective functions, within specific design conditions previously established [21]. We may formulate it as follows:

$$ \text {Find } \textbf{X} = \left\{ x_{1},\, x_{2},\, \ldots ,\, x_{n} \right\} \text { that minimizes/maximizes } f(\textbf{X}), $$

subject to:

$$\begin{aligned} c_{i}(\textbf{X})\le 0, \quad i=1,2,\ldots ,m, \end{aligned}$$
(5)
$$\begin{aligned} d_{j}(\textbf{X})= 0, \quad j=1,2,\ldots ,p, \end{aligned}$$
(6)
$$\begin{aligned} x^{low}_k \le x_k \le x^{up}_k, \quad k=1,2,\ldots ,n, \end{aligned}$$
(7)

where:

  • X is the n-dimensional vector containing the design variables to be optimized;

  • f(X) is the objective function of the problem, which in structural optimization, can represent the weight, volume or manufacturing cost, for example;

  • \(c_{i}(\textbf{X})\) and \(d_{j}(\textbf{X})\) are inequality and equality constraints, respectively, known as behavior constraints, related to the performance and limit states of the structural system under study;

  • \(x^{low}_k\) and \(x^{up}_k\) are the lower and upper bounds that design variables can assume, known as lateral constraints, related to feasible physical limits [22];

  • i, j and k are indices, while m, p and n are the numbers of inequality constraints, equality constraints and design variables, respectively.
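To make the roles of behavior and lateral constraints concrete, the sketch below solves a small, hypothetical instance of Eqs. 5-7 with a crude exterior-penalty grid search in Python; the objective, constraint and bounds are invented for illustration only:

```python
import numpy as np

def penalized(f, cons, x, r=1e3):
    """Exterior penalty: objective plus r times the squared constraint violations."""
    violation = sum(max(0.0, c(x)) ** 2 for c in cons)
    return f(x) + r * violation

# Hypothetical problem: minimize f(X) = x1^2 + x2^2
# subject to the behavior constraint c(X) = 1 - x1 - x2 <= 0
# and the lateral constraints 0 <= xk <= 2 (enforced by the search grid).
f = lambda x: x[0] ** 2 + x[1] ** 2
cons = [lambda x: 1.0 - x[0] - x[1]]

grid = np.linspace(0.0, 2.0, 201)          # lateral constraints by construction
best_val, best_x = min(((penalized(f, cons, (a, b)), (a, b))
                        for a in grid for b in grid), key=lambda t: t[0])
# the optimum lies on the constraint boundary, near (0.5, 0.5)
```

The active behavior constraint pushes the solution onto its boundary, exactly the situation depicted for the feasible region in Fig. 2.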

Figure 2 represents a hypothetical two-dimensional problem, in which the feasible region was obtained by applying two behavior constraints \(c_{a}\) and \(c_{b}\), as well as the lateral constraints \(x^{low}_1\), \(x^{up}_1\), \(x^{low}_2\), \(x^{up}_2\).

Fig. 2. Constraint surfaces for a hypothetical two-dimensional problem.

2.2 Genetic Algorithms

In 1975, Holland [15] proposed a new optimization method based on principles of nature, such as genetics and natural selection in the reproduction of species: the Genetic Algorithms (GA). GAs belong to the set of so-called modern optimization methodologies [22].

As a stochastic, gradient-free method, GA is well suited to problems such as multi-objective optimization; problems containing mixed continuous and discrete variables; discontinuous or non-differentiable functions; and non-convex design spaces. The basic terminology relevant to genetic algorithms is given below:

  • Objective function: the function to be optimized;

  • Penalty function: mathematical expression applied to the fitness value of an individual, calculated based on the violation of constraints;

  • Fitness function: mathematical expression given by the sum of the objective and penalty functions, which indicates how fitted to the problem an individual solution can be;

  • Individual: is the variables vector. It is also called chromosome and, its entries, genes. Vector \(\textbf{X}\) below represents this structure:

    $$ \textbf{X} = \left[ x_{1}\;\; x_{2}\;\; x_{3}\;\; \ldots \;\; x_{n} \right] ; $$
  • Population: is the matrix of individuals. The user must specify a value p, for the population size. Therefore, the population matrix will have dimension \(p \times n\) and n is the number of variables in the problem;

  • Generation: each generation represents an iteration, in which a new population matrix will be created, by applying the genetic operators, known as: selection, elitism, crossover and mutation;

  • Diversity: is measured by the distance between individuals in a population. The greater the diversity of a population, the broader the exploration of the design space;

  • Parents and Children: The GA’s, through the selection process, use the individuals with the best fitness value of the current generation, called parents, to create those of the next iteration (children).
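A minimal GA embodying these operators might look as follows. This is an illustrative Python sketch (the paper uses MATLAB's ‘ga’ function), with tournament selection standing in for the selection operator; all parameter values are arbitrary:

```python
import random

def genetic_minimize(fitness, n_vars, bounds, pop_size=30, generations=100,
                     n_elite=2, p_mut=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, elitism,
    scattered (uniform) crossover and clipped gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)
        new_pop = ranked[:n_elite]                 # elitism: best pass unchanged
        while len(new_pop) < pop_size:
            # selection: two parents via 3-way tournaments
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            # crossover: each gene taken from a randomly chosen parent
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            # mutation: occasional gaussian perturbation, kept within bounds
            child = [min(hi, max(lo, (g + rng.gauss(0.0, 0.1))
                                 if rng.random() < p_mut else g))
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Usage: minimize a 2-variable sphere function over [-5, 5]^2
best = genetic_minimize(lambda x: x[0] ** 2 + x[1] ** 2,
                        n_vars=2, bounds=(-5.0, 5.0))
```

Each list of genes is an individual, `pop` is the population matrix, and one pass of the outer loop is one generation, matching the terminology above.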

The flowchart in Fig. 3 outlines the running of genetic algorithms.

Fig. 3. Genetic algorithms flowchart.

3 Reliability Analysis and RBDO Methodology

3.1 First Order Reliability Method - FORM

The reliability indices calculated in this work are obtained by applying FORM, which approximates the limit state function by a tangent hyper-surface at the design point [11]; the distance from the origin to this point is what we call the reliability index \(\beta \) [13]. Once \(\beta \) is known, it is possible to calculate the failure probability \(p_{f}^{FORM}\), which is the value of the cumulative standard normal distribution function (\(\varPhi \)) at \(-\beta \) (Eq. 8).

$$\begin{aligned} p_{f}^{FORM} = \varPhi (-\beta ) \end{aligned}$$
(8)

Obtaining \(\beta \) involves a mapping transformation of the random variables [18, 20]. In the case of the Nataf transformation, this operation goes from the original space X to a correlated normal space Z, and then from Z to a standard normal uncorrelated space Y. Equations 9 and 10 show the chain rule for the Jacobian matrices used in the process:

$$\begin{aligned} \textbf{J}_{yx} = \left[ \frac{\partial {y}_{i}}{\partial {x}_{k}} \right] = \left[ \frac{\partial {y}_{i}}{\partial {z}_{j}} \frac{\partial {z}_{j}}{\partial {x}_{k}} \right] = \textbf{L}^{-1}{(\textbf{D}^{neq})}^{-1} = \textbf{J}_{yz}\textbf{J}_{zx} \end{aligned}$$
(9)
$$\begin{aligned} \textbf{J}_{xy} = \left[ \frac{\partial {x}_{i}}{\partial {y}_{k}} \right] = \left[ \frac{\partial {x}_{i}}{\partial {z}_{j}} \frac{\partial {z}_{j}}{\partial {y}_{k}} \right] = \textbf{D}^{neq}\textbf{L} = \textbf{J}_{xz}\textbf{J}_{zy} \end{aligned}$$
(10)

where \(\textbf{L}\) is the lower triangular matrix obtained from the Cholesky decomposition of the equivalent correlation matrix and \(\mathbf {D^{neq}}\) is the diagonal matrix of standard deviations of the equivalent normal variables. Thus, the \(\textbf{x}\) and \(\textbf{y}\) variables can be obtained from Eqs. 11 and 12:

$$\begin{aligned} \textbf{y} = \textbf{J}_{yx} \left\{ \textbf{x} - {\mu }^{neq} \right\} \end{aligned}$$
(11)
$$\begin{aligned} \textbf{x} = \textbf{J}_{xy}\, \textbf{y} + {\mu }^{neq} \end{aligned}$$
(12)

where \({\mu }^{neq}\) is the vector of equivalent normal means.
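For a concrete view of Eqs. 9-12, the sketch below maps a point between the original and standard normal spaces for two correlated normal variables (so the equivalent normal parameters coincide with the actual ones); the statistics are illustrative, not taken from the paper:

```python
import numpy as np

# Two correlated normal variables (illustrative statistics, not from the paper)
mu = np.array([10.0, 5.0])            # equivalent normal means (mu^neq)
sigma = np.array([2.0, 1.0])          # equivalent normal standard deviations
rho = np.array([[1.0, 0.3],
                [0.3, 1.0]])          # correlation matrix

D = np.diag(sigma)                    # D^neq
L = np.linalg.cholesky(rho)           # lower triangular: rho = L @ L.T

J_yx = np.linalg.inv(L) @ np.linalg.inv(D)   # Eq. 9
J_xy = D @ L                                 # Eq. 10

x = np.array([12.0, 4.0])
y = J_yx @ (x - mu)                   # Eq. 11: map to standard normal space
x_back = J_xy @ y + mu                # Eq. 12: map back to the original space
```

The two Jacobians are mutual inverses, so the round trip X → Y → X recovers the original point.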

FORM calculates the reliability index \(\beta \) by means of the following steps (based on [7]):

  1. Calculation of the parameters of the non-normal distributions;

  2. Determination of the equivalent correlation coefficients and of the Cholesky decomposition matrix \(\textbf{L}\);

  3. Determination of the Jacobian matrices \(\textbf{J}_{yz}\) and \(\textbf{J}_{zy}\):

     $$\begin{aligned} \textbf{J}_{yz} = \textbf{L}^{-1} \end{aligned}$$
     (13)
     $$\begin{aligned} \textbf{J}_{zy} = \textbf{L} \end{aligned}$$
     (14)

  4. Choice of the starting point \({x}_{k}\), for \(k=0\) (beginning of the iterative process);

  Start of the iterative process:

  5. Calculation of the parameters of the equivalent normal distributions;

  6. Updating of the Jacobian matrices \(\textbf{J}_{yx}\) and \(\textbf{J}_{xy}\);

  7. Transformation of the point \({x}_{k}\) from X to Y;

  8. Assessment of the limit state function \(g({x}_{k})\);

  9. Calculation of gradients:

     a. Calculation of the partial derivatives of \(g(\textbf{X})\) in the design space X;

     b. Transformation of the gradient to Y;

     c. Calculation of the linearized sensitivity factors \(\alpha ({y}_{k})\);

  10. Calculation of the new point \({y}_{k+1}\);

  11. Transformation of \({y}_{k+1}\) back to space X;

  12. Convergence check: if the criteria below are met, the algorithm stops; otherwise, the iteration number is increased and the process returns to step 5:

      $$\begin{aligned} 1 - \frac{\vert \nabla {g(\textbf{y}_{k+1})}^{t}\, \textbf{y}_{k+1} \vert }{\vert \vert \nabla g(\textbf{y}_{k+1}) \vert \vert \; \vert \vert \textbf{y}_{k+1} \vert \vert } < \varepsilon \end{aligned}$$
      (15)
      $$\begin{aligned} \vert g(\textbf{y}_{k+1}) \vert < \delta \end{aligned}$$
      (16)

  13. Evaluation of the reliability index at the design point: \(\beta = \vert \vert {y}^{*}\vert \vert \).
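For independent normal variables, where the transformation reduces to standardization, the iterative steps above collapse into the classical HLRF recurrence. The Python sketch below illustrates it on a linear limit state g = R - S with made-up statistics; it is a simplified stand-in for the full algorithm, not the paper's implementation:

```python
import numpy as np

def form_hlrf(g, grad_g, mu, sigma, tol=1e-8, max_iter=100):
    """FORM via the HLRF recurrence for independent normal variables,
    where the mapping to standard space reduces to y = (x - mu) / sigma."""
    y = np.zeros(len(mu))                        # start at the mean point
    for _ in range(max_iter):
        x = mu + sigma * y                       # back-transform to X (step 11)
        gx = g(x)                                # limit state value (step 8)
        grad_y = grad_g(x) * sigma               # gradient in Y (steps 9a-9b)
        y_new = ((grad_y @ y - gx) / (grad_y @ grad_y)) * grad_y   # step 10
        if np.linalg.norm(y_new - y) < tol:      # convergence check (step 12)
            y = y_new
            break
        y = y_new
    return np.linalg.norm(y)                     # beta = ||y*|| (step 13)

# Illustrative limit state g = R - S (not the frame example of the paper)
mu = np.array([200.0, 100.0])
sigma = np.array([20.0, 30.0])
beta = form_hlrf(lambda x: x[0] - x[1],
                 lambda x: np.array([1.0, -1.0]), mu, sigma)
# exact result for this linear case: 100 / sqrt(20^2 + 30^2)
```

Because the limit state is linear, the recurrence converges in essentially one step and reproduces the closed-form index.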

3.2 Reliability-Based Design Optimization - RBDO

In the RBDO methodology, the uncertainties related to each variable of a problem are directly taken into account in the optimization process [7]. Failure probabilities or target reliability indices are defined as optimization constraints. Hilton and Feigen [14] first proposed the method in their 1960 work, Minimum weight analysis based on structural reliability.

In this way, we must add the reliability constraint to the optimization problem presented in Subsect. 2.1:

$$\begin{aligned} P[g_{i}(\textbf{X})]\le P_{f}, \quad i=1,2,\ldots ,n \end{aligned}$$
(17)

or:

$$\begin{aligned} \beta _{i}(\textbf{X})\ge \beta _{T}, \quad i=1,2,\ldots ,n \end{aligned}$$
(18)

where \(P[g_{i}(\textbf{X})]\) is the failure probability of a structure for a given limit state function \(g_{i}(\textbf{X})\); \(P_{f}\) is the failure probability calculated by Eq. 19 below; \(\beta _{i}(\textbf{X})\) is the reliability index of a structure; \(\beta _{T}\) is the target reliability index.

$$\begin{aligned} P_{f} = P[\textbf{X} \in \varOmega _{f}] = \int _{\varOmega _{f}} f_{X}(\textbf{X}) \,d\textbf{X} \end{aligned}$$
(19)

\(\varOmega _{f}\) is the failure domain; \(f_{X}(\textbf{X})\) is the probability density function of the random variable \(\textbf{X}\).

Table 1 shows a sequence of steps for the RBDO analysis, using a double-loop approach, where optimization is the outer loop and reliability assessment is the inner one.
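The double-loop structure can be illustrated with a deliberately small example: an outer loop enumerating discrete sections and an inner reliability evaluation (closed-form here, since the limit state is linear; the paper uses FORM). All names and data below are invented for illustration:

```python
import numpy as np

# name: (linear mass [kg/m], resistance mean, resistance std) - invented values
sections = {
    "S1": (15.0, 120.0, 12.0),
    "S2": (20.0, 160.0, 16.0),
    "S3": (25.0, 200.0, 20.0),
}
mu_S, sigma_S = 100.0, 10.0        # load effect statistics
length, beta_T = 6.0, 2.0          # member length and target reliability index

def beta_of(name):
    """Inner loop: reliability index for the linear limit state g = R - S."""
    _, mu_R, sigma_R = sections[name]
    return (mu_R - mu_S) / np.hypot(sigma_R, sigma_S)

# Outer loop: keep only designs meeting the reliability constraint (Eq. 18),
# then pick the one of minimum mass (the objective function).
feasible = [s for s in sections if beta_of(s) >= beta_T]
best = min(feasible, key=lambda s: sections[s][0] * length)
```

In the actual analysis the inner evaluation is a full FORM run coupled to the structural solver, which is exactly where the computational cost of the double loop comes from.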

4 Numerical Example: RBDO Analysis of a Single Floor Steel Frame

4.1 General Information

An RBDO analysis is performed for the single floor steel frame shown in Fig. 4. The problem has 8 random variables, whose statistical characteristics are given in Table 2: the applied loads D, L and W; the section properties area A, moment of inertia \(I_{x}\) and plastic section modulus \(Z_{x}\); and the material properties Young’s modulus E and yield strength \(F_{y}\).

Table 3 shows the characteristics of the W-shapes, from the AISC database (2017), among which the optimizer searches for the best configuration that satisfies the constraint (a target reliability index) and the objective function, which is to minimize the total mass. This frame has been studied by several authors [5, 12, 19], but originally as a reliability problem only.

4.2 Limit State Function

For the reliability analysis carried out by FORM, one ultimate limit state is verified: combined flexure and axial force acting on column element 4, node 4. This interaction is limited by Eqs. 20 and 21 [2]:

  (i) If \(\frac{P_{r}}{P_{c}} \ge 0.2\):

    $$\begin{aligned} \frac{P_{r}}{P_{c}} + \frac{8}{9} \left( \frac{M_{rx}}{M_{cx}} + \frac{M_{ry}}{M_{cy}} \right) \le 1.0 \end{aligned}$$
    (20)

  (ii) If \(\frac{P_{r}}{P_{c}} < 0.2\):

    $$\begin{aligned} \frac{P_{r}}{2P_{c}} + \left( \frac{M_{rx}}{M_{cx}} + \frac{M_{ry}}{M_{cy}} \right) \le 1.0 \end{aligned}$$
    (21)
Table 1. General sequence of steps for the RBDO analysis.
Fig. 4. Single Floor Steel Frame.

Table 2. Statistical properties of random variables [12].

where \(P_{r}\) is the required axial strength; \(P_{c}\) is the available axial strength (Eqs. 22 and 23, for tension or compression); \(M_{r}\) is the required flexural strength; \(M_{c}\) is the available flexural strength (Eq. 24); the subscripts x and y denote major and minor axis bending, respectively.

$$\begin{aligned} P_{c,ten} = A F_{y} \end{aligned}$$
(22)
$$\begin{aligned} P_{c,com} = A F_{cr} \end{aligned}$$
(23)
$$\begin{aligned} M_{c} = Z_{x} F_{y} \end{aligned}$$
(24)

\(F_{cr}\) is the critical stress given by Eq. 25 or Eq. 26:

  (i) If \(\lambda _{c} \le 1.5\):

    $$\begin{aligned} F_{cr} = \left( 0.658^{\lambda _{c}^{2}}\right) F_{y} \end{aligned}$$
    (25)

  (ii) If \(\lambda _{c} > 1.5\):

    $$\begin{aligned} F_{cr} = \left( \frac{0.877}{\lambda _{c}^{2}} \right) F_{y} \end{aligned}$$
    (26)

\(\lambda _{c}\) is the reduced slenderness ratio [1], calculated by Eq. 27:

$$\begin{aligned} \lambda _{c} = \frac{KL}{\pi } \sqrt{\frac{A F_{y}}{E I_{x}}} \end{aligned}$$
(27)

where K is the effective length factor and L the laterally unbraced length of the member.
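Equations 20-27 translate directly into code. The Python sketch below is an illustrative reading of these checks, not the CS-ASA/MATLAB implementation; the convention that g ≤ 0 denotes failure and the argument lists are assumptions:

```python
import numpy as np

def interaction_ratio(Pr, Pc, Mrx, Mcx, Mry=0.0, Mcy=1.0):
    """Beam-column interaction of Eqs. 20-21."""
    if Pr / Pc >= 0.2:
        return Pr / Pc + (8.0 / 9.0) * (Mrx / Mcx + Mry / Mcy)   # Eq. 20
    return Pr / (2.0 * Pc) + (Mrx / Mcx + Mry / Mcy)             # Eq. 21

def critical_stress(A, Ix, E, Fy, K, Lb):
    """Critical compressive stress, Eqs. 25-27."""
    lam_c = (K * Lb / np.pi) * np.sqrt(A * Fy / (E * Ix))        # Eq. 27
    if lam_c <= 1.5:
        return (0.658 ** lam_c ** 2) * Fy                        # Eq. 25
    return (0.877 / lam_c ** 2) * Fy                             # Eq. 26

def g_limit(Pr, Mrx, A, Ix, Zx, E, Fy, K, Lb):
    """Limit state: g <= 0 is taken here to denote failure (assumed sign)."""
    Pc = A * critical_stress(A, Ix, E, Fy, K, Lb)                # Eq. 23
    Mc = Zx * Fy                                                 # Eq. 24
    return 1.0 - interaction_ratio(Pr, Pc, Mrx, Mc)
```

In the reliability loop, FORM evaluates this g at trial points supplied by the structural analysis, which provides the required strengths \(P_{r}\) and \(M_{rx}\).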

4.3 Design Variables

The design variables are W-shapes, treated as discrete by the GA optimizer: integers varying from 1 to 18 are mapped to the sections in Table 3, whose characteristics, such as linear mass, area (A), moment of inertia (\(I_{x}\)) and plastic section modulus (\(Z_{x}\)), are used in the process.

For both first and second order analyses, three possibilities were studied: 1) all elements share the same W-shape (1 optimization variable); 2) beam elements and column elements use different W-shapes (2 optimization variables); 3) beam elements share the same W-shape, but each column may use a different W-shape (3 optimization variables).

Table 3. W-shapes properties of the design space.

4.4 Design Constraints

Besides the lateral constraints of the previous subsection, there is also the reliability constraint, given by a target value, as in Eq. 28. Three scenarios were proposed: \(\beta _{T,1}=2.0\), \(\beta _{T,2}=2.5\) and \(\beta _{T,3}=3.0\).

$$\begin{aligned} c = \frac{\beta _{T,i}}{\beta _{i}}-1 \le 0 \end{aligned}$$
(28)

4.5 Objective Function

The objective function, which represents the total mass of the structure to be minimized, is given by Eq. (29):

$$\begin{aligned} M(\textbf{X}) = \sum _{i=1}^{n} m_{i} l_{i} , \end{aligned}$$
(29)

where n is the number of variables; \(m_{i}\) is the linear mass of a given W-shape (Table 3); \(l_{i}\) is the length of the member.
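Equation 29 amounts to a lookup in the section table followed by a weighted sum. The sketch below shows this with a hypothetical three-entry excerpt standing in for Table 3 (the shape names, masses and lengths are illustrative, not the actual data):

```python
# Hypothetical excerpt standing in for Table 3 (index -> shape name, kg/m);
# the real design space maps the integers 1-18 to AISC W-shapes.
w_shapes = {1: ("W200x36", 35.9), 2: ("W250x45", 44.8), 3: ("W310x52", 52.0)}

lengths = [4.0, 4.0, 6.0]        # column, column and beam lengths in m (invented)

def total_mass(design):
    """Objective function of Eq. 29: sum of m_i * l_i over the members."""
    return sum(w_shapes[idx][1] * l for idx, l in zip(design, lengths))

mass = total_mass([1, 1, 2])     # columns with shape 1, beam with shape 2
```

The GA individual is the integer vector `design`; decoding it through the table keeps the optimization variables discrete while the objective stays a simple sum.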

4.6 Optimization Algorithm Setting

Genetic Algorithms were used to optimize the structure, with the following specific settings:

  • Population size (‘PopulationSize’): 10 - 12 individuals;

  • Creation function (‘CreationFcn’): ‘gacreationuniform’ (default);

  • Crossover function (‘CrossoverFcn’): ‘crossoverscattered’ (default);

  • Mutation function (‘MutationFcn’): ‘mutationgaussian’ (default);

  • Elite individuals (‘EliteCount’): 5% of population size (default);

  • Maximum number of generations (‘MaxGenerations’): 200;

  • Algorithm for handling nonlinear constraints (‘NonlinearConstraintAlgorithm’): ‘penalty’;

  • Tolerance for objective function (‘FunctionTolerance’): \( 10^{-6} \);

  • Tolerance for constraints (‘ConstraintTolerance’): \( 10^{-3} \);

4.7 Results

Case A: One Design Variable. Table 4 shows the results obtained for the case where all elements have the same section. For \(\beta _{T,3}=3.0\), the second order analysis achieves a material saving of \(11.8\%\) compared to the first order case. Furthermore, the second order case presents smaller constraint violations and, consequently, higher calculated reliability indices when both approaches use the same structural configuration.

Table 4. Case A: one design variable.

Case B: Two Design Variables. Table 5 shows the results obtained for the case where beam elements and column elements have different sections. For \(\beta _{T,3}=3.0\), the second order analysis achieves a material saving of \(6.1\%\) compared to the first order case. Furthermore, the second order case presents smaller constraint violations and, consequently, higher calculated reliability indices when both approaches use the same structural configuration.

Table 5. Case B: two design variables.

Case C: Three Design Variables. Table 6 shows the results obtained for the case where the beam elements share the same W-shape and the column elements may differ in their sections. In all three scenarios, both approaches yielded the same structural configuration. Furthermore, the second order case presents smaller constraint violations and, consequently, higher calculated reliability indices.

Table 6. Case C: three design variables.

5 Conclusions

This work presented the theoretical background and a numerical example of the RBDO method, using FORM and Genetic Algorithms in a double-loop approach. Despite the simplicity of the numerical application, considerable computational effort was required, with the analysis time being quite dependent on the computer’s processing power.

It was possible to notice that, as the number of design variables increased, the masses found by the first and second order analyses became closer. However, the second order case presented smaller constraint violations and, consequently, higher calculated reliability indices when both approaches used the same structural configuration.