1 Introduction

Metaheuristic algorithms have extensive applications in structural optimization owing to their superior global search capacity and simple iterative mechanisms (Astroza et al. 2016; Cao et al. 2017b; Chen et al. 2015; Gandomi et al. 2015). However, their slow convergence and the enormous computational expense of the structural optimization process call for cost-effective optimization approaches for large-scale structures based on the conventional optimization formulation (Arora and Wang 2005).

Structural optimization using metaheuristic algorithms has evolved tremendously along four lines: 1) improving or hybridizing standard metaheuristic algorithms (Ahrari and Deb 2016; Chen et al. 2015; Farshchin et al. 2016; García-Segura et al. 2015; Kaveh et al. 2014); 2) proposing new metaheuristic algorithms (Abdel-Raouf and Abdel-Baset 2014; Gandomi et al. 2013; Kaveh and Bakhshpoori 2016; Kaveh and Mahdavi 2014; Rashedi et al. 2009); 3) utilizing parallel computing techniques (Agarwal and Raich 2006; Jansen and Perez 2011; Umesha et al. 2005); and 4) incorporating surrogate modeling approaches (Jin 2011; Lute et al. 2009; Ong et al. 2003; Shan and Wang 2010). The efficiency gains in the first two categories are problem-dependent in light of the No-Free-Lunch theorem (Wolpert and Macready 1997). The third category integrates metaheuristic algorithms with parallel computing, whose demand for a high-performance computing platform restricts its applications. The fourth category normally applies a low-cost surrogate model to substitute the original objective or constraint functions within the optimization process; however, determining an accurate surrogate model for high-dimensional and strongly nonlinear problems remains challenging (Li et al. 2016).

Metaheuristic algorithms were originally designed for unconstrained problems, and various methods have emerged to handle constraints efficiently (Efren 2009; Jordehi 2015; Mezura-Montes and Coello 2011). The penalty-based methods (Jordehi 2015), including static, dynamic, and adaptive penalty functions as well as the death penalty, are widely applicable for their simplicity and ease of implementation: they transform the constrained problem into an unconstrained one whose objective is augmented with penalties proportional to the degree of constraint infeasibility. Unlike the penalty methods, other constraint-handling techniques, such as the death penalty approach (Efren 2009; Mezura-Montes and Coello 2011), the Deb rule (Deb 2000), and the bi-objective method (Venter and Haftka 2010), treat the objective function and the constraint violation separately without parameter tuning. Constraint-handling approaches have also been developed around the search rules of specific metaheuristic algorithms, for instance, the fly-back mechanism in particle swarm optimization (PSO) (Venter and Sobieszczanski-Sobieski 2003), the segregated genetic algorithm (Le Riche et al. 1995), the constraint-consistent genetic algorithm (Kowalczyk 1997), the filter-genetic algorithm (Tang and Wang 2015), and the mapping strategy for teaching-learning-based optimization (Baghlani et al. 2017). In summary, these approaches guide the search toward the feasible regions while assessing both the objective function and the constraint functions for every new trial in the optimization process. Consequently, for a given metaheuristic algorithm, the indicator measuring the search capability of these approaches coincides with that assessing the computational efficiency, since both rely on the number of objective or constraint-violation evaluations in the implemented optimization.

For structural optimization, the objective function typically incurs a low computational cost that may be negligible compared with that of the constraint evaluations, which require time-consuming structural analyses. The computational efficiency of structural optimization therefore depends mainly on the number of structural analyses, and the efficiency of metaheuristic algorithms can be enhanced by reducing the number of constraint evaluations during the optimization process while maintaining the accuracy of the optimal design. Unlike previous constraint-handling approaches, which focus on maintaining the feasibility of the solutions and/or accelerating convergence, Kazemzadeh Azad et al. (Kazemzadeh et al. 2013; Kazemzadeh Azad and Hasançebi 2013) developed an upper bound strategy (UBS) to eliminate the structural analyses for solutions whose net weight exceeds the penalized weight of the current history best. In a similar vein, Cao et al. (2017a) introduced a filter strategy into the PSO to remove redundant structural analyses from the optimization procedure. Compared with the UBS, the filter strategy initializes the swarm with feasible solutions and takes the net weight of the best trials found as the upper bound, which avoids the troublesome selection of penalty coefficients and always guarantees the feasibility of the designs. Recent studies (Cao et al. 2017b; Kaveh and BolandGerami 2017; Kaveh and Ilchi Ghazaan 2017; Kazemzadeh Azad 2017a; Sheikholeslami et al. 2016) have demonstrated that these strategies can significantly enhance the computational efficiency of metaheuristic algorithms and show great potential in large-scale structural optimization. This study extends the filter strategy proposed in (Cao et al. 2017a) to other metaheuristic algorithms and develops a general formulation of the approach from the basic iterative formulas of the metaheuristic algorithms.

The remainder of this paper is organized as follows. Section 2 presents the general structural optimization formulation. Section 3 classifies the metaheuristic algorithms by their new-solution updating rules and describes the filter strategy for metaheuristic algorithms with elitism. Section 4 applies the proposed method to five different metaheuristic algorithms and five PSO-based methods, investigating its performance through mathematical simulations and examining the relationship between the computational efficiency and the quality of the optimal solutions. Section 5 validates the enhanced computational efficiency of the proposed method on two large-scale structural optimization problems, comparing computational time against the conventional penalty method and the Deb rule. Section 6 summarizes the main conclusions of this study.

2 Structural optimization formulation

Structural optimization aims to identify the lightest or most cost-effective design under various structural functional requirements, e.g., deflection requirements at the serviceability limit state and stress and stability requirements at the ultimate limit state. The optimization model therefore usually contains two parts: the objective function and the constraint functions. As various uncertain factors may affect the final cost estimation, investigators often define the objective function, Φ, based on the total usage of structural materials,

$$ \Phi = \min\bigl(W(\mathbf{x})\bigr) = \min\left(\sum_{i=1}^{N_m} \rho_i L_i A_i \right) $$
(1)

where \(\mathbf{x}\) denotes the design variables and W represents the total weight of a structure composed of \(N_m\) members. For each member i, the parameters \(\rho_i\), \(L_i\), and \(A_i\) refer to the material density, length, and cross-sectional area, respectively.

The constraints comprise several inequalities driven by the mechanical requirements of the structure. Assuming that α quantifies the structural performance in terms of stresses, displacements, natural frequencies, critical stability coefficients, etc., the constraint functions become,

$$ \begin{aligned} &\alpha_j \le \alpha_j^{\ast} \quad \text{for } j \in \left[1,\cdots,N_j\right] \\ &\alpha_k \ge \alpha_k^{\ast} \quad \text{for } k \in \left[1,\cdots,N_k\right] \\ &\mathbf{x}_{lower} \le \mathbf{x} \le \mathbf{x}_{upper} \end{aligned} $$
(2)

where \(\mathbf{x}_{lower}\) and \(\mathbf{x}_{upper}\) define the lower and upper bounds of the design variables, respectively. \(\alpha_j\) and \(\alpha_k\) refer to the functional parameters, with \(\alpha_j^{\ast}\) and \(\alpha_k^{\ast}\) defining the upper and lower bound values, respectively. \(N_j\) and \(N_k\) indicate the total numbers of the two types of inequality constraints.

Metaheuristic algorithms typically utilize a function ψ to evaluate the constraint violation in the optimization process: ψ = 0 implies a feasible design, while ψ > 0 indicates that the design variables lie in the infeasible space. The function ψ is defined as,

$$ \psi \left(\mathbf{x}\right)=\max \left(0,\phi \right) $$
(3)

where ϕ denotes the degree of constraint violation,

$$ \phi =\sum_{j=1}^{N_j}{\phi}_j+\sum_{k=1}^{N_k}{\phi}_k $$
(4)

The functions \(\phi_j\) and \(\phi_k\) evaluate the violation against the upper-bound and lower-bound constraint requirements, respectively,

$$ \phi_j = \begin{cases} 0 & \alpha_j \le \alpha_j^{\ast} \\ \left| \dfrac{\alpha_j - \alpha_j^{\ast}}{\alpha_j^{\ast}} \right| & \alpha_j > \alpha_j^{\ast} \end{cases} $$
(5)
$$ \phi_k = \begin{cases} 0 & \alpha_k \ge \alpha_k^{\ast} \\ \left| \dfrac{\alpha_k - \alpha_k^{\ast}}{\alpha_k^{\ast}} \right| & \alpha_k < \alpha_k^{\ast} \end{cases} $$
(6)

For given design variables x, the objective function is directly computable thanks to its explicit expression in structural optimization problems. The constraints, however, are implicit functions whose evaluation relies on time-consuming structural analyses, especially for large-scale structures. In summary, structural optimization features a low-cost objective function and a high-cost constraint-violation evaluation process.
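As a minimal sketch of this formulation (all names and array shapes are illustrative assumptions; the responses α would come from a separate, expensive structural analysis):

```python
import numpy as np

def weight(A, rho, L):
    """Objective (1): total weight of the N_m members, cheap to evaluate."""
    return float(np.sum(rho * L * A))

def violation(alpha_u, limit_u, alpha_l, limit_l):
    """Constraint violation psi per (3)-(6).

    alpha_u / limit_u: responses with upper-bound limits (e.g. stresses),
    alpha_l / limit_l: responses with lower-bound limits (e.g. frequencies).
    """
    phi_u = np.maximum(0.0, (alpha_u - limit_u) / np.abs(limit_u))  # (5)
    phi_l = np.maximum(0.0, (limit_l - alpha_l) / np.abs(limit_l))  # (6)
    phi = phi_u.sum() + phi_l.sum()                                 # (4)
    return max(0.0, phi)                                            # (3)
```

In this setting `weight` is essentially free, whereas filling `alpha_u` and `alpha_l` requires a full structural analysis, which is precisely the cost the filter strategy of Section 3 targets.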

3 Filter strategy for metaheuristic algorithms with elitism

Many constraint-handling techniques have emerged to extend metaheuristic algorithms, originally designed for unconstrained optimization, to constrained problems. In contrast to existing constraint-handling approaches, which evaluate both the objective value and the constraint violations for every candidate solution, this section presents a filter strategy that eliminates unnecessary feasibility checks during metaheuristic optimization to enhance the computational efficiency.

3.1 Metaheuristic algorithms with elitism

Metaheuristic algorithms solve optimization problems with the following common steps: 1) initializing a series of random solutions; 2) seeking the optimal solution iteratively through special search rules; and 3) reporting the best solution found in the search. The second step is the kernel of a metaheuristic algorithm and simulates the mimicked physical or biological process. It contains three elements: evaluating the fitness of each solution, producing new solutions, and selecting solutions for the next iteration. Different metaheuristic approaches employ distinct methods to arbitrate between the newly generated solutions and the old ones, but these methods fall into two categories: 1) replacement and 2) elitism.

As demonstrated in Fig. 1, assuming n solutions in each iteration step, the search rule generates n new solutions, where the second, third, and (n − 1)th solutions have better fitness than their corresponding old solutions. The replacement method (Fig. 1a) substitutes all old solutions with the newly generated ones without considering the fitness information at this stage; the gravitational search algorithm (GSA) (Rashedi et al. 2009) belongs to this category. The elitism method (Fig. 1b), by contrast, updates an old solution only when the new solution improves its fitness, ensuring that the algorithm retains the best solutions found in previous iterations. The personal-best and global-best update rules in the PSO and the harmony memory update rule in the harmony search (HS) belong to the elitism category.

Fig. 1 Two typical solution selection rules in metaheuristic algorithms: (a) replacement; (b) elitism

Metaheuristic algorithms must calculate the fitness of every solution because their search (new-solution-producing) rules rely on the fitness information of the current solutions. For unconstrained problems, the number of objective function evaluations per iteration thus equals the population size in evolutionary algorithms or the swarm size in swarm intelligence algorithms. For constrained problems, however, elitism-based selection exhibits apparent advantages over the replacement method when employing the filter strategy proposed in this study.
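The two selection rules can be contrasted in a few lines (a minimal sketch for a minimization problem; `fitness` and the solution lists are assumed stand-ins, not the algorithms' actual data structures):

```python
def replace_all(old, new):
    """Replacement (e.g. GSA): the new solutions supersede all old ones,
    regardless of fitness at this stage."""
    return new

def elitist_update(old, new, fitness):
    """Elitism (e.g. PSO personal bests, HS memory): a new solution is kept
    only when it improves on its old counterpart."""
    return [n if fitness(n) < fitness(o) else o for o, n in zip(old, new)]
```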

3.2 Filter strategy

In contrast to unconstrained problems, constrained single-objective optimization problems, as illustrated in Section 2, contain an objective function and a series of constraint functions. The fitness calculation of a solution utilizes both the objective value and the constraint-violation information. Metaheuristic algorithms based on replacement strategies therefore require as many constraint-violation evaluations as objective function evaluations in the optimization process. Elitism-based selection can obviate this requirement by initializing the solutions in the feasible space and treating the objective function and the constraints separately.

In the search process, each newly generated solution falls into one of four states relative to its old counterpart: 1) in the feasible space with an improved objective value; 2) in the feasible space with a deteriorated objective value; 3) in the infeasible space with an improved objective value; and 4) in the infeasible space with a deteriorated objective value. With all old solutions in the feasible space, the elitism will filter out all new solutions with deteriorated objective values (states 2 and 4) regardless of their feasibility; the constraint-violation evaluations for these solutions are therefore redundant. Based on this idea, the filter strategy, which is predicated on the standard death penalty method (Jordehi 2015), aims to eliminate these redundant constraint-violation evaluations. Like the death penalty method, this approach requires initializing all solutions in the feasible space, and it employs the following steps in the search process (a code sketch follows the list):

1) Evaluating objective values of the newly generated solutions;

2) Selecting new solutions with improved objective values;

3) Evaluating the constraint violations for these selected new solutions;

4) Choosing the feasible ones and replacing the corresponding old solutions.

These four steps show that the number of constraint-violation evaluations required by the proposed method in each iteration step is smaller than the population of newly generated solutions. The method is founded on the elitism characteristic of metaheuristic algorithms and does not compromise their search capacity. Compared with the death penalty method, the above approach adds a filter operator that eliminates the redundant constraint evaluations for solutions with deteriorated objective values, without reducing the effectiveness of the constraint-handling technique.
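Steps 1) to 4) condense into a short sketch of one filtered iteration (a minimal illustration, not the authors' implementation; `objective`, `is_feasible`, and the counter dictionary are assumed names):

```python
def filtered_step(old, old_obj, new, objective, is_feasible, counters):
    """One elitist iteration with the filter; all old solutions are assumed
    feasible and their objective values cached in old_obj."""
    for i, trial in enumerate(new):
        counters["n_obj"] += 1
        f = objective(trial)              # step 1: cheap weight evaluation
        if f >= old_obj[i]:
            continue                      # step 2: filtered out, no analysis
        counters["n_con"] += 1            # step 3: one structural analysis
        if is_feasible(trial):            # step 4: elitist replacement
            old[i], old_obj[i] = trial, f
    return old, old_obj
```

The ratio `counters["n_con"] / counters["n_obj"]` accumulated over a run is exactly the efficiency factor R defined in (7) below.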

Figure 2 depicts the flowchart of the filter strategy incorporated in metaheuristic algorithms with elitism. Unlike the penalty methods, which require careful fine-tuning of the penalty factor to set the penalty severity (Kazemzadeh Azad 2017b; Kazemzadeh Azad and Hasançebi 2013), this approach retains the full performance of the death penalty method through a highly simplified procedure without additional parameters. The filter mechanism depends solely on the elitism of the metaheuristic algorithms and can thus enhance the computational efficiency of the optimization when coupled with other constraint-handling techniques. For instance, it becomes identical to the UBS (Kazemzadeh Azad and Hasançebi 2013) when incorporated in the penalty methods. The Deb rule (Deb 2000) can also work with the filter mechanism to reduce the time-consuming structural analyses in large-scale structural optimization. Compared with the original constraint-handling methods, these filter-coupled approaches maintain their original performance with enhanced computational efficiency.

Fig. 2 Flowchart of the metaheuristic algorithms with elitism incorporating the filter strategy

This study defines a factor, R, to measure the computational efficiency of the optimization algorithm,

$$ R=\frac{N_{con}}{N_{obj}} $$
(7)

where \(N_{obj}\) denotes the total number of objective evaluations. \(N_{con}\) refers to the total number of constraint-violation evaluations and dominates the computational efficiency in an optimization process. A smaller value of R implies a larger improvement in the computational efficiency using the proposed method. R is equal to 1 for the penalty function method and the Deb rule.

3.3 Initialization of feasible solutions

The success of metaheuristic-based optimization also depends on the starting solutions (Maaranen et al. 2007). Like the death penalty method, the proposed filter strategy requires initializing solutions in the feasible space. Kazemzadeh Azad (2017b) has demonstrated that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic-based structural optimization algorithms. This study employs the improved opposition-based initialization strategy (Cao et al. 2017a; Cao et al. 2017b), outlined below, to generate feasible solutions.

1) Randomly produce a solution P in the search space;

2) Calculate the opposition point of P, denoted by OP, in the design space by,

$$ {\mathbf{x}}_{OP}={\mathbf{x}}_{upper}+{\mathbf{x}}_{lower}-{\mathbf{x}}_P $$
(8)

where \(\mathbf{x}_{upper}\) and \(\mathbf{x}_{lower}\) denote the upper and lower bounds of the design variables, respectively. \(\mathbf{x}_P\) represents the randomly generated solution in step 1 and \(\mathbf{x}_{OP}\) refers to the opposite position of P in the design space.

3) Evaluate the constraint violations of solutions P and OP and calculate their objective values. If both particles reside in the feasible space, select the one with the better objective value and go back to step 1; if only one of them is feasible, preserve the feasible one as an initial feasible particle and go back to step 1; if both particles violate the constraints, go back to step 1. Repeat the loop until the number of selected solutions equals the predefined population size.

Amplifying the values of the sizing variables in the lower-bound vector, \(\mathbf{x}_{lower}\), facilitates generating feasible solutions for problems with small feasible regions; a code sketch of the whole procedure follows. Besides the above generic and stochastic approach, the mapping strategy (Baghlani et al. 2017) can also generate random feasible solutions for structural optimization problems whose constraints vary linearly with the design variables.
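Under the same assumptions as before, the initialization loop of steps 1) to 3) might look as follows (a sketch only; `rng`, `objective`, and `is_feasible` are illustrative stand-ins, with `is_feasible` wrapping the constraint check of Section 2):

```python
import numpy as np

def init_feasible(n_pop, x_lower, x_upper, objective, is_feasible, rng):
    """Improved opposition-based initialization of a feasible population."""
    population = []
    while len(population) < n_pop:
        x_p = rng.uniform(x_lower, x_upper)   # step 1: random solution P
        x_op = x_upper + x_lower - x_p        # step 2: opposite point OP, (8)
        feasible = [x for x in (x_p, x_op) if is_feasible(x)]
        if len(feasible) == 2:                # both feasible: keep the better
            population.append(min(feasible, key=objective))
        elif feasible:                        # only one feasible: keep it
            population.append(feasible[0])
        # neither feasible: simply draw a new pair (step 3)
    return population

# usage sketch: swarm = init_feasible(20, lb, ub, weight, check, np.random.default_rng(0))
```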

4 Applications to classical problems

This section utilizes five widely used numerical optimization problems to test the performance of the proposed constraint-handling method with five standard metaheuristic algorithms with elite selection: PSO (Trelea 2003), HS (Degertekin 2012), cuckoo search (CS) (Gandomi et al. 2013), the flower pollination algorithm (FPA) (Yang 2012), and teaching-learning-based optimization (TLBO) (Farshchin et al. 2016). This study also employs four improved versions of the PSO, namely APSO (Nickabadi et al. 2011), BBPSO (Kennedy 2003), IPSO (Wu et al. 2011), and PSOPC (He et al. 2004), to examine the relationship between the search efficiency and the computational efficiency. Each numerical test comprises 50 independent runs of the optimization process, and the algorithmic parameters follow those recommended in the corresponding literature.

The Appendix provides the details of the objective and constraint functions for the five benchmark examples from (Yu et al. 2016). The total number of objective evaluations for Problems 1, 3, 4, and 5 is 10,000, while that for Problem 2, the welded beam design problem, is 20,000. The population size equals 10 in CS and TLBO and 20 in FPA and PSO in all numerical tests. The termination criterion for HS is the maximum number of objective evaluations.
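For concreteness, the test protocol can be sketched as below (a hypothetical driver, not the authors' code; `make_optimizer` and `problem` are assumed interfaces wrapping the algorithms and the benchmark definitions):

```python
import statistics

def benchmark(make_optimizer, problem, runs=50, budget=10_000):
    """Run `runs` independent optimizations and collect weight and R stats."""
    weights, ratios = [], []
    for seed in range(runs):
        counters = {"n_obj": 0, "n_con": 0}
        best_weight = make_optimizer(seed).run(problem, budget, counters)
        weights.append(best_weight)
        ratios.append(counters["n_con"] / counters["n_obj"])   # R, per (7)
    return (statistics.mean(weights), statistics.stdev(weights),
            statistics.mean(ratios))
```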

Table 1 lists the statistical results of the five problems using the five different algorithms. Most of the algorithms, except HS, yield optimal solutions within 0.01% of the best objective value found so far. Both CS and PSO identified the best solution for the other four problems in the 50 independent runs. Considering the average and standard deviation values, CS provides the best solutions in Problems 2 and 5, and TLBO performs best in Problem 1. FPA leads to the best solution in Problem 3, and PSO shows the strongest search ability in Problem 4. Even though HS fails to identify the best solutions for all five problems, its average and standard deviation values in Problem 4 demonstrate that it performs better there than FPA and TLBO. These results demonstrate that the search capabilities of the algorithms are problem-dependent even when the total number of objective evaluations remains the same.

Table 1 Comparison of the statistical results of the five problems using different metaheuristic algorithms

Figure 3 compares the average value of R obtained with the five different optimization algorithms. HS has the lowest R value, while TLBO yields the largest among the five methods, implying that HS has the highest computational efficiency of the five algorithms. Figure 3 and Table 1 also demonstrate that a larger R value does not correspond to a better solution. In Problem 4, for example, PSO has the second smallest R while performing the best among all five methods.

Fig. 3 The average R calculated by different types of metaheuristic algorithms

Table 2 compares the statistical results of the five problems using PSO and its variants. PSOPC provides the best solution in Problems 1, 2, 3, and 5, while APSO performs best in Problem 4. The advanced PSO approaches do not always outperform the standard PSO, which again shows that the search ability of an algorithm is problem-dependent. Figure 4 shows the average R values calculated for the different PSO methods. In contrast to Fig. 3, the variation of the R values for the same problem among the PSO-based methods is much smaller. IPSO shows the highest computational efficiency among the five algorithms. PSOPC has the lowest R value in Problem 2 and the second lowest in Problems 1 and 3; for these problems, PSOPC also provides the optimal solutions. PSOPC thus exhibits both strong search ability and high computational efficiency in these three problems. This implies again that the accuracy of the optimal solution is independent of the R value.

Table 2 Comparison of the statistical results of the five problems using different PSO methods
Fig. 4 The average R calculated by different types of PSO algorithms

5 Applications to large-scale structural optimization

This section employs two large-scale structural optimization problems, a 26-storey, 942-bar truss tower (Hasançebi 2008; Hasançebi and Erbatur 2002) and a three-span suspension bridge with a total length of 1800 m (Cao et al. 2017b), to demonstrate the computational efficiency of the proposed constraint-handling approach. The truss tower benchmark examines the computational efficiency of different optimization algorithms, while the suspension bridge example compares the efficiency of the proposed method against the penalty function approach and the Deb rule.

5.1 26-Storey 942-bar truss tower

Figure 5 shows a 26-storey space truss tower consisting of 942 bars and 244 nodes. This problem aims to identify the lightest design, with the member cross-sectional areas as the design variables, divided into 59 groups as shown in Fig. 5. The loading on the tower includes,

Fig. 5 The configuration of the 942-bar truss: (a) isometric view; (b) front view; and (c) section view of section 3

1) Vertical loads at each node in the first, second, and third sections of −13.344 kN (−3.0 kips), −26.688 kN (−6.0 kips), and −40.032 kN (−9.0 kips), respectively;

2) Horizontal loads of 6.672 kN (1.5 kips) in the x-direction at each node on the left half of the tower and 4.448 kN (1.0 kips) in the x-direction at each node on the right half;

3) A horizontal load in the y-direction with a magnitude of 4.448 kN (1.0 kips) applied at all nodes.

The density and elastic modulus of the material are 2767.99 kg/m³ (0.1 lb/in³) and 69 GPa (1.0 × 10⁴ ksi), respectively. The constraint conditions comprise allowable stresses and displacements for the truss tower: the maximum allowable stress in each member under tension and compression equals 172.37 MPa (25 ksi), while the maximum allowable displacement in the x, y, and z directions for all nodes is 38.1 cm (15.0 in). The cross-sectional area of each bar is an integer in in², ranging from 1.0 in² (6.45 cm²) to 200 in² (1290.32 cm²).
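Under the formulation of Section 2, these design conditions might be encoded as follows (an illustrative sketch; variable names and the SI unit conversions are assumptions):

```python
MPA, CM = 1.0e6, 1.0e-2                 # unit helpers: Pa per MPa, m per cm

SIGMA_ALLOW = 172.37 * MPA              # stress limit, tension and compression
DISP_ALLOW = 38.1 * CM                  # nodal displacement limit in x, y, z
AREA_MIN, AREA_MAX = 1.0, 200.0         # integer member areas, in in.^2

def violation_942(stresses, displacements):
    """Sum of upper-bound violations per (4)-(5) for this benchmark; the
    responses would come from a structural analysis of the 942-bar tower."""
    phi = sum(max(0.0, (abs(s) - SIGMA_ALLOW) / SIGMA_ALLOW) for s in stresses)
    phi += sum(max(0.0, (abs(d) - DISP_ALLOW) / DISP_ALLOW) for d in displacements)
    return phi
```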

Figure 6 depicts the convergence curves of the 942-bar problem using CS, FPA, HS, IPSO, and TLBO, with a maximum of 80,000 objective function evaluations in each optimization run. Figure 6a illustrates the evolution of the weight of the tower versus the number of objective evaluations: TLBO fails to solve this high-dimensional optimization problem, while the other four algorithms converge to a narrow weight range between 1.20 × 10⁵ lb (5.44 × 10⁴ kg) and 1.40 × 10⁵ lb (6.35 × 10⁴ kg). Compared with the randomly generated feasible solutions, these optimizations achieve about 90% material savings. Figure 6b shows the evolution of the objective value versus the number of structural analyses (constraint-violation evaluations). To accomplish the 80,000 objective evaluations, the numbers of structural analyses required by the different algorithms differ markedly: HS exhibits the highest computational efficiency, while TLBO requires the most computational effort even though it fails to find the optimum solution. Table 3 lists the value of R and the computational time for this 942-bar problem. As this problem has a wide search domain, the R value of each algorithm decreases compared with those in Fig. 3. The R value for HS is only 0.09, which implies that the proposed constraint-handling method leads to more than 90% time savings. The computational time for HS is only about 24.4 min, while that for TLBO (with an R value of 0.77) is 117.0 min.

Fig. 6 The convergence curves of the 942-bar truss problem with respect to: (a) no. of objective evaluations; and (b) no. of structural analyses

Table 3 Comparison of the optimization results for the 942-bar problem

Hasançebi and co-workers have applied simulated annealing (SA) (Hasançebi and Erbatur 2002) and an adaptive evolution strategy (AES) (Hasançebi 2008) to seek the optimal design of this 942-bar tower. Table 3 compares the best results obtained in (Hasançebi 2008; Hasançebi and Erbatur 2002) with those of this study. The lightest weight obtained by Hasançebi (2008) is about 141,243.7 lb using AES with 150,000 structural analyses. Except for TLBO, the four methods in this study all yield much lighter optimal designs than the best AES result. The optimal weight obtained by IPSO, based on 24,740 structural analyses, is only about 90% of the AES weight, and HS requires just 7142 structural analyses while identifying a much better optimum than AES.

5.2 A three-span suspension bridge

Figure 7 depicts the layout of a three-span suspension bridge with a total length of 1800 m and a width of 40 m. \(L_s\) and \(L_m\) denote the lengths of the side span and the main span, respectively, and f refers to the sag of the cable at the mid-span. The shortest hanger at the main span, \(H_s\), is 6 m, and the distance between two adjacent hangers is 15 m. Both the horizontal distance between the end of the bridge and the cable anchorage and the height of the pylon below the deck equal 30 m. Figure 7b sketches the cross section of the steel pylon and the cross beam; the geometric and size parameters of the stiffeners are held constant to reduce the number of design variables, and the upper and lower cross beams share the same section with width equal to \(W_1\). Figure 7c shows the girder section. This example also assumes that all hangers have the same cross-sectional area. The suspension bridge optimization problem therefore includes 16 design variables (as shown in Fig. 7):

Fig. 7 The geometric parameters of a typical three-span suspension bridge

Pylon: the width of the pylon (\(W_1\)), the height-to-width ratio (\(W_2/W_1\)), and the thickness of the plates (\(T_1\) and \(T_2\));

Cross beam: the height of the cross beam (\(W_3\)) and the thickness of the plates (\(T_3\) and \(T_4\));

Stiffening girder: the bottom width and the height of the girder (\(W_4\) and \(W_5\)) and the plate thicknesses of the girder (\(T_5\), \(T_6\), and \(T_7\));

Cable system: the sag-to-span ratio (\(f/L_m\)) and the areas of the main cable (\(A_1\)) and hanger (\(A_2\));

Bridge layout: the side-to-central span ratio (\(L_s/L_m\)).

Equations (9) to (11) describe the effective areas of the pylon, cross beam, and girder, respectively, accounting for the stiffeners and diaphragms in these members.

$$ A_{pylon} = \left(2W_1T_2 + 2T_1W_2 - 4T_1T_2\right) + 2\left(W_1+W_2\right)T_s^p + W_1W_2T_d^p/D_p $$
(9)
$$ A_{cross} = 2W_1T_4 + 2W_3T_3 - 4T_3T_4 + 2\left(W_1+W_3\right)T_s^c + W_1W_3T_d^c/D_c $$
(10)
$$ \begin{aligned} A_{girder} = {} & \left(BT_6 + W_4T_7 + 2W_5T_5 - 2T_5T_6 - 2T_5T_7\right) + \left(B + W_4 + 2W_5/\cos\beta\right)T_s^g \times 84/64 \\ & + W_5\left(W_4+B\right)/2 \times T_d^g/D_g \end{aligned} $$
(11)

The area calculation assumes diaphragms without holes. \(T_s^p = 15\) mm, \(T_s^c = 8\) mm, and \(T_s^g = 8\) mm denote the thicknesses of the stiffeners in the pylon, cross beam, and girder, respectively. \(T_d^p = 20\) mm, \(T_d^c = 15\) mm, and \(T_d^g = 15\) mm represent the thicknesses of the diaphragms in the pylon, cross beam, and girder, while \(D_p\), \(D_c\), and \(D_g\) refer to the longitudinal spacings of the diaphragms in the pylon, cross beam, and girder, all equal to 3 m.
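Equations (9) to (11) translate directly into code; the sketch below assumes SI units (the stated millimetre thicknesses converted to metres) and takes B, the top width of the girder, and β from Fig. 7c:

```python
from math import cos

TSP, TSC, TSG = 0.015, 0.008, 0.008   # stiffener thicknesses (m)
TDP, TDC, TDG = 0.020, 0.015, 0.015   # diaphragm thicknesses (m)
DP = DC = DG = 3.0                    # longitudinal diaphragm spacings (m)

def effective_areas(W1, W2, W3, W4, W5, T1, T2, T3, T4, T5, T6, T7, B, beta):
    """Effective areas per (9)-(11); beta in radians."""
    a_pylon = (2*W1*T2 + 2*T1*W2 - 4*T1*T2) + 2*(W1 + W2)*TSP + W1*W2*TDP/DP
    a_cross = 2*W1*T4 + 2*W3*T3 - 4*T3*T4 + 2*(W1 + W3)*TSC + W1*W3*TDC/DC
    a_girder = ((B*T6 + W4*T7 + 2*W5*T5 - 2*T5*T6 - 2*T5*T7)
                + (B + W4 + 2*W5/cos(beta))*TSG*84/64
                + W5*(W4 + B)/2*TDG/DG)
    return a_pylon, a_cross, a_girder
```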

The total material usage of the bridge follows:

$$ \Phi =4{A}_{pylon}\left(f+36\right)+4\times 40\times {A}_{cross}+{A}_{girder}\left(2{L}_s+{L}_m\right)+{A}_1{L}_c+{A}_2{L}_h $$
(12)

where \(L_c\) and \(L_h\) represent the total lengths of the main cable and the hangers, respectively. The values of \(L_c\) and \(L_h\) derive from the analytical form-finding method proposed in (Chen et al. 2015, 2013).

The elastic modulus and density of steel Q345, used for the stiffening girder, pylons, and cross beams, are 205 GPa and 7.85 × 10³ kg/m³, respectively; those of the cable system, made of parallel steel wires, are 205 GPa and 8.005 × 10³ kg/m³. The side-to-central span ratio is a discrete variable so that the length of the main span remains a multiple of 30 m. This example considers three types of loads: the dead load, the live load, and the static wind load. The dead load (DL) imposed on the girder equals \(\rho g A_{girder}\) (where ρ denotes the density of the girder and g the gravitational acceleration) plus an additional uniform load of 60 kN/m. Figure 8 shows the three typical unfavourable live load (LL) cases for a three-span suspension bridge. The lateral wind load (WL) acting on the pylon equals \(1.5W_1\) kN/m, and the force acting on the girder follows:

$$ {q}_{wind}=1.5{W}_5\cdot \min \left(0.3,\frac{\beta }{200}\right)\kern0.5em \mathrm{kN}/\mathrm{m} $$
(13)

where β denotes the angle illustrated in Fig. 7c. The problem considers four load combinations: 1) DL; 2) DL + LL1 + WL; 3) DL + LL2 + WL; and 4) DL + LL3 + WL. Table 4 lists the constraints derived from the strength, serviceability, and geometric requirements of the bridge. This study implements the optimization procedure and the form-finding analysis of the suspension bridge in Matlab coupled with FE analyses in ANSYS.
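A sketch of the objective (12) and the girder wind load (13) follows; \(L_c\) and \(L_h\) are assumed to be supplied by the analytical form-finding step, and β is taken in the same units as in (13):

```python
def total_material(a_pylon, a_cross, a_girder, A1, A2, f, Ls, Lm, Lc, Lh):
    """Total material usage of the bridge per (12)."""
    return (4*a_pylon*(f + 36) + 4*40*a_cross
            + a_girder*(2*Ls + Lm) + A1*Lc + A2*Lh)

def girder_wind_load(W5, beta):
    """Lateral wind load on the girder per (13), in kN/m."""
    return 1.5 * W5 * min(0.3, beta / 200.0)
```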

Fig. 8 The three live load cases considered in the optimization

Table 4 Constraints and their allowable values

To demonstrate the efficiency of the proposed constraint-handling method, this study utilizes HS for its high computational efficiency and IPSO for its strong search ability. Two widely used constraint-handling approaches, the penalty function method and the Deb rule, coupled with IPSO, serve as references. The maximum number of objective function evaluations is 40,000 for all tests, the particle population in all IPSO tests equals 40, and the penalty factor in the penalty-method-based optimization is fixed at 100 throughout the optimization process.

Figure 9 compares the evolution of the objective value versus the number of structural analyses for the four optimization procedures. Since the proposed constraint-handling method initializes the solutions in the feasible space, the initial objective values of the two proposed methods are much smaller than those of the penalty method and the Deb rule. The objective values of the two commonly used methods do not always decrease with the iterations because of the infeasible solutions found at the initial search stage. The proposed constraint-handling approach dramatically reduces the number of structural analyses. Table 5 lists the optimization results for the three-span suspension bridge. IPSO coupled with the proposed constraint-handling technique performs best among the four methods. All the identified optimal results are feasible except that of the penalty method, whose constraint violation of 1.59 × 10⁻⁴ defines an infeasible solution according to (4). The number of structural analyses required by HS equals 9812, which implies more than 75% enhancement in computational efficiency over the other two methods. Although HS suffers from premature convergence and leads to the heaviest design, that design is only about 3% heavier than the lightest solution among the other methods, so HS remains competitive owing to its high computational efficiency. Among the three IPSO-based methods, the proposed approach has obvious advantages, yielding the lightest design with the fewest structural analyses and about 65% time savings compared with the penalty method and the Deb rule. Table 5 also lists the computational time for each method. The time required by the Deb rule in each run is about 57.41 h, slightly longer than the penalty method owing to its more complex algorithmic structure. The computational time using the proposed constraint-handling approach decreases to 20.23 h for IPSO and 14.38 h for HS. This demonstrates that the proposed approach can greatly enhance the computational efficiency of metaheuristic algorithms with elite selection in large-scale structural optimization problems.

Fig. 9 The evolution curve of the objective value versus the number of structural analyses

Table 5 Comparison of the optimization results for the three-span suspension bridge

6 Conclusions

This study proposes a filter strategy, predicated on the solution-updating rule of the metaheuristic algorithms, to reduce the redundant structural analyses in large-scale structural optimization. The feasibility condition and merits of the filter strategy have been discussed theoretically, and both the numerical simulations and the two large-scale structural optimization examples demonstrate the efficiency of the metaheuristic algorithms coupled with the filter strategy. The present study supports the following conclusions:

(1) According to the new-solution updating rule, metaheuristic algorithms can be divided into two categories: replacement and elitism. Elitism, characterized by preserving only the solutions with better fitness during the iteration process, serves as the foundation of the proposed filter strategy.

(2) The filter strategy not only eliminates the unnecessary constraint-violation evaluations in the optimization procedure, but also upholds the search ability of the metaheuristic algorithms and retains the merits of the death penalty approach, namely the absence of parameter tuning and the guaranteed feasibility of the optimal solutions. Beyond the death penalty approach, the filter mechanism is also applicable to other constraint-handling techniques, such as the penalty method and the Deb rule, enhancing their computational efficiency while maintaining their original performance.

(3) The mathematical simulations show that HS yields the lowest R value, always smaller than 0.4, which translates to about 60% time savings, while TLBO obtains the largest R value among the five methods (CS, FPA, HS, PSO, and TLBO). The R value also varies among variants of the same metaheuristic algorithm. Moreover, although computational efficiency varies between algorithms, an algorithm with lower computational efficiency does not guarantee a higher-quality optimal solution.

(4) In the 942-bar problem, the R value for HS is merely 0.09, implying that the proposed filter strategy achieves over 90% time savings compared with the standard death penalty approach. For the suspension bridge optimization with the same number of iterations, the penalty method and the Deb rule coupled with IPSO cost 55.62 h and 57.41 h, respectively, whereas the computational time reduces to 20.23 h for IPSO and 14.38 h for HS using the proposed method. This demonstrates that the filter strategy can significantly enhance the computational efficiency of metaheuristic algorithms in structural optimization. The illustrated method can also improve the computational efficiency of problems with a high-cost objective function and low-cost constraint functions by reversing the sequence of the objective evaluation and the constraint-violation check.