
1 Introduction

A family of metaheuristic optimization methods based on swarm intelligence has been developed in the past two decades. These methods simulate the behaviour of different groups/swarms/colonies of animals and insects. Ant colony optimization (ACO), harmony search (HS), particle swarm optimization (PSO), big bang-big crunch optimization (BB-BC), artificial bee colony optimization (ABC) and teaching–learning-based optimization (TLBO) are a few examples of recent metaheuristic algorithms. They are also classified as population-based or nature-inspired optimization methods. The main philosophy of all metaheuristic optimization methods is the mimicking of natural phenomena. Design optimization of skeletal structures using metaheuristic search methods is an important field of engineering under continuous development. The state-of-the-art utilization of metaheuristic algorithms in weight or cost optimization of skeletal structures has recently been reviewed by Lamberti and Pappalettere [1] and Saka [2].

The ACO was originally proposed by Dorigo et al. [3] for optimization problems. The method simulates the foraging behaviour of real-life ant colonies. The ACO attempts to model some of the fundamental capabilities observed in the behaviour of ants as a method for stochastic combinatorial optimization [4]. Among its many applications, the method has also been used for design optimization of structural systems. ACO was used for optimization of truss structures by Camp and Bichon [5], Capriles et al. [6], Serra and Venini [7], and Hasancebi et al. [8], and of frame structures by Camp et al. [4], Kaveh and Shojaee [9], Hasancebi et al. [10], and Kaveh and Talatahari [11].

HS was first developed by Geem et al. [12] for solving combinatorial optimization problems. The method is based on the analogy between the musical process of searching for a perfect state of harmony and the search for solutions to optimization problems. HS has been used for a variety of structural optimization problems, including the optimum design of truss structures [8, 13, 14], geodesic domes [15], grillage systems [16] and steel frames [17–19].

In recent years, improved/modified HS algorithms have been developed in order to increase the efficiency of the method. Saka and Hasancebi [20] developed an adaptive harmony search algorithm for design code optimization of steel structures. Hasancebi et al. [21] proposed an adaptive harmony search method for structural optimization. Lamberti and Pappalettere [22] proposed an improved harmony search algorithm, where trial designs are generated including information on the gradients of the cost function, for truss structure optimization. Two improved harmony search algorithms, called the efficient harmony search (EHS) algorithm and the self-adaptive harmony search (SAHS) algorithm, were proposed by Degertekin [23] for sizing optimization of truss structures.

The PSO method was first developed by Kennedy and Eberhart [24]. It is based on the premise that social sharing of information among members of a species offers an evolutionary advantage [25]. PSO has been used in the optimization of skeletal structures [26–29]. Researchers have introduced new features into the standard implementation of PSO. Li et al. [28, 29] developed a heuristic particle swarm optimizer (HPSO), which combines the PSO scheme and the HS scheme, for sizing optimization of truss structures. Kaveh and Talatahari [30, 31] introduced a hybrid particle swarm, ant colony optimization and harmony search scheme for truss structures with both discrete [30] and continuous [31] variables.

The BB-BC algorithm proposed by Erol and Eksin [32] simulates a theory of the evolution of the universe. According to this theory, in the big bang phase energy dissipation produces disorder and randomness is the main feature of this phase, whereas in the big crunch phase randomly distributed particles are drawn into an order [33]. The BB-BC algorithm was applied to sizing optimization of truss structures [34]. In order to improve the convergence capability of the standard BB-BC algorithm, Kaveh and Talatahari [33, 35] developed a hybrid BB-BC (HBB-BC) algorithm to optimize space trusses and ribbed domes.

The ABC method was first developed by Karaboga [36] for numerical function optimization. The ABC is an optimization method based on the intelligent behaviour of a honey bee swarm. The ABC has successfully been applied to the sizing optimization of truss structures with both continuous [37] and discrete [38] variables.

Another metaheuristic method, called ‘teaching–learning-based optimization (TLBO)’, has been proposed by Rao et al. [39] for constrained mechanical design optimization problems. The method is based on the influence of a teacher on learners and the interaction of learners with each other. Rao et al. [40] extended the TLBO method to large-scale nonlinear optimization problems for finding global solutions. TLBO was employed for the optimum design of planar steel frames [41] and for sizing optimization of truss structures by Degertekin and Hayalioglu [42].

In this chapter, the robustness of the SAHS [23] and TLBO [42] is investigated for the optimization of truss-type structures. Three benchmark truss structures from the current literature are used to test the efficiency of the methods. The results obtained from these methods are compared with those of other metaheuristic optimization algorithms recently presented in the literature, such as particle swarm optimization (PSO), the heuristic particle swarm optimizer (HPSO), big bang-big crunch optimization (BB-BC), heuristic particle swarm ant colony optimization (HPSACO), hybrid big bang-big crunch optimization (HBB-BC), artificial bee colony optimization with adaptive penalty (ABC-AP) and the improved harmony search algorithm (IHS).

The rest of this study is organized as follows. The formulation of the optimum design problem is given in Sect. 2. The SAHS and TLBO methods are explained in Sects. 3 and 4, respectively. The results obtained from the design examples are presented and compared with those of other metaheuristic optimization methods in Sect. 5. Finally, conclusions are presented in Sect. 6.

2 Formulation of Optimum Design Problem

The minimum weight design problem for a truss structure can be formulated as

Find \(X = [x_{1} ,x_{2} , \ldots ,x_{\rm ng} ]\) to minimize

$$W(X) = \sum\limits_{k = 1}^{\rm ng} {x_{k} } \sum\limits_{i = 1}^{\rm mk} {\rho_{i} L_{i} }$$
(1)

subject to the following normalized constraints

$$g_{\rm nl}^{s} (X) = \frac{{\left| {\sigma_{\rm nl} } \right|}}{{\left| {\sigma_{\rm nu} } \right|}} - 1 \,\le\, 0,\quad 1 \,\le\, n \,\le\, {\rm nm},\quad 1 \,\le\, l \,\le\, {\rm nl}$$
(2)
$$g_{\rm nl}^{b} (X) = \frac{{\left| {\sigma_{\rm cl} } \right|}}{{\left| {\sigma_{\rm cu} } \right|}} - 1 \,\le\, 0,\quad 1 \,\le\, n \,\le\, {\rm ncm},\quad 1 \,\le\, l \,\le\, {\rm nl}$$
(3)
$$g_{jl}^{d} (X) = \frac{{\left| {d_{jl} } \right|}}{{\left| {d_{ju} } \right|}} - 1 \,\le\, 0,\quad 1 \,\le\, j \,\le\, {\rm nn},\quad 1 \,\le\, l \,\le\, {\rm nl}$$
(4)
$$x_{\hbox{min} } \le x_{k} \le x_{\hbox{max} } ,\quad k = 1,2, \ldots {\rm ng}$$
(5)

where X is the vector containing the design variables, \(W(X)\) is the weight of the truss structure, \({\rm ng}\) is the total number of member groups (i.e. design variables), \(x_{k}\) is the cross-sectional area of the members belonging to group k, mk is the total number of members in group k, \(\rho_{i}\) is the density of member i, \(L_{i}\) is the length of member i, and \(g_{\rm nl}^{s} (X)\), \(g_{\rm nl}^{b} (X)\) and \(g_{jl}^{d} (X)\) are the normalized constraints for member stress, member buckling stress and joint displacements of the structure. \(\sigma_{\rm nl}\) and \(\sigma_{\rm cl}\) are the member stress and the member buckling stress of the nth member due to loading condition l, and \(\sigma_{nu}\) and \(\sigma_{cu}\) are their upper limits. \(d_{jl}\) is the nodal displacement of the jth translational degree of freedom due to loading condition l, and \(d_{ju}\) is its upper limit. nl is the number of load conditions, nn is the number of nodes, and \(x_{\min}\) and \(x_{\max}\) are the lower and upper limits on the cross-sectional areas.
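As a concrete illustration of Eq. (1), the sketch below computes the weight of a grouped truss. The member grouping, densities and lengths are hypothetical placeholders, not data from this chapter:

```python
def truss_weight(x, groups, rho, length):
    """Weight per Eq. (1): W(X) = sum_k x_k * sum_{i in group k} rho_i * L_i.

    x      : cross-sectional areas, one per member group (design variables)
    groups : groups[k] is the list of member indices belonging to group k
    rho    : rho[i], material density of member i
    length : length[i], length of member i
    """
    return sum(
        x[k] * sum(rho[i] * length[i] for i in members)
        for k, members in enumerate(groups)
    )

# Hypothetical two-group, three-member example (consistent units assumed)
groups = [[0, 1], [2]]          # members 0 and 1 in group 0; member 2 in group 1
rho = [0.1, 0.1, 0.1]           # lb/in^3
length = [100.0, 100.0, 141.4]  # in
x = [2.0, 1.5]                  # in^2
print(truss_weight(x, groups, rho, length))  # ~61.21 (= 2.0*(10+10) + 1.5*14.14)
```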

The optimum design of truss structures must satisfy optimization constraints stated by Eqs. (2)–(5). In this study, the constraints are handled using a modified feasible-based mechanism [30]. The efficiency of the method was previously verified for optimization of truss structures [23, 30]. The method consists of the following four rules [30]:

Rule 1: Any feasible design is preferred to any infeasible design.

Rule 2: Infeasible designs containing slight violation of the constraints (from 0.01 in the first search to 0.001 in the last search) are considered as feasible designs.

Rule 3: Between two feasible designs, the one having the better objective function value is preferred.

Rule 4: Between two infeasible designs, the one having the smaller constraint violation is preferred.
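The four rules can be condensed into a single comparison routine. This is a minimal sketch under the assumption that each design carries a precomputed weight (Eq. 1) and a summed constraint violation (Eqs. 2–5); the names `weight`, `violation` and `is_better` are illustrative:

```python
def is_better(a, b, tol=0.01):
    """Return True if design `a` is preferred over design `b` under the
    modified feasibility-based rules [30].

    Each design is a dict with keys 'weight' (objective value, Eq. 1) and
    'violation' (sum of positive constraint values, Eqs. 2-5).  `tol` is
    the Rule 2 tolerance, tightened from 0.01 to 0.001 during the search.
    """
    feas_a = a['violation'] <= tol   # Rule 2: slight violations count as feasible
    feas_b = b['violation'] <= tol
    if feas_a and not feas_b:        # Rule 1: feasible beats infeasible
        return True
    if feas_b and not feas_a:
        return False
    if feas_a and feas_b:            # Rule 3: lighter feasible design wins
        return a['weight'] < b['weight']
    return a['violation'] < b['violation']  # Rule 4: smaller violation wins

d1 = {'weight': 500.0, 'violation': 0.0}
d2 = {'weight': 480.0, 'violation': 0.5}
print(is_better(d1, d2))  # True: d1 is feasible, d2 is not
```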

3 Self-Adaptive Harmony Search Algorithm (SAHS)

The harmony search (HS) algorithm for sizing optimization of truss structures can be explained in the following steps [23]:

Step 1. Initializing the harmony search parameters

HS parameters are assigned in this step. The number of design vectors in harmony memory (HMS), harmony memory consideration rate (HMCR), pitch adjusting rate (PAR) and the stopping criterion are selected in this step.

Step 2. Initializing harmony memory

All design vectors are stored in the harmony memory (HM). In this step, the HM matrix given in Eq. (6) is filled with as many randomly generated design vectors as the harmony memory size (HMS).

$${\rm HM} = \left[ {\begin{array}{*{20}c} {x_{1}^{1} } & {x_{2}^{1} } & \ldots & {x_{{\rm ng} - 1}^{1} } & {x_{\rm ng}^{1} } \\ {x_{1}^{2} } & {x_{2}^{2} } & \ldots & {x_{{\rm ng} - 1}^{2} } & {x_{\rm ng}^{2} } \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ {x_{1}^{{\rm HMS} - 1} } & {x_{2}^{{\rm HMS} - 1} } & \ldots & {x_{{\rm ng} - 1}^{{\rm HMS} - 1} } & {x_{\rm ng}^{{\rm HMS} - 1} } \\ {x_{1}^{\rm HMS} } & {x_{2}^{\rm HMS} } & \ldots & {x_{{\rm ng} - 1}^{\rm HMS} } & {x_{\rm ng}^{\rm HMS} } \\ \end{array} } \right]\begin{array}{*{20}c} \to \\ \to \\ \to \\ \to \\ \to \\ \end{array} \begin{array}{*{20}c} {W(X^{1} )} \\ {W(X^{2} )} \\ \vdots \\ {W(X^{{\rm HMS} - 1} )} \\ {W(X^{\rm HMS} )} \\ \end{array}$$
(6)

In the HM, each row represents a truss design. \(X^{1}\), \(X^{2}\),…, \(X^{{{\text{HMS}} - 1}}\), \(X^{\text{HMS}}\) and \(W(X^{1} )\), \(W(X^{2} )\),…, \(W(X^{{{\text{HMS}} - 1}} )\), \(W(X^{\text{HMS}} )\) are designs and the corresponding objective function values, respectively. The truss designs in the HM are sorted by their objective function values (\(W(X^{1} )\, \le W(X^{2} )\, \le \cdots \, \le W(X^{{{\text{HMS}} - 1}} ) \le \,W(X^{\text{HMS}} )\)), which are calculated using Eq. (1).

Step 3. Improvising a new harmony

A new harmony (i.e. new truss design) \(X^{\rm new} = (x_{1}^{\rm new} ,x_{2}^{\rm new} , \ldots ,x_{\rm ng}^{\rm new} )\) is generated using three rules: (i) HM consideration, (ii) pitch adjustment and (iii) random generation. Generating a new harmony is called ‘improvisation’ [13].

In the HM consideration, the value of the first design variable \(x_{1}^{\text{new}}\) for the new harmony is chosen either from the HM (i.e. \(\{ x_{1}^{1} ,x_{1}^{2} , \ldots ,x_{1}^{{\rm HMS} - 1} ,x_{1}^{\rm HMS} \}\)) or from the possible range of values. The other design variables of the new harmony \((x_{2}^{\rm new} , \ldots ,x_{{\rm ng} - 1}^{\rm new} ,x_{\rm ng}^{\rm new} )\) are chosen in the same way. The HMCR is applied as follows:

$$\left\{ \begin{aligned} & x_{i}^{\rm new} \in \{ x_{i}^{1} ,x_{i}^{2} , \ldots \ldots ,x_{i}^{\text{HMS} - 1} ,x_{i}^{\text{HMS}} \} \quad \\ & x_{i}^{\rm new} \in X_{s} \\ \end{aligned} \right.\begin{array}{*{20}c} {\rm with} & {\rm probability} & {\rm HMCR} \\ {\rm with} & {\rm probability} & {(1 - {\rm HMCR})} \\ \end{array}$$
(7)

where \(X_{s}\) is the set of the possible range of values for each design variable (\(x_{\hbox{min} } \le X_{s} \le x_{\hbox{max} }\)). The HMCR, which varies between 0 and 1, is the rate of choosing one value from the historical values stored in the HM, while (1 − HMCR) is the rate of randomly selecting one value from the possible range of values [14]. For example, an HMCR value of 0.95 indicates that the HS algorithm will choose the design variable from the historically stored values in the HM with a 95 % probability and from the entire possible range with a 5 % probability [13].

Any design variable of the new harmony obtained by memory consideration is examined to determine whether it should be pitch-adjusted. This operation is controlled by the pitch adjusting rate (PAR), which searches for a better design in the neighbourhood of the current design and is applied as follows:

Pitch adjusting decision for

$$x_{i}^{{{\rm new}}} \leftarrow \left\{ {\begin{array}{*{20}l} {{\text{yes}}\,\,{\text{with}}\,\,{\text{probability}}} & {{\text{PAR}}} \\ {{\text{no}}\,\,{\text{with}}\,\,{\text{probability}}} & {(1 - {\text{PAR}})} \\ \end{array} } \right.$$
(8)

The value of (1–PAR) sets the rate of doing nothing, whereas the value of PAR indicates that \(x_{i}^{\text{new}}\) is replaced as follows:

$$x_{i}^{\text{new}} \leftarrow x_{i}^{\text{new}} + {\text{bw}} \times u( - 1,1)$$
(9)

where bw is the arbitrary distance bandwidth for continuous variable and \(u( - 1,1)\) is a uniform distribution between −1 and 1. For example, a PAR of 0.1 indicates that the algorithm will choose a neighbouring value with 10 % × HMCR probability [13]. HMCR and PAR parameters are introduced to allow the solution to escape from the local optima and to improve the global optimum prediction of HS algorithm [13].
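The improvisation step (memory consideration, pitch adjustment and random selection) can be sketched as follows for continuous variables. The HMCR, PAR and bw values shown are illustrative defaults, not the tuned parameters reported later in this chapter:

```python
import random

def improvise(hm, x_min, x_max, hmcr=0.95, par=0.3, bw=0.01):
    """Generate one new harmony (truss design) from harmony memory `hm`.

    hm : list of designs, each a list of ng cross-sectional areas.
    """
    ng = len(hm[0])
    new = []
    for i in range(ng):
        if random.random() < hmcr:
            # memory consideration: take the i-th variable of a random stored design
            xi = random.choice(hm)[i]
            if random.random() < par:
                # pitch adjustment, Eq. (9): move within the bandwidth bw
                xi += bw * random.uniform(-1.0, 1.0)
        else:
            # random selection from the whole possible range
            xi = random.uniform(x_min, x_max)
        new.append(min(max(xi, x_min), x_max))  # enforce side constraints, Eq. (5)
    return new

hm = [[2.0, 1.5], [2.5, 1.0], [1.8, 1.2]]
print(improvise(hm, 0.1, 35.0))
```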

Step 4. Updating the harmony memory

If the new harmony \(X^{\rm new} = (x_{1}^{\rm new} ,x_{2}^{\rm new} , \ldots ,x_{\rm ng}^{\rm new} )\) is better than the worst design in the HM, judged in terms of the objective function value, the new harmony is included in the HM and the worst harmony is excluded from the HM. In this process, the HM is sorted again by objective function values.
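A minimal sketch of the memory update in Step 4, assuming the HM and its objective values are kept sorted with the best (lightest) design first; the function name is illustrative:

```python
def update_memory(hm, weights, x_new, w_new):
    """Step 4: replace the worst harmony if the new design is better.

    `hm` (list of designs) and `weights` (their objective values, Eq. 1)
    are kept sorted best-first; the worst design sits in the last slot.
    """
    if w_new < weights[-1]:
        hm[-1], weights[-1] = x_new, w_new
        # re-sort the memory by objective value
        order = sorted(range(len(hm)), key=weights.__getitem__)
        hm[:] = [hm[k] for k in order]
        weights[:] = [weights[k] for k in order]

hm = [[1.0], [2.0]]
weights = [10.0, 20.0]
update_memory(hm, weights, [3.0], 15.0)
print(weights)  # [10.0, 15.0]: the worst harmony was replaced
```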

Step 5. Terminating the process

Steps 3 and 4 are repeated until the termination criterion is satisfied.

The proposed SAHS algorithm differs from the standard HS algorithm as indicated in the following aspects:

First, the SAHS algorithm presented in this study dynamically updates PAR during the search process as follows [23]:

$${\text{PAR}}(ns) = {\text{PAR}}_{\hbox{max} } - \frac{{({\text{PAR}}_{\hbox{max} } - {\text{PAR}}_{\hbox{min} } )}}{NI} \times ns$$
(10)

where ns is the current search (improvisation) number, NI is the total number of searches, and \({\text{PAR}}_{\min}\) and \({\text{PAR}}_{\max}\) are the lower and upper bounds of the pitch adjusting rate.

Second, in the SAHS algorithm bw is completely removed and Eq. (9) is replaced as follows:

$$x_{i}^{\rm new} = x_{i}^{\rm new} + [\hbox{max} ({\rm HM})_{i} - x_{i}^{\rm new} ] \times u(0,1)\quad{\text{if}}\;u(0,1) \le 0.5$$
(11)
$$x_{i}^{\rm new} = x_{i}^{\rm new} - [x_{i}^{\rm new} - \hbox{min} ({\rm HM})_{i} ] \times u(0,1)\quad{\text{if}}\;u(0,1) > 0.5$$
(12)

where \(\hbox{min} ({\text{HM}})_{i}\) and \(\hbox{max} ({\text{HM}})_{i}\) are the lowest and highest values of the ith design variable in the HM, and \(u(0,1)\) is a uniform random number in the [0,1] range. Since \(\hbox{min} ({\text{HM}})_{i}\) and \(\hbox{max} ({\text{HM}})_{i}\) gradually approach the optimum, the SAHS algorithm produces progressively finer adjustments to the harmony.
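The PAR schedule of Eq. (10) and the bandwidth-free adjustment of Eqs. (11)–(12) can be sketched as follows (function names are illustrative):

```python
import random

def par_schedule(ns, ni, par_min=0.20, par_max=0.80):
    """Eq. (10): PAR decreases linearly from par_max to par_min over NI searches."""
    return par_max - (par_max - par_min) * ns / ni

def self_adaptive_adjust(xi, hm, i):
    """Eqs. (11)-(12): pitch-adjust the i-th variable toward the current
    extremes of the harmony memory instead of using a fixed bandwidth bw."""
    col = [design[i] for design in hm]
    if random.random() <= 0.5:
        return xi + (max(col) - xi) * random.random()   # Eq. (11): move up
    return xi - (xi - min(col)) * random.random()       # Eq. (12): move down

hm = [[2.0, 1.5], [2.5, 1.0], [1.8, 1.2]]
adjusted = self_adaptive_adjust(2.0, hm, 0)
# the adjusted value always stays within the current HM range of that variable
print(min(c[0] for c in hm) <= adjusted <= max(c[0] for c in hm))  # True
```

Because the adjustment is bounded by the memory's current minimum and maximum, the step size shrinks automatically as the memory converges, which is exactly the self-adaptive behaviour described above.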

4 Teaching–Learning-Based Optimization

The TLBO method presents a mathematical model for optimization problems based on a simple teaching process. In the TLBO, the learners in a class are considered as the population. The teacher is taken to be the best-versed person in his/her profession. Hence, the learner with the highest mark in a class is considered as the teacher.

An analogy between the TLBO and the optimization of truss structures is established in the following way: a class is considered as a population which contains truss designs, a learner in a class denotes a truss design in the population, a design variable represents a subject taught to student, the grade of a student denotes the weight of the truss design, the teacher is the truss design with the lowest weight in the population.

Optimization of truss structures using the TLBO method consists of following steps [42]:

Step 1. Initializing the TLBO

In this step, the class is filled with as many randomly generated learners (truss designs) as the population size (ps).

$${\rm class} = \left[ {\begin{array}{*{20}c} {x_{1}^{1} } & {x_{2}^{1} } & \ldots & {x_{{\rm ng} - 1}^{1} } & {x_{\rm ng}^{1} } \\ {x_{1}^{2} } & {x_{2}^{2} } & \ldots & {x_{{\rm ng} - 1}^{2} } & {x_{\rm ng}^{2} } \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ {x_{1}^{{\rm ps} - 1} } & {x_{2}^{{\rm ps} - 1} } & \ldots & {x_{{\rm ng} - 1}^{{\rm ps} - 1} } & {x_{\rm ng}^{{\rm ps} - 1} } \\ {x_{1}^{\rm ps} } & {x_{2}^{\rm ps} } & \ldots & {x_{{\rm ng} - 1}^{\rm ps} } & {x_{\rm ng}^{\rm ps} } \\ \end{array} } \right]\begin{array}{*{20}c} \to \\ \to \\ \to \\ \to \\ \to \\ \end{array} \begin{array}{*{20}c} {W(X^{1} )} \\ {W(X^{2} )} \\ \vdots \\ {W(X^{{\rm ps} - 1} )} \\ {W(X^{\rm ps} )} \\ \end{array}$$
(13)

In the class, each row represents a truss design. \(X^{1}\), \(X^{2}\),…, \(X^{{{\text{ps}} - 1}}\), \(X^{\text{ps}}\) and \(W(X^{1} )\), \(W(X^{2} )\),…, \(W(X^{{{\text{ps}} - 1}} )\), \(W(X^{\text{ps}} )\) are truss designs and the corresponding weight values, respectively. It should be noted that the designs in the class are sorted by their weight values (\(W(X^{1} ) \le W(X^{2} ) \le \cdots \le W(X^{{{\text{ps}} - 1}} ) \le W(X^{\text{ps}} )\)) which are calculated using Eq. (1).

Step 2. Teaching phase

The truss design with the lowest objective function value (\(W(X^{1} )\)) is assigned as the teacher \((X^{\text{teacher}} = X^{1} )\). The teacher puts effort into improving the mean of the class \((X^{\text{mean}} )\). Therefore, the ith design (i ≠ 1) is modified using the following expression:

$$X^{{{\text{new}},i}} = X^{i} + r(X^{1} - T_{F} X^{\text{mean}} )$$
(14)

in which \(X^{i}\) and \(X^{{{\text{new}},i}}\) are the current and new designs, respectively, r is a random number uniformly distributed in the range [0,1], \(T_{F}\) is the teaching factor, which is either 1 or 2 [39], and \(X^{\text{mean}}\) is the mean of the designs, calculated in the following way:

$$X^{\rm mean} = \left[ {\frac{1}{\rm ps}\sum\limits_{i = 1}^{\rm ps} {x_{1}^{i} } ,\;\frac{1}{\rm ps}\sum\limits_{i = 1}^{\rm ps} {x_{2}^{i} } ,\; \ldots ,\;\frac{1}{\rm ps}\sum\limits_{i = 1}^{\rm ps} {x_{\rm ng}^{i} } } \right]$$
(15)

i.e. each component of \(X^{\text{mean}}\) is the mean of the corresponding design variable over the class. If the new design (\(X^{{{\text{new}},i}}\)) is better than the current design (\(X^{i}\)) (i.e. \(W(X^{{{\text{new}},i}} ) < W(X^{i} )\)), the new design replaces the current design, \(X^{i} = X^{{{\text{new}},i}}\).
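A sketch of one teaching-phase pass (Eqs. 14–15). The `weight` argument stands in for any objective evaluator such as Eq. (1); the teaching factor is drawn as 1 or 2 following the original TLBO formulation [39], and side-constraint clipping and re-sorting of the class are omitted for brevity:

```python
import random

def teaching_phase(population, weight):
    """One teaching-phase pass (Eqs. 14-15).

    `population` is a list of designs (lists of areas) sorted so that
    population[0] is the teacher; `weight` maps a design to its objective.
    """
    ng = len(population[0])
    ps = len(population)
    # Eq. (15): component-wise mean of the class
    x_mean = [sum(d[i] for d in population) / ps for i in range(ng)]
    teacher = population[0]
    for idx in range(1, ps):
        r = random.random()
        tf = random.randint(1, 2)       # teaching factor T_F [39]
        current = population[idx]
        # Eq. (14): shift the design toward the teacher, relative to the mean
        candidate = [current[i] + r * (teacher[i] - tf * x_mean[i]) for i in range(ng)]
        if weight(candidate) < weight(current):   # greedy acceptance
            population[idx] = candidate
    return population

random.seed(0)
class_ = [[1.0, 1.0], [3.0, 2.0], [2.0, 4.0]]   # lightest (teacher) first
w = lambda d: sum(a * a for a in d)             # stand-in objective
before = [w(d) for d in class_]
teaching_phase(class_, w)
after = [w(d) for d in class_]
print(all(a <= b for a, b in zip(after, before)))  # True: no design got worse
```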

Step 3. Learning phase

In addition to the teacher’s effort to improve the mean of the class, the learners also interact with each other to improve themselves. A design in the population interacts with a randomly chosen design to improve its quality. The learning phase exchanges information between designs i and j (i ≠ j) in the population and can be expressed as [41]

$$X^{{\rm new},i} = X^{i} + r(X^{i} - X^{j} )\quad{\rm if}\,\,W(X^{i} ) < W(X^{j} )$$
(16a)
$$X^{{\rm new},i} = X^{i} + r(X^{j} - X^{i} )\quad{\rm if}\,\,W(X^{j} ) < W(X^{i} )$$
(16b)

in which \(X^{j}\) is a randomly selected design that must be different from \(X^{i}\). If \(W(X^{{{\text{new}},i}} )\) is better than \(W(X^{i} )\) (i.e. \(W(X^{{{\text{new}},i}} ) < W(X^{i} )\)), the new design replaces the current design, \(X^{i} = X^{{{\text{new}},i}}\).
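The learning phase of Eqs. (16a)–(16b) admits a similarly compact sketch; again `weight` is a stand-in objective evaluator, and side-constraint clipping is omitted:

```python
import random

def learning_phase(population, weight):
    """One learning-phase pass (Eqs. 16a-16b): each learner interacts with a
    randomly chosen different learner and moves away from the worse design."""
    ps = len(population)
    ng = len(population[0])
    for i in range(ps):
        j = random.choice([k for k in range(ps) if k != i])
        xi, xj = population[i], population[j]
        r = random.random()
        if weight(xi) < weight(xj):
            # Eq. (16a): design i is better, step away from design j
            candidate = [xi[t] + r * (xi[t] - xj[t]) for t in range(ng)]
        else:
            # Eq. (16b): design j is better, step toward design j
            candidate = [xi[t] + r * (xj[t] - xi[t]) for t in range(ng)]
        if weight(candidate) < weight(xi):        # greedy acceptance
            population[i] = candidate
    return population

random.seed(0)
class_ = [[1.0, 1.0], [3.0, 2.0], [2.0, 4.0]]
w = lambda d: sum(a * a for a in d)             # stand-in objective
before = [w(d) for d in class_]
learning_phase(class_, w)
after = [w(d) for d in class_]
print(all(a <= b for a, b in zip(after, before)))  # True: no design got worse
```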

Step 4. Terminating the search process

Steps 2 and 3 are repeated until the lightest truss design no longer improves over a predetermined number of structural analyses.

5 Design Examples

The effectiveness and robustness of the SAHS [23] and TLBO [42] are tested using three truss structures. The results obtained by the methods are compared with those of HS [13], IHS [22], PSO [25], PSO, PSOPC and HPSO [28], HPSACO [31], HBB-BC [33], BB-BC [34] and ABC-AP [37].

The SAHS algorithm produces the minimum weight design with HMS = 20, HMCR = 0.90, PARmin = 0.20 and PARmax = 0.80; the constraint violation tolerance of Rule 2 decreases from 0.01 to 0.001 during the search [23].

Two tuning parameters are employed in the TLBO: the population size (ps) and the number of designs generated in the learning phase (ndlp). The best combination, determined by sensitivity analysis, is ps = 30 and ndlp = 4, which yields the minimum weight design for the TLBO [42].

Twenty independent runs, each starting from a different initial design, are made for each design example because of the stochastic nature of the SAHS and TLBO. The best designs obtained by the methods, the number of structural analyses required to reach the optimum solutions, and the average weight and standard deviation over the 20 independent runs are given in the tables. The SAHS and TLBO are coded in FORTRAN and executed on an Intel Pentium Core 2 Duo 2.2 GHz machine.

5.1 Ten-Bar Plane Truss

The ten-bar plane truss, shown in Fig. 1, is the first design example. The Young’s modulus and density of the truss members are 10⁴ ksi (1 ksi = 6.895 MPa) and 0.1 lb/in³, respectively. The allowable stress for all members is specified as 25 ksi in both tension and compression. The maximum displacements of all free nodes in the X and Y directions are limited to ±2 in. Each member is considered as a design variable with a minimum gauge of 0.1 in².

Fig. 1
figure 1

Ten-bar plane truss (1 in. = 2.54 cm, 1 kip = 4.448 kN)

The results obtained by the SAHS [23], TLBO [42] and the other optimization methods are reported in Table 1. The methods presented in this chapter found better designs than the HS [13], PSO [25] and HPSACO [31]: the lighter designs obtained by the classical HS [13], PSO [25] and HPSACO [31] violate the design constraints, while the designs obtained by the SAHS [23] and TLBO [42] are feasible. Although the IHS [22] required the lowest number of structural analyses, this is not a meaningful basis for evaluating the efficiency of the SAHS [23] and TLBO [42], because the IHS [22] exploits gradient information, which inherently speeds up the optimization process and drastically reduces the number of function evaluations [23]. Convergence histories (i.e. structural weight versus number of structural analyses) are illustrated in Fig. 2.

Table 1 Optimization results obtained for the ten-bar plane truss
Fig. 2
figure 2

Convergence histories for the ten-bar plane truss

5.2 Twenty-Five-Bar Space Truss

The twenty-five-bar space truss, shown in Fig. 3, is one of the most popular design examples used in the literature for comparing different optimization methods. The Young’s modulus and the density of the truss members are 10⁴ ksi and 0.1 lb/in³, respectively. The structure is subject to the two loading conditions given in Table 2. The design variable groups and the allowable stress values for all groups are listed in Table 3. The displacements of the nodes in all directions are restricted to be less than ±0.35 in. The minimum cross-sectional area for each group of elements is 0.01 in².

Fig. 3
figure 3

Twenty-five-bar space truss

Table 2 Loading conditions for the twenty-five-bar space truss
Table 3 Allowable stress values for the twenty-five-bar space truss

The results obtained by the SAHS [23], TLBO [42] and the other optimization methods existing in the literature are reported in Table 4. It is clear from Table 4 that the TLBO [42] developed the best design overall: the lighter designs obtained by the HS [13] and HPSACO [31] violate the design constraints, while the design obtained by the TLBO [42] is feasible. Moreover, although the TLBO [42] required more structural analyses than the SAHS [23] to find its optimum design, the TLBO [42] reached a weight of 545.38 lb after 6665 structural analyses, whereas the SAHS [23] required 6941 structural analyses to reach the same weight. The convergence histories are illustrated in Fig. 4.

Table 4 Optimization results obtained for the twenty-five-bar space truss
Fig. 4
figure 4

Convergence history for the twenty-five-bar space truss

5.3 Seventy-Two-Bar Space Truss

The third example deals with the design of the seventy-two-bar space truss shown in Fig. 5. The structure is subject to the loading conditions given in Table 5. The Young’s modulus and density of the material are 10⁴ ksi and 0.1 lb/in³, respectively. The member cross-sectional areas are treated as design variables and are divided into 16 groups. The allowable stress for all members is specified as 25 ksi in both tension and compression. The maximum displacements of all free nodes are limited to ±0.25 in. The minimum cross-sectional area is specified as 0.1 in².

Fig. 5
figure 5

Seventy-two-bar space truss: a dimensions b element and node numbering patterns for the first storey

Table 5 Loading conditions for the seventy-two-bar space truss

Table 6 shows the results obtained by the SAHS [23], TLBO [42] and the other optimization methods reported in the current literature [13, 25, 28, 33, 34, 37]. The TLBO [42] found the lightest design overall, because the lighter design obtained by the HS [13] violates the design constraints. It is seen from Table 6 that the BB-BC [34] found a minimum weight of 379.85 lb after 19,621 structural analyses for case 1, while the TLBO [42] reached the same weight after 8422 structural analyses. Figure 6 shows the convergence histories of the SAHS [23] and TLBO [42].

Table 6 Optimization results obtained for the seventy-two-bar space truss
Fig. 6
figure 6

Convergence histories for the seventy-two-bar space truss

6 Conclusions

The SAHS and TLBO obtained results as good as or better than those of other metaheuristic optimization methods in terms of both the optimum solutions and the convergence capability. Although the TLBO developed slightly heavier designs than the PSOPC, HPSO and ABC-AP in a few cases, it required significantly fewer structural analyses than these methods in all design examples. It should also be noted that the standard deviation of the optimized weights over the 20 independent runs was quite small, less than 1.0 % of the average optimized weight in all design examples. This indicates that the SAHS and TLBO are able to find a nearly global optimum design.