Introduction

Nonlinearity is pervasive in real-world optimization problems and considerably complicates the search space, making the global optimum difficult to locate. Metaheuristic techniques frequently provide satisfactory solutions to such problems; hence, they have been a major research focus in recent decades (Yang 2010). These techniques usually mimic natural or social phenomena, such as the biological evolutionary process [e.g., genetic algorithm (GA) (Sivaraj and Ravichandran 2011), differential evolution (DE) (Das and Suganthan 2011) and biogeography-based optimization (BBO) (Simon 2008)] or animal behavior [e.g., particle swarm optimization (PSO) (Eberhart et al. 1995), artificial bee colony (ABC) (Karaboga and Basturk 2007), ant colony optimization (ACO) (Dorigo et al. 2006) and cuckoo search (CS) (Yang and Deb 2009)].

The harmony search (HS) algorithm is a recently developed meta-heuristic algorithm (Geem et al. 2001). It imitates the music improvisation process during which musicians continuously adjust the pitch of their instruments to obtain improved harmony. The optimization process is similar to the music improvisation process. Each decision variable continuously changes its value during the search process to converge to the global best. HS has received significant attention and has been applied in many areas such as renewable energy systems (Askarzadeh and Zebarjadi 2014; Maleki and Pourfayaz 2015), image registration (García-Torres et al. 2014), robotics (Koceski et al. 2014; Kundu and Parhi 2016), job shop scheduling (Gao et al. 2014b), transportation (Zheng et al. 2016; Hosseini et al. 2014; Yassen et al. 2015), assembly sequence planning (Li et al. 2016), computer experiment designs (Yi et al. 2016b) and wireless sensor networks (Zeng and Dong 2016). A more detailed survey of HS applications was summarized by Manjarres et al. (2013). Several studies have been proposed for obtaining improved HS performance and gaining a superior understanding of its concept, including exploratory power (Das et al. 2011), selection strategies (Al-Betar et al. 2013), stochastic analysis (Sarvari and Zamanifar 2012) and iterative convergence (Gao et al. 2014a).

With the development of nonlinear dynamics, the concept of chaos has been successfully applied in the area of optimization methods. Chaotic sequences have previously been embedded into metaheuristic methods such as particle swarm optimization (Gao et al. 2012; Gandomi et al. 2013c), the memetic differential evolution algorithm (Jia et al. 2011), the artificial bee colony algorithm (Alatas 2010), the firefly algorithm (Gandomi et al. 2013b), the imperialist competitive algorithm (Talatahari et al. 2012), bat swarm optimization (Jordehi 2015), the krill herd algorithm (Wang et al. 2014) and harmony search (Alatas 2010). In the above methods, sequences generated from different chaotic systems are used in place of random numbers for different parameters. One of the advantages of chaotic sequences is that chaos can promote order from disorder. Similarly, many metaheuristic methods are inspired by biological systems where order can be realized from disorder. Owing to the similarities between chaos and metaheuristic methods, the use of chaos seems to improve the performance of metaheuristics (Kaveh 2014).

However, chaos is sensitive to its initial conditions, and even small changes in the parameters or the starting values can lead to vastly different future behaviors. Consequently, the performance of chaos-based optimization algorithms can be unstable. To increase the robustness of the optimization algorithm, in this paper a parallel chaotic local search method is proposed and embedded into a modified harmony search with an intersect mutation operation. The proposed new harmony search variant is called parallel chaotic local search enhanced harmony search (MHS–PCLS). MHS–PCLS maintains the basic structure of HS and adopts an intersect mutation operation in the process of improvising new harmonies, which helps to increase the diversity of the harmony memory. The parallel chaotic local search is used to reduce the sensitivity to the initial conditions of the chaotic maps and to enhance the exploitation ability of MHS–PCLS. Several well-known numerical benchmark problems are tested to demonstrate the efficiency, accuracy and robustness of the new approach. The MHS–PCLS method is further combined with the improved Deb's constraint handling method for constrained optimization problems, which frequently occur in real-world engineering design. The performance of the extended method is validated on several constrained engineering design problems and a complex case study of car side impact design. The test results confirm that MHS–PCLS outperforms other algorithms.

The remainder of this paper is organized as follows. In "Previous work" section, the previous work on harmony search is introduced. The new MHS–PCLS algorithm is proposed in "Proposed approach" section. "Numerical simulations and analysis" section provides a numerical simulation of MHS–PCLS on several benchmark functions. The performance of MHS–PCLS combined with a modified Deb's constraint handling technique is tested on several constrained engineering design problems in "Constrained engineering design problems" section. A case study is investigated in "Case study: car side impact design" section. Finally, "Conclusions" section offers the concluding remarks.

Previous work

The harmony search algorithm is simple in concept and easy to implement, with only a small number of parameters (Zarei et al. 2009). However, the performance of the basic HS is not satisfactory; hence, many previous studies have been conducted to improve its performance.

Table 1 Information of the considered chaotic maps

Mahdavi et al. (2007) proposed an improved harmony search (IHS) algorithm. IHS applies the same memory consideration, pitch adjustment and random selection as the basic HS algorithm; however, it dynamically updates the values of the parameters such as the pitch adjustment rate (PAR) and the bandwidth (BW).

Inspired by the concept of swarm intelligence as used in PSO (Eberhart et al. 1995), Omran and Mahdavi (2008) proposed another variant of the HS algorithm, called the global-best harmony search (GHS) algorithm. In GHS, the improvised new vector directly adopts the current best pitch from the harmony memory, which simplifies the pitch adjustment step. Moreover, the parameters in GHS are dynamically adapted using the same procedure as IHS. However, this variant has a serious deficiency in that it may generate infeasible solutions, because it ignores the differences between the decision variables of different dimensions.

To overcome this deficiency of GHS, Pan et al. (2010) proposed a self-adaptive global best harmony search (SGHS) algorithm. The SGHS algorithm employs a new improvisation scheme: the memory consideration procedure of HS is augmented with an additional adjustment, and the pitch adjustment rule is modified to adopt the corresponding value from the current best harmony. The SGHS algorithm does not require precise, problem-specific settings of the critical parameters such as the harmony memory consideration rate (HMCR), PAR and BW. Instead, the HMCR and PAR parameters are dynamically updated towards a suitable range by recording the historical values that produced harmonies entering the \({{\varvec{HM}}}\), and the value of the BW parameter is decreased dynamically as the generations progress.

Enayatifar et al. (2013) proposed a novel HS algorithm based on learning automata, called LAHS. In the LAHS algorithm, a learning ability is employed in the HS to select the parameters based on spontaneous reactions. To begin, all the parameter values are randomly selected based on a specific probability distribution, e.g., a uniform distribution. Then, a reinforcement signal is returned to the parameters after each fitness evaluation. If the fitness value improves, the signal is a reward; otherwise, it is a penalty. The learning automaton employs this feedback to update the existing parameter probability distributions. Repeating this process increases the probability of the better parameter values, and the most favorable parameter values are eventually determined.

In other research works, selection strategies have been introduced into the HS algorithm. Al-Betar et al. (2012) proposed several novel selection schemes to replace the random selection scheme in the memory consideration operation of HS. The novel selection schemes employed the natural selection principle of “survival of the fittest” to generate the new harmony by focusing on the better solutions stored in \({{\varvec{HM}}}\). Computational results verified that the novel selection schemes directly influenced the performance of HS. In another research work, Castelli et al. (2014) proposed a geometric selective HS (GSHS). In GSHS, tournament selection is used to choose two parents for generating new harmonies by the newly defined mutation operation.

Other researchers have attempted to improve the performance of HS by modifying the algorithm structure. For example, Al-Betar et al. (2015) proposed an island-based HS (iHS) algorithm. iHS borrows the concept of the island model, which has been successfully applied in other evolutionary algorithms (EAs). According to the island model, iHS divides the HM into several sub-populations (islands) and each island evolves independently for a number of iterations. Periodically, the islands interact through a migration process, which sends and receives individuals across islands and is controlled by the migration rate and migration frequency.

Hybridization of different metaheuristics is also an efficient way to enhance algorithm performance, as it can combine their advantages and offset their shortcomings. Along this line of research, the HS algorithm has been successfully hybridized with PSO and DE to yield improved performance (Zhao et al. 2015; Abedinpourshotorban et al. 2016).

Proposed approach

Chaotic sequence

In meta-heuristic optimization, random sequences with a long period and good uniformity are always required (Schuster and Just 2006). Chaos is the complex, seemingly random behavior exhibited by deterministic nonlinear systems (Wang and Yao 2009). It has the characteristics of long-term unpredictability, non-periodicity, non-convergence and boundedness. Further, it depends sensitively on its initial conditions and parameters (Schuster and Just 2006). Phatak and Rao (1995) showed that chaotic sequences can be used as a source of random numbers. In this study, eight well-known one-dimensional chaotic maps from Baykasoglu (2012) are considered. Detailed information on these chaotic maps is presented in Table 1.
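As an illustration of how such a chaotic sequence is produced in practice, the following minimal Python sketch iterates the logistic map, one well-known one-dimensional map, in its fully chaotic regime; the parameter r = 4 and the initial value are common illustrative choices rather than settings taken from Table 1. The second half of the snippet demonstrates the sensitivity to initial conditions mentioned above.

```python
import numpy as np

def logistic_map_sequence(length, x0=0.7, r=4.0):
    """Generate a chaotic sequence in (0, 1) with the logistic map
    x_{k+1} = r * x_k * (1 - x_k); r = 4 gives the fully chaotic regime.
    x0 should avoid the map's fixed/periodic points (0, 0.25, 0.5, 0.75, 1)."""
    seq = np.empty(length)
    x = x0
    for k in range(length):
        x = r * x * (1.0 - x)
        seq[k] = x
    return seq

# Sensitivity to initial conditions: two sequences started 1e-10 apart
# agree at first but diverge completely after a few dozen iterations.
a = logistic_map_sequence(10000, x0=0.7)
b = logistic_map_sequence(10000, x0=0.7 + 1e-10)
print(np.abs(a[:3] - b[:3]), np.abs(a[-3:] - b[-3:]))
```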

To investigate the distribution and ergodic properties of these chaotic maps, each map was run for 10,000 iterations and the distribution of the generated values in the range (0, 1) was recorded. The chaotic sequences of the Chebyshev map and the ICMIC map are normalized. The distribution properties of the eight maps are displayed in Fig. 1, which indicates that the distribution and ergodic properties of these chaotic maps differ. Previous studies have demonstrated that the probability distribution of a chaotic map significantly affects the global searching capability and optimization efficiency (Yang et al. 2014; Yuan et al. 2014). Therefore, it is advantageous to employ the chaotic map that yields the higher convergence rate and accuracy; this choice is discussed in "Performance of MHS–PCLS with different chaotic maps" section.

Fig. 1 Distribution property comparisons of eight different chaotic maps

Basic HS algorithm

In the basic HS algorithm, each solution is called a harmony and is represented by an N-dimensional real vector. An initial population of harmony vectors is randomly generated and stored in the harmony memory (HM). Then, a new candidate harmony is improvised from all of the solutions in the HM using a memory consideration rule, a pitch adjustment rule and random re-initialization. Finally, the HM is updated by comparing the fitness of the new candidate harmony and the worst harmony in the current HM: the worst harmony vector is replaced by the new candidate if the candidate is better. The improvisation and updating process repeats until a predefined termination criterion is reached. The pseudo code of HS is illustrated in Algorithm 1, and a more detailed description of HS can be found in Mahdavi et al. (2007).

Algorithm 1 Pseudo code of the basic HS algorithm
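As a concrete complement to Algorithm 1, a minimal Python sketch of the improvisation and update loop described above is given here. It is a plain textbook HS for a minimization problem, not the authors' implementation; the parameter values and the range-relative bandwidth are illustrative assumptions.

```python
import numpy as np

def basic_hs(obj, lb, ub, hms=30, hmcr=0.9, par=0.3, bw=0.01, max_fes=50000):
    """Minimal harmony search: memory consideration, pitch adjustment,
    random re-initialization and worst-member replacement (minimization)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = len(lb)
    hm = lb + np.random.rand(hms, n) * (ub - lb)          # harmony memory
    fit = np.array([obj(x) for x in hm])
    fes = hms
    while fes < max_fes:
        new = np.empty(n)
        for k in range(n):
            if np.random.rand() < hmcr:                   # memory consideration
                new[k] = hm[np.random.randint(hms), k]
                if np.random.rand() < par:                # pitch adjustment
                    new[k] += (2 * np.random.rand() - 1) * bw * (ub[k] - lb[k])
            else:                                         # random re-initialization
                new[k] = lb[k] + np.random.rand() * (ub[k] - lb[k])
        new = np.clip(new, lb, ub)
        f_new = obj(new)
        fes += 1
        worst = np.argmax(fit)
        if f_new < fit[worst]:                            # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    best = np.argmin(fit)
    return hm[best], fit[best]

# Example: 30-dimensional sphere function.
sphere = lambda x: float(np.sum(x ** 2))
print(basic_hs(sphere, [-100] * 30, [100] * 30, max_fes=20000)[1])
```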

Proposed MHS–PCLS algorithm

HS algorithm with intersect mutation operator

In our previous study, a modified HS variant (MHS) with an intersect mutation operator was proposed to solve continuous function optimization problems (Yi et al. 2016a). The core idea of MHS is to apply an intersect mutation operation between the better part and the worse part of the harmony memory to maintain its diversity. The intersect mutation operation has proven to be efficient and is adopted in this study. Figure 2 presents a schematic drawing of the intersect mutation operation. To begin, all of the harmony vectors are divided into two parts based on their fitness values. We introduce a constant coefficient \(M (0<M<1)\), which denotes the proportion of better harmonies in the harmony memory pool; in this research, we set \(M=0.4\). For the better part, we mutate the vectors with one harmony \((wr_1)\) chosen from the worse part and two harmonies (\(br_1\) and \(br_2\)) chosen from the better part, as the formula below indicates:

$$\begin{aligned} x_i=x_{wr_1}+F\cdot (x_{br_1}-x_{br_2}), br_1\ne br_2\ne wr_1\ne i \end{aligned}$$
(1)

where \(F (0<F<1)\) is the mutation parameter. F is set to 0.5 in this study.

For the worse part, we mutate the vectors with one harmony \((br_1)\) chosen from the better part and two harmonies (\(wr_1\) and \(wr_2\)) chosen from the worse part, as the formula below indicates:

$$\begin{aligned} x_i=x_{br_1}+F\cdot (x_{wr_1}-x_{wr_2}), wr_1\ne wr_2\ne br_1\ne i \end{aligned}$$
(2)
Fig. 2 Schematic drawing of the proposed intersect mutation operation

The detailed pseudo code explaining the various steps of improvising new harmonies is exhibited in Algorithm 2 for easy implementation and understanding of the proposed approach.

Algorithm 2 Pseudo code for improvising new harmonies with the intersect mutation operation
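As a concrete complement to Algorithm 2, the sketch below implements only the mutation formulas of Eqs. (1) and (2) for a minimization problem; the memory consideration, pitch adjustment and bound handling of Algorithm 2 are omitted, and the index bookkeeping is a simplifying assumption rather than the authors' exact scheme.

```python
import numpy as np

def intersect_mutation(hm, fit, M=0.4, F=0.5):
    """Apply Eqs. (1) and (2): each harmony of the better part is mutated from
    one worse harmony plus a scaled difference of two better ones, and each
    harmony of the worse part from one better harmony plus a scaled difference
    of two worse ones. Returns one mutant vector per harmony."""
    hms, n = hm.shape
    order = np.argsort(fit)                    # ascending fitness: best first
    n_better = max(3, int(M * hms))            # assumes HMS is large enough for both parts
    better, worse = order[:n_better], order[n_better:]
    mutants = np.empty_like(hm)
    for i in range(hms):
        if i in better:                        # Eq. (1): i belongs to the better part
            wr1 = np.random.choice(worse)
            br1, br2 = np.random.choice(better[better != i], 2, replace=False)
            mutants[i] = hm[wr1] + F * (hm[br1] - hm[br2])
        else:                                  # Eq. (2): i belongs to the worse part
            br1 = np.random.choice(better)
            wr1, wr2 = np.random.choice(worse[worse != i], 2, replace=False)
            mutants[i] = hm[br1] + F * (hm[wr1] - hm[wr2])
    return mutants                             # bound handling is left to the caller
```

In the full Algorithm 2 the mutant vectors would presumably still pass through the usual HMCR/PAR machinery before the memory update; this sketch isolates only the intersect mutation step.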

Parallel chaotic local search

As mentioned in "Chaotic sequence" section, chaos is sensitive to its initial conditions, which leads to an algorithm with low robustness. To increase the robustness of the algorithm, a parallel chaotic local search (PCLS) is proposed in this paper. The PCLS method searches from several different initial points and hence can diminish the sensitivity to the initial conditions.

For an optimization problem with decision variables \({{\varvec{x}}}=(x_1,x_2,\ldots ,x_N)\), N represents the number of decision variables. In the proposed method, each decision variable is mapped by m chaotic maps simultaneously. In the PCLS, an \(m\times N\) chaotic matrix \({{\varvec{CMS}}}\) is generated by

$$\begin{aligned} {{\varvec{CMS}}}_i = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \delta ^i_{11} &{}\delta ^i_{12}&{} \ldots &{} \delta ^i_{1N} \\ \delta ^i_{21} &{}\delta ^i_{22}&{} \ldots &{} \delta ^i_{2N} \\ \vdots &{} \ldots &{} \ldots &{} \vdots \\ \delta ^i_{m1} &{}\delta ^i_{m2}&{} \ldots &{} \delta ^i_{mN} \end{array} \right] _{m\times N} \end{aligned}$$
(3)

where \(\delta ^i_{jk}\) is a random number in the range of (0, 1) generated by the chaotic maps. i is the ith generation, j is the jth chaotic map and k is the kth variable. \(\delta ^i_{jk}\) is updated in every generation by the chaotic maps mentioned in “Chaotic sequence” section.

The candidate solutions in the PCLS method are generated by the following formulas:

$$\begin{aligned}&{{\varvec{CM}}}_i=\lambda {{\varvec{CB}}}_i+(1-\lambda ) {{\varvec{RS}}}_i \end{aligned}$$
(4)
$$\begin{aligned}&{{\varvec{CB}}}_i = \left[ \begin{array}{c} {{{\varvec{X}}}}_i \\ {{{\varvec{X}}}}_i \\ \vdots \\ {{{\varvec{X}}}}_i \end{array} \right] _{m\times N} \end{aligned}$$
(5)
$$\begin{aligned}&{{{\varvec{X}}}}_i = \left[ \begin{array}{cccc} x^1_i&x^2_i&\ldots&x^N_i\end{array} \right] \end{aligned}$$
(6)

where \({{\varvec{CM}}}_i\) is the candidate matrix of PCLS in the ith generation, \({{\varvec{CB}}}_i\) is the matrix consisting of m copies of the individual \({{{\varvec{X}}}}_i\), and \({{\varvec{RS}}}_i\) is generated by the equation \({{\varvec{RS}}}_i={{\varvec{LB}}}+{{\varvec{CMS}}}_i\cdot ({{\varvec{UB}}}-{{\varvec{LB}}})\), where \([{{\varvec{LB}}},{{\varvec{UB}}}]\) is the search space of the individual \({{{\varvec{X}}}}_i\). \(\lambda \) is the weighting factor.

Equation (4) is based on the geometric crossover operation (Moraglio et al. 2013), in which the offspring \({{\varvec{CM}}}\) is a linear combination of the two parents \({{\varvec{CB}}}\) and \({{\varvec{RS}}}\). The generated offspring always lies on the segment delimited by the two parents, and the greater the value of \(\lambda \), the closer the offspring is to the current best. Because the local search should produce offspring around the current best, a greater value of \(\lambda \) is preferred. Moreover, to avoid the local search becoming trapped in local optima, a dynamic adjustment strategy for \(\lambda \) based on the logistic curve (Pearl and Reed 1920) is adopted in this paper:

$$\begin{aligned} \lambda = 0.9+0.1\cdot \frac{1+e^{-5}}{1+e^{10\left( 0.5-\frac{FEs}{mFEs}\right) }} \end{aligned}$$
(7)

where FEs is the current number of function evaluations, and mFEs is the maximum number of function evaluations. Figure 3 presents the trend of \(\lambda \) with increasing FEs, which assumes an S shape. Because \(\lambda \) increases throughout the optimization process, the step length of PCLS decreases; hence, PCLS emphasizes exploration in the early phase of the optimization and refines the solutions in the later phase.
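To make Eqs. (3)–(7) concrete, the sketch below builds the candidate matrix \({{\varvec{CM}}}_i\) around the current best harmony and stops as soon as a better solution is found, in line with the early-termination rule described below. The logistic map stands in for the m parallel chaotic sequences (the ICMIC map is selected later in the paper), so this is a sketch under simplifying assumptions rather than the reference implementation.

```python
import numpy as np

def lam(fes, max_fes):
    """Eq. (7): S-shaped schedule of the weighting factor lambda."""
    return 0.9 + 0.1 * (1 + np.exp(-5)) / (1 + np.exp(10 * (0.5 - fes / max_fes)))

def pcls(obj, x_best, f_best, lb, ub, cms, fes, max_fes, s_l):
    """Parallel chaotic local search around the current best harmony x_best.

    cms is the (m, N) matrix of chaotic variables in (0, 1) of Eq. (3); here it
    is iterated with the logistic map. Returns the (possibly improved) best
    solution and fitness, plus the updated cms and FEs counter."""
    m, _ = cms.shape
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    for _ in range(s_l):                       # at most S_l local-search steps
        cms = 4.0 * cms * (1.0 - cms)          # iterate all m*N chaotic variables
        rs = lb + cms * (ub - lb)              # RS_i
        cb = np.tile(x_best, (m, 1))           # CB_i: m copies of the current best
        w = lam(fes, max_fes)
        cm = w * cb + (1.0 - w) * rs           # Eq. (4)
        for cand in cm:
            f = obj(cand)
            fes += 1
            if f < f_best:                     # stop once a superior fitness is found
                return cand.copy(), f, cms, fes
    return x_best, f_best, cms, fes

# cms can be initialized as np.random.rand(m, N), avoiding the logistic map's
# fixed points; x_best is a NumPy vector holding the current best harmony.
```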

Fig. 3 Trend of \(\lambda \) with increasing FEs

Although the PCLS method can enhance the search ability of the MHS algorithm, it also increases the computational cost. To avoid premature convergence and save FEs, PCLS terminates when a superior fitness is obtained. The procedure of the method is presented in the following steps (a compact code sketch is given after the step list) and the flowchart is provided in Fig. 4.

  • Step 1 Initialization

    1.1. Set the parameters HMS, HMCR, PAR, the maximum iteration number (NI), the proportion of the better part in the harmony memory \(M (0<M<1)\) and the mutation parameter F.

    1.2. Initialize all harmonies in \({{\varvec{HM}}}\) within the search space and evaluate their fitness.

    1.3. Divide all the harmonies into two parts based on their fitness.

  • Step 2 Iteration

    2.1. Improvise new harmonies with the intersect mutation as indicated in Algorithm 2.

    2.2. Divide all the harmonies into two parts again and identify the current best harmony in the harmony memory pool.

  • Step 3 Apply PCLS to the current best harmony.

  • Step 4 Check the termination criterion. If it is met, output the best solution; otherwise, return to Step 2.
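As referenced before the step list, the following compact end-to-end skeleton wires Steps 1–4 together, reusing the intersect_mutation and pcls sketches given earlier and the parameter values recommended later in "Numerical simulations and analysis" section. It omits the HMCR/PAR improvisation details of Algorithm 2 and is a toy illustration of the control flow, not the authors' code.

```python
import numpy as np

def mhs_pcls(obj, lb, ub, hms=100, M=0.4, F=0.5, n_p=5, max_fes=50000):
    """Toy skeleton of Steps 1-4 of MHS-PCLS (minimization)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = len(lb)
    # Step 1: initialize the harmony memory and evaluate every harmony.
    hm = lb + np.random.rand(hms, n) * (ub - lb)
    fit = np.array([obj(x) for x in hm])
    fes = hms
    cms = np.random.rand(n_p, n)               # chaotic variables for PCLS
    s_l = max(1, n // 5)                       # local-search length S_l = N/5
    while fes < max_fes:                       # Step 4: termination check
        # Step 2.1: improvise new harmonies (intersect mutation part only).
        for cand in intersect_mutation(hm, fit, M=M, F=F):
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            fes += 1
            worst = np.argmax(fit)
            if f < fit[worst]:                 # keep the candidate if it beats the worst
                hm[worst], fit[worst] = cand, f
        # Steps 2.2 and 3: identify the current best and apply PCLS to it.
        b = np.argmin(fit)
        hm[b], fit[b], cms, fes = pcls(obj, hm[b].copy(), fit[b],
                                       lb, ub, cms, fes, max_fes, s_l)
    b = np.argmin(fit)
    return hm[b], fit[b]

# Example: 30-dimensional sphere function, as in the numerical experiments.
sphere = lambda x: float(np.sum(x ** 2))
print(mhs_pcls(sphere, [-100] * 30, [100] * 30, max_fes=20000)[1])
```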

Fig. 4 Flowchart of the proposed MHS–PCLS

Table 2 Test functions

Numerical simulations and analysis

Experimental setup

In this section, the performance of the proposed MHS–PCLS algorithm is compared with that of the original HS algorithm and its prominent variants IHS, GHS, SGHS and LAHS. All of the parameter settings were the same as in Enayatifar et al. (2013). Because MHS–PCLS contains a local search operation, it consumes additional function evaluations (FEs). To obtain an unbiased comparison, the termination criterion was set to 50,000 FEs for all compared algorithms. Several well-studied benchmark problems, listed in Table 2, were used as test functions. For all of the benchmark functions, the dimension N was set to 30. Based on our empirical experience (Yi et al. 2016a), the parameters of MHS–PCLS were set as follows: harmony memory size \(HMS=100\), \(HMCR=0.995\), \(PAR=0.90\), \(M=0.4\), and \(F=0.5\).

Table 3 Ranking of the performance of MHS–PCLS with different chaotic maps
Table 4 Mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results (\(N=30\))

A proper search length for the local search is important for PCLS. A small length may be inefficient in exploring the best solution and may therefore be unsuccessful at improving the search quality. Conversely, with a longer length, the PCLS may consume additional function evaluations unnecessarily. Based on the recommendations in the literature (Jia et al. 2011), the maximum search length of PCLS in each iteration was set to \(S_l=N/5\).

The number of parallel chaotic maps is also important for PCLS. More parallel chaotic maps can increase the robustness of the algorithm; however, they may consume additional function evaluations unnecessarily. As a compromise between these two factors, the number of parallel chaotic maps was set to \(N_p=5\).

Performance of MHS–PCLS with different chaotic maps

As discussed in "Chaotic sequence" section, the search capability with different chaotic maps can differ in terms of convergence rate and accuracy. In this section, the performance of MHS–PCLS with different chaotic maps is compared to identify the best option. To perform a fair comparison, all other parameter settings of MHS–PCLS remained the same. Each benchmark function was tested for 100 runs and the maximum number of function evaluations mFEs was set to 10,000. The variants were ranked based on the mean value over the 100 runs; the results are presented in Table 3. MHS–PCLS with the ICMIC map provided the best performance, which indicates that the ICMIC map has the potential to generate superior solutions. Hence, the ICMIC map is selected for the subsequent experiments.

Comparison of the optimization results and computation time among different methods

Table 4 displays the results on all test functions obtained by HS, IHS, GHS, SGHS, LAHS and the proposed MHS–PCLS. Each benchmark problem was run for 30 independent replications with each of the compared algorithms. In Table 4, AE and SD are the average value over the 30 runs and the corresponding standard deviation, respectively. MHS–PCLS generated the best results for seven of the nine functions. For function \(f_4\), both SGHS and MHS–PCLS obtained the global optimum. For functions \(f_6\) and \(f_7\), the results obtained by MHS–PCLS were inferior to those of SGHS. To illustrate the evolution process of the compared algorithms, the convergence curves are given in Fig. 5. It can be observed from Fig. 5 that for the majority of the functions, MHS–PCLS converged marginally more slowly than the other algorithms during the initial stage of the evolution process; however, it always overtook them during the later iterations. The average computation times of the compared methods are presented in Table 5. Although MHS–PCLS expends extra computational cost on the local search, its total computation time does not differ significantly from that of the other methods, most likely because the number of FEs is the same for all methods.

Wilcoxon's rank sum test

Fig. 5 Convergence graphs of the six compared algorithms on the nine benchmark functions (\(f_1-f_9\))

Table 5 The mean computation time of different methods (in s)
Table 6 P-values calculated for Wilcoxon’s Rank Sum Test for all of the benchmark problems
Table 7 The effect of HMS on the mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results
Table 8 The effect of HMCR on the mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results
Table 9 The effect of PAR on the mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results
Table 10 The effect of \(N_p\) on the mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results
Table 11 The effect of \(S_l\) on the mean and standard deviation \((\pm SD )\) of the benchmark functions optimization results

To judge whether the results obtained with the MHS–PCLS algorithm differ from the results of the other algorithms in a statistically significant manner, a nonparametric statistical test, Wilcoxon's rank sum test for independent samples, was conducted. The significance level was set to 5 %. The P-values for the compared HS variants over all nine benchmark functions are provided in Table 6. In Table 6, the P-values in each row are calculated from the rank sum between the algorithm that obtained the best solution and each of the remaining algorithms; hence, NA appears for the algorithm that obtained the best solution for the corresponding benchmark function. The differences between the algorithm that obtained the best solution and the other algorithms are considered significant if the P-values are less than 0.05. It can be observed from Table 6 that MHS–PCLS achieved statistically superior performance compared to all other HS variants for six of the nine benchmark functions.
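Such a comparison is straightforward to reproduce with standard statistical software. The sketch below applies SciPy's implementation of the Wilcoxon rank-sum test to two hypothetical samples of best fitness values (30 independent runs each); the data are synthetic placeholders, not the values behind Table 6.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical best-fitness samples from 30 independent runs of two algorithms.
results_a = rng.lognormal(mean=-8.0, sigma=0.5, size=30)   # e.g. MHS-PCLS
results_b = rng.lognormal(mean=-6.0, sigma=0.5, size=30)   # e.g. a compared HS variant

stat, p_value = ranksums(results_a, results_b)             # Wilcoxon rank-sum test
print(f"p-value = {p_value:.3e}")
if p_value < 0.05:                                          # 5 % significance level
    print("the difference is statistically significant")
```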

Effects of changing the parameters on the performance of the MHS–PCLS

In this subsection, the effects of changing \(HMS, HMCR, PAR, N_p\) and \(S_l\) on the performance of MHS–PCLS are investigated. Tables 7, 8, 9, 10 and 11 present the effects of these parameters on the mean and standard deviation \((\pm SD )\) of the benchmark function optimization results over 30 independent runs. It can be observed in Table 7 that the performance of MHS–PCLS with a smaller HMS is inferior to that with a larger HMS, because a smaller HMS provides poor solution diversity and the algorithm can easily become trapped in local minima. Thus, a large value of HMS is suggested. As indicated in Table 8, the performance differences among different values of HMCR are significant, and the greater the HMCR, the better the performance; therefore, a large value of HMCR is recommended. Table 9 indicates that there is no single best choice for PAR; however, a relatively large value (such as PAR \(=\) 0.5, 0.7 or 0.9) appears to improve the performance of MHS–PCLS. As illustrated in Tables 10 and 11, \(N_p\) and \(S_l\) have similar effects on the performance of MHS–PCLS, and moderate values of both appear to be the most suitable.

Constrained engineering design problems

In this section, the proposed MHS–PCLS algorithm is applied to solve several constrained engineering problems. These well-known examples have been previously solved with a variety of optimization techniques; hence, the results of the proposed approach can be compared with them to validate its effectiveness and efficiency. The parameter settings of MHS–PCLS are the same as in "Numerical simulations and analysis" section.

Modified Deb’s heuristic constrained handling method

Before applying the MHS–PCLS algorithm to constrained engineering problems, the constraint handling method must first be determined. In general, a constrained optimization problem with m variables and n constraints can be stated as follows:

$$\begin{aligned}&Minimize: \quad f({{{\varvec{x}}}}), \end{aligned}$$
(8)
$$\begin{aligned}&s.t.\left\{ \begin{array}{rcl} &{}&{}{g_j({{{\varvec{x}}}})\le 0, j=1,2\ldots ,q} \\ &{}&{}{h_j({{{\varvec{x}}}})= 0, j=q+1,q+2\ldots ,n} \\ &{}&{}{LB_i\le x_i\le UB_i,i=1,2,\ldots ,m} \end{array} \right. \end{aligned}$$
(9)

where \({{{\varvec{x}}}}=(x_1,x_2,\ldots ,x_m )\) is the solution vector, \(f({{{\varvec{x}}}})\) is the objective function, and \(g({{{\varvec{x}}}})\) and \(h({{{\varvec{x}}}})\) are the inequality and equality constraints, respectively. The values of \(LB_i\) and \(UB_i\) are the lower and upper bounds of \(x_i\), respectively.

There is a variety of constraint-handling techniques for constrained optimization problems (Mezura-Montes and Coello 2011; Kramer 2010). The majority of these are based on the penalty function method because of its simplicity. However, even though the penalty function method is simple and competitive for some numerical optimization problems, defining appropriate penalty parameters is a significant challenge, and the choice strongly affects how feasible and infeasible solutions are treated. Deb (2000) proposed a parameter-less penalty strategy. The primary idea of Deb's method is to distinguish infeasible from feasible solutions and to prefer a feasible solution, or otherwise the relatively best infeasible solution. However, the main drawback of this method is that it is prone to premature convergence, probably owing to its strong preference for feasible solutions. Mohamed and Sabry (2012) therefore proposed a modified Deb's method to address this problem. According to the new comparison rules, a new improvised harmony can replace the harmony stored in the \({{\varvec{HM}}}\) if any of the following rules is true:

  1. The new improvised harmony is feasible and the corresponding harmony in \({{\varvec{HM}}}\) is infeasible.

  2. The new improvised harmony and the corresponding harmony in \({{\varvec{HM}}}\) are both feasible; however, the new improvised harmony has a smaller or equal fitness value compared to the corresponding harmony in \({{\varvec{HM}}}\).

  3. The new improvised harmony and the corresponding harmony in \({{\varvec{HM}}}\) are both infeasible; however, the new improvised harmony has a smaller or equal overall constraint violation compared to the corresponding harmony in \({{\varvec{HM}}}\).

The overall constraint violation is calculated by the following steps: first, a tolerance \(\varepsilon \) is allowed for equality constraints; the constraint violation of a decision vector or an individual \({{{\varvec{x}}}}\) on the jth constraint is calculated by

$$\begin{aligned} cv_j({{{\varvec{x}}}}) = \left\{ \begin{array}{rcl} &{} &{}{max(0,g_j({{{\varvec{x}}}})),j=1,2,\ldots ,q} \\ &{} &{}{max(0,|h_j({{{\varvec{x}}}})|-\varepsilon ),j=q+1,q+2,\ldots ,n} \end{array} \right. \end{aligned}$$
(10)

Then, the overall violation of n constraints can be calculated by:

$$\begin{aligned} cv({{{\varvec{x}}}}) = \sum _{j=1}^{n} cv_j({{{\varvec{x}}}}) \end{aligned}$$
(11)

This simple modification reduces the probability of stagnation and helps the algorithm spread out and search through the entire solution space; therefore, it is adopted in MHS–PCLS to handle the constraints. To validate the performance of the combined method for solving constrained problems, several engineering design problems were tested; the results are presented in the following sections.
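A compact sketch of the overall constraint violation of Eqs. (10)–(11) and of the three replacement rules is given below; the equality tolerance \(\varepsilon \) and the helper names are illustrative assumptions, and the snippet only shows the comparison logic that MHS–PCLS would invoke when updating the \({{\varvec{HM}}}\).

```python
import numpy as np

def overall_violation(x, ineq_cons, eq_cons, eps=1e-4):
    """Eqs. (10)-(11): summed constraint violation of a candidate x.
    ineq_cons: callables g_j with g_j(x) <= 0; eq_cons: callables h_j with h_j(x) = 0."""
    cv = sum(max(0.0, g(x)) for g in ineq_cons)
    cv += sum(max(0.0, abs(h(x)) - eps) for h in eq_cons)
    return cv

def replaces(f_new, cv_new, f_old, cv_old):
    """Modified Deb's rules: should the new harmony replace the stored one?"""
    if cv_new == 0.0 and cv_old > 0.0:       # rule 1: feasible beats infeasible
        return True
    if cv_new == 0.0 and cv_old == 0.0:      # rule 2: both feasible -> compare fitness
        return f_new <= f_old
    if cv_new > 0.0 and cv_old > 0.0:        # rule 3: both infeasible -> compare violation
        return cv_new <= cv_old
    return False                             # new infeasible while stored one is feasible

# Toy usage with a single inequality constraint g(x) = x1 + x2 - 1 <= 0:
g = lambda x: x[0] + x[1] - 1.0
print(overall_violation(np.array([0.8, 0.4]), [g], []))    # violated by 0.2
```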

Tension/compression spring design problem

The tension/compression spring design problem was first introduced by Arora (2004) and is illustrated in Fig. 6. In this problem, the weight of the spring is minimized subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter. The design variables are the wire diameter \(d(x_1)\), the mean coil diameter \(D(x_2)\) and the number of active coils \(P(x_3)\). The mathematical model of this problem can be found in "Appendix".

This problem has been previously solved by a GA using dominance-based tournament selection (DGA) proposed by Coello and Montes (2002), an effective co-evolutionary particle swarm optimization approach (CPSO) proposed by He and Wang (2007a), hybrid particle swarm optimization (HPSO) proposed by He and Wang (2007b), co-evolutionary differential evolution (CDE) proposed by Huang et al. (2007), improved harmony search (IHS) proposed by Mahdavi et al. (2007), hybrid Nelder-Mead simplex search and particle swarm optimization (NM-PSO) proposed by Zahara and Kao (2009), another improved harmony search variant (IPHS) proposed by Jaberipour and Khorram (2010), teaching and learning based optimization (TLBO) proposed by Rao et al. (2011), the artificial bee colony (ABC) algorithm proposed by Akay and Karaboga (2012), the upgraded artificial bee colony (UABC) algorithm proposed by Brajevic and Tuba (2013), the mine blast algorithm (MBA) proposed by Sadollah et al. (2013), a hybrid cuckoo search algorithm based on the Solis and Wets local search technique with an augmented Lagrangian function for constraint handling (HCS-LSAL) proposed by Long et al. (2014), social spider optimization (SSO-C) proposed by Cuevas and Cienfuegos (2014), and the adaptive firefly algorithm (AFA) proposed by Baykasoğlu and Ozsoydan (2015). These algorithms are also used for comparison on the welded beam design problem and the pressure vessel design problem. A comparison of the best solutions among these algorithms is given in Table 12. The results of the compared methods are taken from the previous literature. It should be noted that the results obtained by NM-PSO are infeasible because the first two constraints were violated.

The statistical results of 30 independent runs of the considered methods and MHS–PCLS are presented in Table 13. As can be observed in Table 13, MHS–PCLS obtained the function value 0.0126652 in every independent run with 10,000 function evaluations. MBA required only 7650 function evaluations to obtain the function value 0.012665; however, its performance was statistically less robust than that of MHS–PCLS. The performance of TLBO is virtually the same as that of MHS–PCLS, whereas algorithms such as ABC, CPSO, HPSO, CDE and SSO-C are less robust than MHS–PCLS despite using more function evaluations.

Table 12 Comparison of the best solution obtained by various algorithms for the tension/compression spring design problem
Fig. 6 The tension/compression spring design problem

Table 13 Comparison of the statistical results of various algorithms for the tension/compression spring design problem

Welded beam design problem

The welded beam design problem was first proposed by Coello (2000). The aim of the problem is to design a welded beam of minimum cost subject to constraints on shear stress \((\tau )\), bending stress in the beam \((\sigma )\), buckling load on the bar \((P_c)\), end deflection of the beam \((\delta )\), and side constraints. There are four design variables: \(h(x_1)\), \(l(x_2)\), \(t(x_3)\) and \(b(x_4)\), as illustrated in Fig. 7. The mathematical model of this problem can be found in "Appendix".

A comparison of the best solutions given by the mentioned algorithms is presented in Table 14, and a comparison of the statistical results is given in Table 15. The best solution was obtained by NM-PSO with an objective function value of 1.724717 after 80,000 function evaluations, whereas the best solution obtained by MHS–PCLS was 1.724852 with only 10,000 function evaluations. From Table 15, it can be observed that the performance of MHS–PCLS was more stable than that of the other algorithms and that the mFEs of MHS–PCLS were fewer than those of the other algorithms, except for TLBO.

Fig. 7 Welded beam design problem

Table 14 Comparison of the best solution obtained by various algorithms for the welded beam problem
Table 15 Comparison of the statistical results given by various algorithms for the welded beam problem
Fig. 8 Pressure vessel design problem

Table 16 Comparison of the best solution obtained by various algorithms for the pressure vessel design problem
Table 17 Comparison of the statistical results given by various algorithms for the pressure vessel design problem

Pressure vessel design problem

In the pressure vessel design problem, proposed by Kannan and Kramer (1994), the aim is to minimize the total cost, which consists of the material, forming and welding costs. A cylindrical vessel is capped at both ends by hemispherical heads, as displayed in Fig. 8. There are four design variables: \(T_s(x_1\), thickness of the shell), \(T_h(x_2\), thickness of the head), \(R(x_3\), inner radius) and \(L(x_4\), length of the cylindrical section of the vessel, not including the head). Among the four variables, \(T_s\) and \(T_h\) are integer multiples of 0.0625 in., the available thicknesses of rolled steel plates, whereas R and L are continuous variables. The mathematical model of this problem can be found in "Appendix".

A comparison of the best solutions given by the mentioned algorithms is presented in Table 16, and a comparison of the statistical results is given in Table 17. The best solutions obtained by NM-PSO and MBA are infeasible because their first two design variables are not integer multiples of 0.0625. From Table 17, it can be observed that MHS–PCLS and TLBO obtained more stable results with fewer mFEs than the other algorithms.

Fig. 9 Speed reducer design problem

Table 18 Comparison of the best solutions obtained by various algorithms for the speed reducer design problem
Table 19 Comparison of the statistical results given by various algorithms for the speed reducer design problem

Speed reducer design problem

A speed reducer is part of the gear box of mechanical systems and is also used in many other applications (Cuevas and Cienfuegos 2014). The design of a speed reducer is considered a challenging optimization problem in mechanical engineering (Jaberipour and Khorram 2010). The aim of this problem is to minimize the weight of the speed reducer while considering constraints on the bending stress of the gear teeth, surface stress, transverse deflections of the shafts, and stresses in the shafts (see Fig. 9). The problem contains seven variables: \(b(x_1\), face width), \(m(x_2\), module of teeth), \(z(x_3\), number of teeth in the pinion), \(l_1(x_4\), length of the first shaft between bearings), \(l_2(x_5\), length of the second shaft between bearings), \(d_1(x_6\), diameter of the first shaft), and \(d_2(x_7\), diameter of the second shaft). This is a mixed integer programming problem: the third variable \(x_3\) (number of teeth) is an integer and all other variables are continuous. The mathematical model of this problem is given in "Appendix".

Fig. 10 The finite element model of the car side impact design (Gandomi et al. 2013a)

A comparison of the best solutions obtained by the previous methods is presented in Table 18, and Table 19 presents a comparison of the statistical results obtained by the different algorithms. The best solution of this problem was obtained by DSS-DE, DELC and MAL-DE with an objective function value of 2994.471066. As can be seen in Table 19, MHS–PCLS obtained similar results to DSS-DE, DELC and MAL-DE with significantly fewer mFEs.

Table 20 Statistical results of the car side impact design example by different methods

Case study: car side impact design

In this section, a case study of car side impact design is investigated. The side of a car is a weak part of the entire car body. According to a Chinese accident statistical report for the year 2007 (Zhou 2015), casualties caused by car side impact represented 36 % of the total accident casualties. Many countries have established test standards for car side impact. In this paper, the European Enhanced Vehicle-Safety Committee (EEVC) side impact procedure is used. The EEVC side impact procedure defines the minimum safety requirements for the dummy, covering head injury, abdominal load, pubic force and rib deflection. The main objective of the design is to minimize the total weight while satisfying additional constraints. These constraints are the load in the abdomen (\(g_1\)), dummy upper chest (\(g_2\)), dummy middle chest (\(g_3\)), dummy lower chest (\(g_4\)), upper rib deflection (\(g_5\)), middle rib deflection (\(g_6\)), lower rib deflection (\(g_7\)), pubic force (\(g_8\)), velocity of the B-Pillar at the middle point (\(g_9\)), and velocity of the front door at the B-Pillar (\(g_{10}\)). There are 11 decision variables: the thicknesses of the B-Pillar inner (\(x_1\)), B-Pillar reinforcement (\(x_2\)), floor side inner (\(x_3\)), cross members (\(x_4\)), door beam (\(x_5\)), door beltline reinforcement (\(x_6\)) and roof rail (\(x_7\)), the materials of the B-Pillar inner (\(x_8\)) and floor side inner (\(x_9\)), the barrier height (\(x_{10}\)), and the hitting position (\(x_{11}\)). The finite element model of a test case is illustrated in Fig. 10. Gu et al. (2001) developed a response surface model (RSM) of the objective function and the constraints based on Latin hypercube sampling. The mathematical model of this problem is provided in "Appendix".

In this case study, the result obtained by MHS–PCLS is compared with those of PSO, DE, GA, the firefly algorithm (FA) and the cuckoo search algorithm (CS). The results of these algorithms are taken from the previous literature (Gandomi et al. 2011, 2013a). The simulations were conducted with 20,000 FEs for all the algorithms. Because the number of independent runs is not reported in Gandomi et al. (2011) and Gandomi et al. (2013a), 30 independent runs were conducted for MHS–PCLS. The computation results are presented in Table 20. The results confirm that the proposed MHS–PCLS achieves the best results in terms of the Mean, Worst and SD indexes. Although MHS–PCLS does not obtain the best Best value, it has the smallest spread of solutions, which indicates the best robustness. It can be concluded that MHS–PCLS is competitive and more robust than PSO, DE, GA, FA and CS in solving this complex design problem.

Conclusions

A parallel chaotic local search enhanced harmony search (MHS–PCLS) algorithm was proposed in this paper. The algorithm conducts a parallel chaotic local search from several different initial points and thus reduces the sensitivity to the initial conditions of the chaotic maps, providing improved robustness in the search process. In the numerical simulations, the performance of MHS–PCLS with different chaotic maps was investigated, and the ICMIC map performed marginally better than the other maps. Several well-known benchmark problems were tested, and the results confirmed that MHS–PCLS achieved statistically superior performance compared to the other HS variants at a significance level of 5 %. MHS–PCLS was further combined with a modified Deb's constraint handling method to address constrained optimization problems. The test results on several engineering design optimization problems and a complex case study of car side impact design validated its effectiveness. MHS–PCLS not only obtains competitive results compared to previous methods, but also requires fewer function evaluations in the majority of cases. The proposed method is thus an efficient solver for constrained optimization problems. In the future, MHS–PCLS can be extended to solve engineering design optimization problems involving expensive simulations.