1 Introduction

Meta-heuristic algorithms (MAs), inspired by biological or physical phenomena, are popular tools for solving complex optimization problems. Some well-known examples of metaheuristics are the genetic algorithm [43], particle swarm optimization [19], ant colony optimization [8], the artificial bee colony algorithm [18], and so on. In the field of metaheuristics, the No Free Lunch theorem [44] plays an important role: it motivates the development of new algorithms by establishing that no single optimizer exists, or can even be designed, that solves all optimization problems. Some recently developed yet efficient optimization algorithms are the grey wolf optimizer (GWO) [34], monarch butterfly optimization (MBO) [41], the slime mould algorithm (SMA) [27], the moth search algorithm (MSA) [40], hunger games search (HGS) [46], the Runge–Kutta optimizer (RUN) [1], and Harris hawks optimization (HHO) [15].

The concept of evolution in nature has been mimicked by many researchers to develop optimization algorithms that can solve real-world problems where conventional optimization approaches fail. The harmony search algorithm (HSA) is a well-known evolutionary algorithm developed by Geem et al. [11], inspired by the music improvisation process. In this algorithm, the population of search agents is referred to as the harmony memory and a candidate solution is referred to as a harmony. The HSA has a simple implementation, involving memory consideration, pitch adjustment, and a random process for generating new harmonies. In the literature, the HSA has shown impressive performance in terms of solution quality and convergence rate on several benchmarks and real-life problems [38, 42]. In addition to these benefits, the HSA involves relatively few and simple mathematical equations, which utilize all the existing harmonies while producing a new harmony.

However, the HSA confronts several challenging and serious issues. One issue is the tuning of its control parameters, which remarkably affects its search performance. The second and main issue is its proneness to local optima on multimodal problems during the search procedure, caused by an inappropriate balance between exploitation (intensification) and exploration (diversification). Exploration, or diversification, is the process of discovering new and promising regions of the search space, while exploitation, or intensification, refers to the process of extracting useful information from the regions already discovered.

To overcome the issue of getting trapped at local optima and to minimize the effort of tuning the parameters of the search procedure, the present article proposes a comparatively efficient alternative variant of the HSA, called the modified harmony search algorithm (MHSA). In this proposed variant, non-linear functions are used to update the parameters HMCR and PAR. The parameter HMCR is designed based on the dimension of the problem, and the parameter PAR is chosen as a decreasing exponential function of the iterations/function evaluations. In the MHSA, both the pitch adjustment process and the process of generating a new random harmony are modified. The proposed scheme for generating a harmony, which replaces the concept of bandwidth, explores the search space from a wider range down to a narrower one. Opposition-based learning and random generation using the uniform distribution are used to generate a new harmony when the HMCR test rejects the use of the harmony memory. This randomization propagates the search into partially opposite regions of the search space. It can be noticed from the framework of the proposed variant that it does not destroy the original structure of the algorithm. The structure has deliberately been kept simple, because practitioners are often not expert programmers and their aim is to apply a simple and efficient algorithm to their optimization tasks [7]. Overall, the major contributions of the proposed study can be summarized as follows:

  • The modified HSA is proposed by adopting a non-linear nature for the parameters HMCR and PAR, which avoids the tedious task of tuning them and maintains the balance between exploitation and exploration during the search process.

  • The pitch adjustment process is modified by drawing on the search mechanism of the GWO. This integration effectively improves the harmony by providing a balanced transition from exploration to exploitation.

  • Finally, the concept of opposite numbers is used to generate an opposite harmony, which explores a broader area of the search space and helps to speed up the convergence rate.

The rest of the paper is organized as follows: Sect. 2 summarizes the conventional HSA and reviews some important developments of the HSA. In Sect. 3, a new modified variant of the HSA, called MHSA, is presented. Section 4 analyzes the performance of the proposed MHSA and compares it with existing variants of the HSA and other metaheuristic optimization algorithms. This section also analyzes the convergence behaviour of the MHSA and its sensitivity to the harmony memory size. Furthermore, some real structural engineering design problems are solved using the proposed MHSA. Finally, the present work concludes in Sect. 5 with some future research directions.

2 Preliminaries

2.1 Harmony search algorithm

In this section, the basic version of the HS algorithm is introduced; readers may find further details in Geem et al. [11]. Similar to other evolutionary algorithms, the HS algorithm is a population-based stochastic algorithm, which involves a simple strategy for evolving the candidate solutions. Its search procedure starts with the initialization of the harmony memory (HM) using Eq. (1)

$$\begin{aligned} \mathrm{HM}_{i,j}= \mathrm{HM}_{j}^{\min }+\mathrm{rand}(0,1)\times (\mathrm{HM}_{j}^{\max }-\mathrm{HM}_{j}^{\min }). \end{aligned}$$
(1)

Here, \(\mathrm{rand}(0,1)\) is a uniformly distributed random number from the interval (0, 1), and \(\mathrm{HM}_{j}^{\max }\) and \(\mathrm{HM}_{j}^{\min }\) are the upper and lower boundary limits for the \(j{\mathrm{th}}\) component of a harmony memory vector. The index i runs over the harmony memory size (HMS) and j runs over the dimension (d) of the problem, i.e., the number of components in any harmony: \(i \in \{1,2,\ldots ,\mathrm{HMS}\}\) and \(j \in \{1,2,\ldots ,d\}\).

After the initialization of the harmony memory, the HS executes its search procedure iteratively. The search operators that play a major role in the search procedure are memory consideration, pitch adjustment, and the random process of generating a harmony. In each iteration of the HSA, a new harmony is generated by performing these three operations. This newly generated harmony replaces the worst harmony of the memory if its fitness is better; otherwise, it is discarded. This procedure of generating a new harmony is repeated until the termination criterion is met or the maximum number of iterations is reached.

During the generation of a new harmony, the memory consideration operation is executed with a probability given by the harmony memory considering (accepting) rate (HMCR), while the random process of generating the harmony has probability \(1-\mathrm{HMCR}\). After the memory consideration process, the pitch adjustment process is performed with a probability called the pitch adjustment rate (PAR). In detail, these processes are described as follows:

In the memory consideration process, a random harmony from the current harmony memory is selected as follows:

$$\begin{aligned} \mathrm{HM}_{\mathrm{rand},j}= \mathrm{HM}_{r_1,j}, \quad j=1,2,\ldots ,d, \end{aligned}$$
(2)

where \(r_1\) is an integer selected randomly from \([1,\mathrm{HMS}]\). During the process of pitch adjustment, the randomly selected harmony component is adjusted as follows:

$$\begin{aligned} u_{j}=\mathrm{HM}_{\mathrm{rand},j}+\mathrm{rand}(-1,1)\times \mathrm{BW}, \quad j=1,2,\ldots ,d, \end{aligned}$$
(3)

where the variable BW, known as the bandwidth, determines the step size taken during the search procedure, and \(\mathrm{rand}(-1,1)\) is a random number drawn from the interval \((-1,1)\).

In the random process of generating the harmony, a new harmony is generated randomly within the search space as follows:

$$\begin{aligned} u_{j}= u_{j}^{\min }+\mathrm{rand}(0,1)\times (u_{j}^{\max }-u_{j}^{\min }), \end{aligned}$$
(4)

where \(u_{j}^{\min }\) and \(u_{j}^{\max }\) are the allowed lower and upper limits for the \(j{\mathrm{th}}\) component of the harmony u.

The pseudo-code of the conventional HSA based on the above description is provided in Algorithm 1.

Algorithm 1 Pseudo-code of the conventional HSA
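For concreteness, the procedure above can also be sketched in a few lines of Python. The following is a minimal illustrative implementation of Eqs. (1)–(4) (function and variable names are ours), not reference code:

```python
import numpy as np

def harmony_search(f, lb, ub, hms=5, hmcr=0.9, par=0.3, bw=0.01, max_iter=10000):
    """Minimal sketch of the conventional HSA; f is minimized over [lb, ub]."""
    d = len(lb)
    hm = lb + np.random.rand(hms, d) * (ub - lb)      # Eq. (1): initialization
    fit = np.array([f(x) for x in hm])

    for _ in range(max_iter):
        u = np.empty(d)
        for j in range(d):
            if np.random.rand() < hmcr:
                u[j] = hm[np.random.randint(hms), j]  # Eq. (2): memory consideration
                if np.random.rand() < par:
                    u[j] += np.random.uniform(-1, 1) * bw  # Eq. (3): pitch adjustment
            else:
                u[j] = lb[j] + np.random.rand() * (ub[j] - lb[j])  # Eq. (4)
        u = np.clip(u, lb, ub)

        fu = f(u)
        worst = np.argmax(fit)
        if fu < fit[worst]:        # replace the worst harmony if improved
            hm[worst], fit[worst] = u, fu
    return hm[np.argmin(fit)], fit.min()
```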

2.2 Previous work

The HSA has gained wide attention from researchers due to its simplicity, fast computation, and efficiency, and it has been used widely in various fields [2, 5, 37]. However, similar to other metaheuristics, the HSA also suffers from getting trapped at local optima during the search procedure, and therefore, many researchers have attempted to improve its search efficiency.

2.2.1 Fine tuning of control parameters

It was clearly explained by the developers of the HSA [11] that the parameter HMCR supports exploration, while the parameters PAR and BW help in exploiting the search space. Therefore, these parameters are crucial to the performance of the HSA. Mahdavi et al. [32] modified the search mechanism of the HSA with a new formulation of the parameters PAR and BW, given by

$$\begin{aligned} \mathrm{PAR}_{t}= \, & {} \mathrm{PAR}^{\min }+\frac{t}{T} \times (\mathrm{PAR}^{\max }-\mathrm{PAR}^{\min }), \end{aligned}$$
(5)
$$\begin{aligned} \mathrm{BW}_{t}= \, & {} \mathrm{BW}^{\max }\times e^{\frac{t}{T} \ln \big (\frac{\mathrm{BW}^{\min }}{\mathrm{BW}^{\max }}\big )}, \end{aligned}$$
(6)

where t indicates the current iteration and T is the maximum number of iterations. \(\mathrm{PAR}^{\min }\) and \(\mathrm{PAR}^{\max }\) are the minimum and maximum values of the parameter \(\mathrm{PAR}\), and \(\mathrm{BW}^{\min }\) and \(\mathrm{BW}^{\max }\) are the minimum and maximum values of the parameter BW, respectively. Kumar et al. [26] introduced both linear and non-linear settings for the parameters HMCR and PAR as follows:

$$\begin{aligned} \mathrm{HMCR}_{t}^{\mathrm{linear}}= \, & {} \mathrm{HMCR}^{\min }+\frac{t}{T}\times (\mathrm{HMCR}^{\max }-\mathrm{HMCR}^{\min }), \end{aligned}$$
(7)
$$\begin{aligned} \mathrm{HMCR}_{t}^{{\text {non-linear}}}= \, & {} \mathrm{HMCR}^{\min }\times e^{-\frac{t}{T}\times \ln \Big ( \frac{\mathrm{HMCR}^{\min }}{\mathrm{HMCR}^{\max }}\Big )}, \end{aligned}$$
(8)
$$\begin{aligned} \mathrm{PAR}_{t}^{\mathrm{linear}}= \, & {} \mathrm{PAR}^{\min }+\frac{T-t}{T}\times (\mathrm{PAR}^{\max }-\mathrm{PAR}^{\min }), \end{aligned}$$
(9)
$$\begin{aligned} \mathrm{PAR}_{t}^{{\text {non-linear}}}= \, & {} \mathrm{PAR}^{\min }\times e^{\frac{t}{T}\times \ln \Big ( \frac{\mathrm{PAR}^{\min }}{\mathrm{PAR}^{\max }}\Big )}, \end{aligned}$$
(10)

where \(\mathrm{HMCR}^{\min }\) and \(\mathrm{HMCR}^{\max }\) are the minimum and maximum values of the parameter HMCR. In Khalili et al. [25], the parameter HMCR is updated using Eq. (11)

$$\begin{aligned} \mathrm{HMCR}_{t}=0.9+0.2 \sqrt{\frac{t-1}{T-1}\times \Big (1-\frac{t-1}{T-1} \Big )}. \end{aligned}$$
(11)
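These schedules translate directly into code. The sketch below (our notation; for Eq. (11), t is assumed to run from 1 to T) collects Eqs. (5)–(11):

```python
import numpy as np

def par_mahdavi(t, T, par_min, par_max):
    # Eq. (5): linearly increasing PAR
    return par_min + (t / T) * (par_max - par_min)

def bw_mahdavi(t, T, bw_min, bw_max):
    # Eq. (6): exponentially decreasing bandwidth
    return bw_max * np.exp((t / T) * np.log(bw_min / bw_max))

def hmcr_kumar(t, T, hmcr_min, hmcr_max, linear=True):
    # Eqs. (7)-(8): linear and non-linear increasing HMCR
    if linear:
        return hmcr_min + (t / T) * (hmcr_max - hmcr_min)
    return hmcr_min * np.exp(-(t / T) * np.log(hmcr_min / hmcr_max))

def par_kumar(t, T, par_min, par_max, linear=True):
    # Eqs. (9)-(10): decreasing PAR, as given in the equations above
    if linear:
        return par_min + ((T - t) / T) * (par_max - par_min)
    return par_min * np.exp((t / T) * np.log(par_min / par_max))

def hmcr_khalili(t, T):
    # Eq. (11): HMCR rising from 0.9 to 1.0 at mid-run and back
    s = (t - 1) / (T - 1)
    return 0.9 + 0.2 * np.sqrt(s * (1 - s))
```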

Several other researchers have also tried to improve the search performance of the HSA by fine-tuning these control parameters [3, 14, 16, 18–20, 23, 29, 36]. Luo et al. [30] modified the search strategy of the HSA by introducing a variant in which self-adaptive and parameter-free approaches are used to set the parameters HMCR and PAR. To improvise the harmony in the pitch adjustment process, the bandwidth parameter is replaced by an additional exponential term with a constant factor. The random process of generating the harmony is also modified, and the Gaussian distribution is used to generate a new harmony component. This work motivated us to develop a new, alternative variant of the HSA that attains better solution quality and convergence rate and is sufficient to prevent solutions from being drawn towards local optima.

2.2.2 Modifying the pitch adjustment operation

To improvise the harmony, several variants that modify the control parameter BW have been proposed in the literature and show improvements on global optimization problems. Chakraborty et al. [6] introduced a mutation strategy from the DE algorithm into the pitch adjustment operation. In this variant, the harmony is improvised using the following equation:

$$\begin{aligned} u_j=\mathrm{HM}_{r_1,j}+\mathrm{rand}(0,1)\times (\mathrm{HM}_{r_2,j}-\mathrm{HM}_{r_3,j}), \end{aligned}$$
(12)

where \(\mathrm{HM}_{r_1,j}\), \(\mathrm{HM}_{r_2,j}\), and \(\mathrm{HM}_{r_3,j}\) are the \(j{\mathrm{th}}\) components of randomly selected, mutually different harmonies from the current harmony memory. Although this strategy enhances the exploration ability, it sometimes leads to stagnation and long perturbations. Inspired by this mutation scheme, Guo et al. [12] introduced the DE/Best/1 scheme, given by Eq. (13), to improvise the harmony

$$\begin{aligned} u_j=\mathrm{HM}_{\mathrm{best},j}+\mathrm{rand}(0,1)\times (\mathrm{HM}_{r_2,j}-\mathrm{HM}_{r_3,j}), \end{aligned}$$
(13)

where \(\mathrm{HM}_{\mathrm{best},j}\) is the \(j{\mathrm{th}}\) component of the best harmony in the current harmony memory. This adjustment weakens the diversification of the search when the perturbation is very low or the algorithm stagnates at a local optimum. In some cases, this greedy direction of improvisation also leads to premature convergence. El-Abd [9] proposed a new adjustment to improvise the harmony. In this scheme, a transition from exploration to exploitation is maintained by improvising the harmony initially around a random harmony and later around the best harmony. The scheme is given by Eqs. (14) and (15)

$$\begin{aligned} u_j= \, & {} \mathrm{HM}_{\mathrm{rand},j}+\mathrm{Gaussian}(0,1)\times \mathrm{BW}, \end{aligned}$$
(14)
$$\begin{aligned} u_j= \, & {} \mathrm{HM}_{\mathrm{best},j}+\mathrm{rand}(-1,1)\times \mathrm{BW}, \end{aligned}$$
(15)

where \(\mathrm{Gaussian}(0,1)\) is a Gaussian distributed random number with mean 0 and variance 1, and \(\mathrm{rand}(-1,1)\) is a uniformly distributed random number from the interval \((-1,1)\). In this algorithm, the parameter PAR is decreased linearly and the parameter BW is decreased exponentially over the course of iterations. In Zou et al. [48], an improved HSA inspired by the PSO mechanism is proposed. The parameters PAR and HMCR are excluded from this variant, and a genetic mutation probability \((p_m)\) is introduced. To improvise the harmony, the global best and the worst harmonies are utilized with the help of Eq. (16)

$$\begin{aligned} u_j=\mathrm{HM}_{\mathrm{worst},j}+\mathrm{rand}(0,1)\times (\mathrm{HM}_{R,j}-\mathrm{HM}_{\mathrm{worst},j}), \end{aligned}$$
(16)

where \(\mathrm{HM}_{R,j}=(2\mathrm{HM}_{\mathrm{best},j}-\mathrm{HM}_{\mathrm{worst},j})\).

In Wang and Huang [39], a self-adaptive approach is proposed to modify the pitch adjustment process. In this approach, the bandwidth parameter is replaced with the help of the maximum and minimum values, namely \(\mathrm{HM}_{j}^{\max }\) and \(\mathrm{HM}_{j}^{\min }\), of the harmony variables. This modification is presented in Eqs. (17) and (18)

$$\begin{aligned} u_j= \, & {} \mathrm{HM}_{\mathrm{rand},j}+\mathrm{rand}(0,1)\times (\mathrm{HM}_{j}^{\max }-\mathrm{HM}_{\mathrm{rand},j}), \end{aligned}$$
(17)
$$\begin{aligned} u_j= \, & {} \mathrm{HM}_{\mathrm{rand},j}-\mathrm{rand}(0,1)\times (\mathrm{HM}_{\mathrm{rand},j}-\mathrm{HM}_{j}^{\min }). \end{aligned}$$
(18)

Gao et al. [10] used the concept of opposition-based learning at the initialization of the harmony memory to reach far points of the solution space, which may have greater fitness. The bandwidth parameter is also modified in this variant based on the following equation:

$$\begin{aligned} \mathrm{BW}_{j}=\sqrt{\gamma \cdot \bar{x_{j}}}, \end{aligned}$$
(19)

where \(\bar{x_{j}}=\frac{1}{\mathrm{HMS}} \sum _{k=1}^{\mathrm{HMS}} x_{k,j}\).

In addition to these variants, some other variants of the HSA have been proposed in the literature, where not only the algorithm parameters but also the search mechanism are modified. For example, Nehdi et al. [35] introduced a variant of the HSA called dynamical self-adjusted harmony search (DSAHS), which dynamically adapts the algorithm parameters HMCR, PAR, and BW as follows:

$$\begin{aligned} \mathrm{HMCR}_{t}= \, & {} 0.95+0.1 \times \beta _t \sqrt{\frac{t}{T}}, \end{aligned}$$
(20)
$$\begin{aligned} \mathrm{PAR}_{t}= \, & {} 0.5+0.4 \times (1-\beta _t), \end{aligned}$$
(21)
$$\begin{aligned} \mathrm{BW}_{j,t}= & {} \frac{\mathrm{HM}_j^{M}-\mathrm{HM}_j^{m}+0.001}{10} \times \exp \Big ( -10 \frac{t}{T}\Big ), \end{aligned}$$
(22)

where \(\mathrm{HM}_j^{M}\) and \(\mathrm{HM}_j^{m}\) are the maximum and minimum values of the \(j{\mathrm{th}}\) unknown coefficient. The value of \(\beta _t\) is determined as follows:

$$\begin{aligned} \beta _t=\sqrt{1-\frac{t}{T}}. \end{aligned}$$
(23)

Based on the above modified parameters, the new harmony memory for unknown coefficients is determined as follows:

$$\begin{aligned} \hat{\mathrm{HM}}_{i,j}=\left\{ \begin{array}{ll} \mathrm{HM}_{i,j}+(2\times \mathrm{rand}(0,1)-1)\times \beta _t \times (\mathrm{HM}^{M}-\mathrm{HM}^{m}) &{} {\text {with probability }}\mathrm{HMCR}_t \\ \mathrm{HM}_{j}^{\min }+\mathrm{rand}(0,1)\times ({\mathrm{HM}_{j}^{\max }-\mathrm{HM}_{j}^{\min }}) &{} {\text {with probability }}(1-\mathrm{HMCR}_t). \end{array}\right. \end{aligned}$$
(24)

After that, the new harmony is adjusted using the following equation:

$$\begin{aligned} \hat{\mathrm{HM}}_{i,j}=\left\{ \begin{array}{ll} \hat{\mathrm{HM}_{i,j}}+(2\times \mathrm{rand}(0,1)-1)\times \beta _t \times \mathrm{BW}_{j,t} &{} {\text {with probability }}\mathrm{PAR}_t \\ \hat{\mathrm{HM}_{i,j}} &{} {\text {with probability }}(1-\mathrm{PAR}_t). \end{array}\right. \end{aligned}$$
(25)

In Keshtegar and Sadeq [24], the Gaussian global-best harmony search algorithm (GGHS) is introduced to deal with complex optimization problems. This algorithm is an enhanced version of El-Abd [9], where Gaussian distributed random numbers are used to update the harmony. In the GGHS, the harmony is updated in two stages. In the first stage, the new harmony is obtained using the following equation:

$$\begin{aligned} \mathrm{HM}_{i,j}^{\mathrm{new}}=\left\{ \begin{array}{ll} \mathrm{HM}_{i,j}+\mathrm{Gaussian} (0,1)\times \mathrm{BW}_{j,t} & {} {\text {with probability }}\mathrm{HMCR}_t \\ \mathrm{HM}_{j}^{\min }+\mathrm{rand}(0,1)\times (\mathrm{HM}_{j}^{\max }-\mathrm{HM}_{j}^{\min }) & {} {\text {with probability }}(1-\mathrm{HMCR}_t). \end{array}\right. \end{aligned}$$
(26)

where \(\mathrm{Gaussian}(0,1)\) is a Gaussian distributed random number with mean 0 and variance 1. The parameter bandwidth \(\mathrm{BW}_{j,t}\) is updated as follows:

$$\begin{aligned} \mathrm{BW}_{j,t}=\frac{|\mathrm{HM}_{j}^{M}-\mathrm{HM}_{j}^{m}+0.0001|}{10} \times \exp \Big (-\frac{10t}{T} \Big ), \end{aligned}$$
(27)

where \(\mathrm{HM}_{j}^{M}\) and \(\mathrm{HM}_{j}^{m}\) are the maximum and minimum values of the memory component \(\mathrm{HM}_j\). In the second stage of the harmony update process, the best harmony is adjusted to obtain new harmonies as follows:

$$\begin{aligned} \mathrm{HM}_{i,j}^{\mathrm{new}}=\left\{ \begin{array}{ll} \mathrm{HM}_{\mathrm{best},j}+\gamma _t \times \mathrm{Gaussian} (0,\beta _j) & {} {\text {with probability }}\mathrm{PAR}_t \\ \mathrm{HM}_{i,j} & {} {\text {with probability }}(1-\mathrm{PAR}_t). \end{array}\right. \end{aligned}$$
(28)

where \(\mathrm{Gaussian}(0,\beta _j)\) is a Gaussian distributed random number with mean 0 and variance \(\beta _j\). The values of \(\beta _j\) and \(\gamma _t\) are obtained as follows:

$$\begin{aligned} \beta _j= \, & {} \delta \times \Big (1-\frac{t}{T} \Big )^{\delta }, \end{aligned}$$
(29)
$$\begin{aligned} \delta= \, & {} |\mathrm{HM}_j^{M}-\mathrm{HM}_j^{m}+0.0001|, \end{aligned}$$
(30)
$$\begin{aligned} \gamma _t= & {} \sqrt{1-\frac{t}{T}}. \end{aligned}$$
(31)

Analyzing all these variants of the HSA, an alternative variant is proposed in the present study with the aim of achieving a better transition from exploration to exploitation with fewer parameters. In our approach, users need not tune any control parameter of the HSA except the step-size control parameter of the newly proposed pitch adjustment scheme; moreover, our experiments attempt to provide a transition scheme good enough that no extra effort is demanded of the user. This makes the algorithm very convenient to use for optimization purposes.

3 Proposed modified harmony search algorithm (MHSA)

The search strategy of the conventional HSA is driven by three processes, namely harmony memory consideration, pitch adjustment, and random generation of a new harmony. In our proposal, each process is modified, either by making it parameter independent or by providing a new, more efficient search procedure.

3.1 Parameter-setting-free control parameters

First, the harmony memory considering rate (HMCR) is adapted as a normal random number, as mathematically stated in Eq. (32) [30]

$$\begin{aligned} \mathrm{HMCR}_t=N\Big (\frac{d}{1+d},\frac{1}{1+d}\Big ), \end{aligned}$$
(32)

where \(N(\mu ,\sigma ^2)\) indicates the Gaussian distribution with mean \(\mu\) and variance \(\sigma ^2\). During the search procedure, if the value of the HMCR exceeds the range [0, 1], it is truncated. The dynamic change of this parameter is visualized in Fig. 1 for different iterations and for dimensions 10, 30, and 50, where the sampled values of the parameter HMCR are shown. From this figure, it can be seen that, as the dimension increases, the values of the parameter approach 1. One of the main advantages of this setting is that it removes the burden of tuning the parameter HMCR. It also follows the usual suggestion of larger values for this parameter, so that there is a high chance of improvising a good harmony from the memory. On the other hand, random generation of a harmony still occurs whenever the uniformly distributed random number occasionally exceeds the sampled value of HMCR.
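A sampler for this setting is essentially a one-liner. The sketch below (our code) reads the second parameter of Eq. (32) as the variance and applies the truncation mentioned above:

```python
import numpy as np

def sample_hmcr(d):
    # Eq. (32): HMCR ~ N(d/(1+d), 1/(1+d)); the second parameter is
    # interpreted as the variance, so the standard deviation is sqrt(1/(1+d))
    hmcr = np.random.normal(d / (1 + d), np.sqrt(1.0 / (1 + d)))
    return np.clip(hmcr, 0.0, 1.0)   # truncate to [0, 1]
```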

Fig. 1 Distribution of the parameter HMCR for dimensions 10, 30, and 50, respectively

In the second modification of the MHSA, the parameter PAR is updated using a non-linear decreasing exponential function, given by

$$\begin{aligned} \mathrm{PAR}_t= \exp \Big (\frac{-t^2}{(k\cdot T)^2}\Big ), \end{aligned}$$
(33)

where k is a parameter that decides how many iterations are devoted to exploration and how many to exploitation. In our algorithm, it is fixed to 0.6 to balance exploration and exploitation. The proposed schedule is visualized in Fig. 2, where it is compared with a linear adaptation. The non-linear PAR does not change its value as abruptly as the linear one; its slower rate of change reflects the non-linear nature of the search process in the MHSA. In the first half of the iterations, the value of PAR is higher than the linear PAR, which allows comparatively better diversification. After half of the iterations, the value of PAR drops below the linear one, which allows more intensification of the discovered promising harmonies. Moreover, at the end of the maximum number of iterations, the proposed PAR does not approach zero, unlike the linear PAR; this choice keeps improvisation of the harmony possible whenever the HMCR allows it.
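In code, the schedule of Eq. (33) reads as follows (a small sketch; the default k = 0.6 follows the setting above):

```python
import numpy as np

def par_mhsa(t, T, k=0.6):
    # Eq. (33): non-linear, slowly decreasing PAR; k sets the share of
    # iterations that favour exploration over exploitation
    return np.exp(-t**2 / (k * T)**2)
```

With k = 0.6, the schedule starts at 1 and ends near \(e^{-1/0.36}\approx 0.06\) at t = T, so improvisation remains possible until the very end, in line with the discussion above.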

Fig. 2 Proposed non-linear PAR

The proposed PAR decreases over the course of iterations, while the HMCR is sampled adaptively from a dimension-dependent distribution; together, these settings follow the realistic, dynamic nature of metaheuristic algorithms.

3.2 Modification in the pitch adjustment process and random generation of harmony

In the third modification of the MHSA, the pitch adjustment process is modified as given by Eq. (34)

$$\begin{aligned} u_{j}=\mathrm{HM}_{\mathrm{rand},j}+A_{j}\times |C_{j}\cdot \mathrm{HM}_{\mathrm{best},j}-\mathrm{HM}_{\mathrm{rand},j}|, \end{aligned}$$
(34)

where \(\mathrm{HM}_{\mathrm{rand},j}\) is the \(j{\mathrm{th}}\) component of the harmony selected by the harmony memory consideration process. The scalars \(A_{j}\) and \(C_{j}\) are defined as follows:

$$\begin{aligned} A_{j}= \, & {} 2\cdot \alpha \cdot \mathrm{rand}(0,1)-\alpha , \end{aligned}$$
(35)
$$\begin{aligned} C_{j}= \, & {} 2\cdot \mathrm{rand}(0,1), \end{aligned}$$
(36)

where \(\mathrm{rand}(0,1)\) is a random number selected from the interval (0,1) and \(\alpha\) is a parameter that decreases linearly to provide an appropriate transition from exploration to exploitation. Besides this parameter, the coefficient \(C_{j}\) also contributes to exploration and exploitation during the search. One of its main advantages is that it diversifies the search even when the parameter \(A_{j}\) fails to do so. This new pitch adjustment equation is inspired by the encircling behaviour of grey wolves in nature [34]. This mechanism has demonstrated an outstanding ability to explore, to exploit, and to maintain an appropriate balance between the two.
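A vectorized sketch of Eqs. (34)–(36) is given below (our code; following the GWO, \(\alpha\) is assumed to decrease linearly from 2 to 0 over the iterations):

```python
import numpy as np

def pitch_adjust(hm_rand, hm_best, alpha):
    # Eqs. (34)-(36), applied component-wise to whole harmony vectors
    d = hm_rand.shape[0]
    A = 2 * alpha * np.random.rand(d) - alpha           # Eq. (35)
    C = 2 * np.random.rand(d)                           # Eq. (36)
    return hm_rand + A * np.abs(C * hm_best - hm_rand)  # Eq. (34)
```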

In the fourth and last modification, opposite numbers are used to generate an opposite harmony, which helps in generating a new random harmony. The advantage of this opposite harmony is that it pushes the search far from the current search region and discovers more promising regions in directions opposite to the current harmonies. In this randomization, a new harmony is generated based on a hybridization of opposite harmonies and the conventional random process of the HSA, as follows:

$$\begin{aligned} u_{j} = \left\{ \begin{array}{ll} u_{j}^{\min }+u_{j}^{\max }-u_{\mathrm{rand},j} & {} \mathrm{rand}(0,1) < 0.5 \\ u_{j}^{\min }+\mathrm{rand}(0,1)\times (u_{j}^{\max }-u_{j}^{\min }) & {} \mathrm{otherwise}, \end{array}\right. \end{aligned}$$
(37)

where \(u_{j}^{\max }\) and \(u_{j}^{\min }\) are the allowed upper and lower bounds for the \(j{\mathrm{th}}\) component of the harmony u, and \(u_{\mathrm{rand},j}\) is the \(j{\mathrm{th}}\) component of a random harmony selected from the current harmony memory.
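The following sketch (our code; the bounds are NumPy arrays) applies Eq. (37) component-wise, drawing one random harmony from the memory:

```python
import numpy as np

def random_harmony(lb, ub, hm):
    # Eq. (37): for each component, take the opposite of the randomly chosen
    # memory component with probability 0.5, otherwise sample uniformly
    d = len(lb)
    u_rand = hm[np.random.randint(hm.shape[0])]   # a random harmony from the memory
    return np.where(np.random.rand(d) < 0.5,
                    lb + ub - u_rand,                      # opposite component
                    lb + np.random.rand(d) * (ub - lb))    # uniform component
```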

In this way, the proposed MHSA updates the harmony. First, it initializes the harmony memory and the parameters; then, it repeats the process of improvising the harmony and memorizing the best harmony until the maximum number of iterations is reached or the termination criteria are fulfilled. The complete search procedure of the proposed MHSA is given in Algorithm 2, and the corresponding flowchart is provided in Fig. 3.

From this pseudo-code, the complexity of the proposed MHSA can be calculated easily. The complexity of the improvisation phase is O(d), and that of the updating process is \(O(\mathrm{HMS})\). Hence, the overall complexity equals \(O\big ((d+\mathrm{HMS})\cdot T\big )\). This matches the complexity of the conventional HSA, because the MHSA does not need any extra or complicated process during the search.
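Putting the pieces together, the compact Python sketch below reflects our reading of Algorithm 2; it is illustrative rather than a reference implementation (the linear decay of \(\alpha\) from 2 to 0 is an assumption borrowed from the GWO):

```python
import numpy as np

def mhsa(f, lb, ub, hms=5, k=0.6, max_iter=10000):
    """Illustrative sketch of the proposed MHSA; f is minimized over [lb, ub]."""
    d = len(lb)
    hm = lb + np.random.rand(hms, d) * (ub - lb)     # Eq. (1)
    fit = np.array([f(x) for x in hm])

    for t in range(1, max_iter + 1):
        alpha = 2 * (1 - t / max_iter)               # assumed linear decay, as in the GWO
        hmcr = np.clip(np.random.normal(d / (1 + d), np.sqrt(1 / (1 + d))), 0, 1)  # Eq. (32)
        par = np.exp(-t**2 / (k * max_iter)**2)      # Eq. (33)
        best = hm[np.argmin(fit)]

        u = np.empty(d)
        for j in range(d):
            if np.random.rand() < hmcr:
                u[j] = hm[np.random.randint(hms), j]     # memory consideration
                if np.random.rand() < par:
                    A = 2 * alpha * np.random.rand() - alpha       # Eq. (35)
                    C = 2 * np.random.rand()                       # Eq. (36)
                    u[j] += A * abs(C * best[j] - u[j])            # Eq. (34)
            elif np.random.rand() < 0.5:
                u[j] = lb[j] + ub[j] - hm[np.random.randint(hms), j]  # Eq. (37), opposite
            else:
                u[j] = lb[j] + np.random.rand() * (ub[j] - lb[j])     # Eq. (37), uniform
        u = np.clip(u, lb, ub)

        fu = f(u)
        worst = np.argmax(fit)
        if fu < fit[worst]:                          # replace the worst harmony
            hm[worst], fit[worst] = u, fu
    return hm[np.argmin(fit)], fit.min()
```

For instance, `mhsa(lambda x: np.sum(x**2), -100*np.ones(30), 100*np.ones(30))` would minimize a 30-dimensional sphere function under this sketch.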

Algorithm 2 Pseudo-code of the proposed MHSA
Fig. 3 Flowchart of the proposed MHSA

4 Experimental results

In this section, the performance of the proposed MHSA is evaluated and analyzed on a set of 23 benchmark test functions, which are unimodal and multimodal in nature. This variety of difficulty levels helps to analyze the exploration and exploitation abilities of the proposed algorithm. The benchmark test problems are listed in Tables 1 and 2 together with their search ranges and optimal solutions. In this study, the performance of the MHSA is compared with the conventional HSA, variants of the HSA, and some other algorithms. Hence, the comparison is divided into two parts. In the first part, the MHSA is compared with the conventional HSA on 10-, 30-, and 50-dimensional problems. In the second part, the MHSA is compared with variants of the HSA developed in the literature and with other algorithms.

Table 1 Unimodal benchmark functions
Table 2 Multimodal benchmark functions

4.1 Comparison of the MHSA with conventional HSA

This section compares the results of the MHSA with the conventional HSA on the 10-, 30-, and 50-dimensional problems given in Tables 1 and 2. The results are obtained by conducting 30 independent trials of each algorithm with \(10^4\times d\) function evaluations. The harmony memory size is fixed to 5 for both the proposed MHSA and the conventional HSA. In this experiment, the mean and standard deviation of the objective function values obtained over the 30 runs are calculated and presented in Tables 3 and 4, where the better results are highlighted in bold face.
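Such a protocol can be scripted in a few lines. The sketch below (the names and the one-trial interface are ours) uses SciPy's rank-sum test for the significance check used throughout this section:

```python
import numpy as np
from scipy.stats import ranksums

def compare(run_a, run_b, trials=30, alpha=0.05):
    # run_a / run_b each perform one independent trial of an algorithm and
    # return the final objective value (hypothetical interface)
    fa = np.array([run_a() for _ in range(trials)])
    fb = np.array([run_b() for _ in range(trials)])
    _, p = ranksums(fa, fb)                  # Wilcoxon rank-sum test
    print(f"A: mean={fa.mean():.3e}, std={fa.std():.3e}")
    print(f"B: mean={fb.mean():.3e}, std={fb.std():.3e}")
    print(f"p={p:.3e} ->", "significant" if p < alpha else "not significant")
```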

In Table 3, the results are shown for the unimodal problems, and in Table 4, the results on the multimodal problems are presented. These results are also compared by the Wilcoxon rank sum test at the \(5\%\) significance level to analyze the significance of the difference between the conventional HSA and the proposed MHSA. These statistical results are indicated by ‘stat. out.’, which provides the p value and the outcome of the comparison. The outcomes ‘\(+/=/-\)’ indicate that the proposed MHSA is significantly better than, equal to, or worse than the conventional HSA. Comparing the results on the unimodal test functions, it can be seen that, on all problems and all dimensions, the proposed MHSA significantly outperforms the conventional HSA except for the 10-dimensional F6. On this problem, the proposed MHSA provides better mean and standard deviation values of the objective function than the conventional HSA, but the improvement is not statistically significant. The low standard deviations show the robustness of the MHSA's results on all problems. Moreover, on functions F1, F2, and F3, the MHSA is able to locate the optimum irrespective of the dimension, while the conventional HSA is unable to locate the optimum on even a single problem. On the multimodal problems F8, F10, F12, F14, F17, F18, F20, and F21, the proposed MHSA locates the global optimum, while the conventional HSA does so only for F21. These results demonstrate the better exploration ability of the MHSA compared to the conventional HSA, and the statistical outcomes confirm that on most problems the MHSA obtains significantly better results. As an example of the search history of the MHSA, Fig. 4 shows the harmonies of the HM for the 2-dimensional Rastrigin function (F12). Initially, the harmonies are diversified; after some iterations, they begin to converge towards the optimum; and by iteration 500, they all converge to the global optimum (0) of the problem.

The overall analysis concludes that the proposed MHSA improves both the exploitation and the exploration ability of the conventional HSA through its modified search mechanism. In some cases, such as problem F6, which is complex and has massive numbers of local optima, the algorithm is unable to reach the global optimum. Although its results are better than those of the conventional HSA, the proposed approaches are not sufficient to determine a near-optimal solution there. One reason may be the proposed pitch adjustment process, inspired by the encircling behavior of the GWO: in later iterations, the coefficient A is unable to explore the search space, and therefore, when the best solution is trapped at a local optimum, there is a high chance that the whole harmony memory gets stuck there. On some highly non-linear problems, this may affect the performance of the algorithm; in future research work, this shortcoming could be reduced using other evolutionary operators, such as mutation.

To further validate the capability of the proposed MHSA, the next subsection compares the MHSA with other variants of the HSA developed in the literature.

Fig. 4 Search history of the proposed MHSA while solving the Rastrigin problem \(\big (\mathrm{optima}: \blacksquare , \mathrm{HM}: {\Diamond } \big )\)

Table 3 Comparison of results on 10, 30 and 50-dimensional unimodal benchmark problems
Table 4 Comparison of results on 10, 30 and 50-dimensional multimodal benchmark problems

4.2 Comparison of the MHSA with other variants of the HSA

In this section, different variants of the HSA developed in the literature are used to compare the performance of the proposed MHSA. The comparison is performed on the same set of benchmark problems and with the same budget of function evaluations as fixed in the previous subsection. Table 5 reports the mean and standard deviation of the objective function values yielded by the MHSA and other variants of the HSA, namely the adaptive harmony search with best-based search strategy (ABHS) [12], enhanced self-adaptive global-best harmony search (ESGHS) [30], novel self-adaptive harmony search (NSHS) [29], parameter adaptive harmony search (PAHS) [26], and Gaussian global-best harmony search algorithm (GGHS) [24], on the 30-dimensional test problems. In the table, the better results are highlighted in bold face. The same table also presents the statistical outcomes obtained by applying the Wilcoxon rank sum test between the proposed MHSA and each competitive algorithm. The symbols ‘\(+/=/-\)’ demonstrate that the MHSA is better than, equal to, or worse than its competitor. Moreover, the bottom part of the table provides the rank sum and the overall rank of the algorithms to identify the best-performing algorithm. The rank sum is the sum, over all test problems, of the ranks obtained by ordering the objective function values in ascending order.

It can be seen from Table 5 that the proposed MHSA is significantly better than the ABHS on all test problems except F21, on which all the algorithms (MHSA, ABHS, ESGHS, NSHS, PAHS, and GGHS) reach the global optimal solution. The comparison between the ESGHS and the MHSA shows that the proposed MHSA provides better results on all problems except F7 and F16. The NSHS algorithm is better than the MHSA on F7 and F15. The comparison between the PAHS and the MHSA again verifies the superior search ability of the MHSA. The comparison with the GGHS shows that the proposed MHSA is better on all problems except F7, F11, F15, and F16. The statistical results also validate this improvement in the search strategy of the proposed MHSA relative to the other algorithms. The ranking of the algorithms shows that the proposed MHSA is the best performer among the ABHS, ESGHS, NSHS, PAHS, and GGHS; the methods following the MHSA are the GGHS, ESGHS, NSHS, ABHS, and PAHS, respectively.

To observe the impact of the proposed MHSA in solving the benchmark problems with dimensions 10 and 50, and to compare it with the other variants of the HSA, Figs. 5 and 6 are presented. The experiments are carried out with the same parameter settings as used for the 30-dimensional problems. In Figs. 5 and 6, the average error over the 23 problems is reported for each optimization method and shown as bars, which clearly demonstrate that the proposed MHSA is superior to all other variants of the HSA.

Fig. 5 Comparison of average error values between the proposed MHSA and other variants of the HSA for 10-dimensional problems

Fig. 6 Comparison of average error values between the proposed MHSA and other variants of the HSA for 50-dimensional problems

To analyze and compare the convergence rates of the ABHS, ESGHS, NSHS, PAHS, conventional HSA, and the proposed MHSA, convergence curves are plotted in Figs. 7 and 8 for the unimodal and multimodal benchmark problems, respectively. These variants are used for the convergence comparison because of their similar structure. In these figures, each chart corresponds to one test function used in the experiments; the horizontal axis represents the number of function evaluations and the vertical axis represents the best objective function value obtained so far. Figure 7 clearly demonstrates that, on most test functions, the ABHS, conventional HSA, PAHS, and NSHS algorithms exhibit similar search behavior. On problems F1 and F23, the convergence behavior of the ESGHS is better than that of the ABHS, NSHS, conventional HSA, and PAHS. On the other hand, on all problems, the convergence behavior of the proposed MHSA is better than that of all other variants of the HSA, which shows the better global search performance of the MHSA. On problems F1, F2, F17, and F20, the proposed MHSA shows an outstanding convergence rate and locates the optimum within \(1/5{\mathrm{th}}\) of the total number of function evaluations. Hence, the convergence analysis demonstrates the better convergence rate of the MHSA not only over the conventional HSA but also over the other variants, such as the ABHS, ESGHS, NSHS, and PAHS.

Table 5 Comparison of mean and standard deviation values obtained by the proposed MHSA and other variants of the HSA for 30-dimensional benchmark problems
Fig. 7 Convergence curves for unimodal test functions

Fig. 8 Convergence curves for multimodal test functions

4.3 Effect of the harmony memory size

To analyze the performance of the proposed MHSA under varying harmony memory sizes (HMS), results corresponding to harmony memory sizes 5, 10, 20, and 50 are calculated on the benchmark test problems given in Tables 1 and 2. This experiment is conducted by repeating the proposed MHSA 30 times independently on the 30-dimensional problems. The Wilcoxon rank sum test at the 5% significance level is used to assess the significance of the differences in results. The statistical outcomes, shown with the symbols ‘\(+/=/-\)’, indicate whether the MHSA with harmony memory size 5 is better than, equal to, or worse than the same algorithm with harmony memory size 10, 20, or 50. From Table 6, it can be observed that, on almost all test functions, the proposed MHSA with memory size 5 either performs equally or significantly outperforms the other memory sizes. The statistical outcomes show that increasing the harmony memory size of the MHSA degrades its solution accuracy on most test problems. The average rank and overall rank, calculated by sorting the mean objective function values, also demonstrate that the MHSA with harmony memory size 5 is superior, followed by the MHSA with memory sizes 10, 20, and 50, respectively.

Table 6 Results of the proposed MHSA with harmony memory sizes 5, 10, 20 and 50

4.4 Comparison of the MHSA with other metaheuristics

In this section, the performance of the MHSA is compared with some other metaheuristic algorithms: the sine cosine algorithm (SCA) [33], grey wolf optimizer (GWO) [34], comprehensive learning particle swarm optimization (CLPSO) [28, 31], gbest-guided artificial bee colony (GABC) [47], and covariance matrix adaptation evolution strategy (CMA-ES) [13]. The SCA and GWO are two relatively new metaheuristic algorithms: the SCA is inspired by the mathematical features of the sine and cosine trigonometric functions, while the GWO is inspired by the social and hunting behavior of grey wolves in nature. The original versions of these algorithms are used in this comparison. The GABC is an extended version of the artificial bee colony algorithm, in which the best solution obtained so far is used to guide the search procedure. The CLPSO is an improved version of the conventional particle swarm optimization (PSO), where the best positions of other particles are used to update the position of the current particle; this concept was introduced to enhance the exploration ability of the particles by learning from other particles and to prevent them from being drawn towards local optima during the search. The CMA-ES is an adaptive method of the evolution strategy family and is popular among researchers because of its search abilities.

The performance of the MHSA is compared with these algorithms in terms of the mean and standard deviation of the objective function values. The parameter settings of all the algorithms are presented in Table 7. The stopping criterion for each algorithm is fixed to \(10^4\times d\) function evaluations (FEV). All the algorithms are executed 30 times independently, and the mean and standard deviation of the objective function values are recorded in Table 8, where the best results are highlighted in bold face. The statistical analysis of the results is also performed to assess the significance of the differences, and the algorithms are ranked to select the best performer among the compared methods. From the table, it can be seen that the SCA performs very poorly compared to the MHSA, as it does not provide significantly better results on any problem. The CLPSO performs better than the MHSA only on F6, F7, and F15, and the GWO performs better than the MHSA on F4, F9, F19, and F22. The CMA-ES performs significantly better than the MHSA on F4, F6, F7, F15, and F19. The GABC provides better results than the MHSA on F6, F7, F11, F15, and F16, owing to its global-best guidance. The ranking of the algorithms demonstrates that the proposed MHSA is the overall best performer, followed by the GWO, CMA-ES, CLPSO, SCA, and GABC, respectively.

To analyze the performance of the proposed MHSA and to compare it with the other metaheuristics on other problem dimensions, the experiments are also carried out on the 10- and 50-dimensional benchmark problems. The parameter settings are the same as those used for the 30-dimensional problems. The experimental results are obtained in terms of the average error over all 23 benchmark problems. These results are shown in Figs. 9 and 10, where the average error is indicated on each algorithm's bar. From these figures, it can easily be concluded that the proposed MHSA provides better optimization results on the considered benchmark set. Hence, the proposed MHSA can be considered a better optimizer than the other comparative algorithms.

Table 7 Parameter setting for algorithms
Table 8 Comparison of mean and standard deviation values obtained by the proposed MHSA and other metaheuristic algorithms
Fig. 9 Comparison of average error values between the proposed MHSA and other metaheuristics for 10-dimensional problems

Fig. 10 Comparison of average error values between the proposed MHSA and other metaheuristics for 50-dimensional problems

4.5 Illustrative examples

In this section, three well-known benchmark structural engineering design problems are used to evaluate the performance of the proposed MHSA. Different variants of the HSA and the conventional HSA are simultaneously compared with the proposed MHSA to analyze its search performance. The harmony memory size for all the problems is fixed to 5, as suggested in Subsect. 4.3. One of the interesting features of this section is the employed constraint handling mechanism, where neither a penalty approach nor any other constraint handling technique is applied. In this approach, the values of the objective function and the constraints are first evaluated for the harmonies, and the violation corresponding to each harmony is recorded. A better harmony can then be recognized easily as the one having the smaller constraint violation value. When more than one harmony is recognized as feasible, i.e., has zero constraint violation, the one having the better objective fitness is picked as the best harmony of the memory. In this way, the constraints are handled in the proposed MHSA, the conventional HSA, and the other variants. This constraint handling mechanism does not involve any parameters; it evaluates the search strategy of the MHSA directly on the constrained problems by simply preferring the harmony with the smaller constraint violation value. For a general form of the optimization problem

$$\begin{aligned}&\begin{array}{cc} \min \;\;\;\;\;\; F(X), \quad X=(x_1,x_2,\ldots ,x_d)\in \mathbb {R}^d, \end{array} \end{aligned}$$
(38)
$$\begin{aligned} \mathrm{s.t.}\;\;\;\;\;&g_j(X)\le 0, \quad j=1,2,\ldots ,J\nonumber \\ \;\;\;\;\;&h_k(X) = 0, \quad k=1,2,\ldots ,K\nonumber \\ \;\;\;\;\;&x_{i}^{\min }\le x_i \le x_{i}^{\max }, \end{aligned}$$
(39)

the constraint violation corresponding to the harmony \(\hat{X}\) can be calculated as follows:

$$\begin{aligned} {\mathrm{viol}}_{\hat{X}}=\sum _{j=1}^{J} G_j(\hat{X})+\sum _{k=1}^{K} H_k(\hat{X}), \end{aligned}$$
(40)

where

$$\begin{aligned} G_j(\hat{X})= & {} \left\{ \begin{array}{ll} g_j(\hat{X}) &{} g_j(\hat{X})>0 \\ 0 &{} \mathrm{otherwise}, \end{array}\right. \end{aligned}$$
(41)
$$\begin{aligned} H_k(\hat{X})= & {} \left\{ \begin{array}{ll} \left| h_k(\hat{X})\right| &{} \left| h_k(\hat{X})\right| -\epsilon >0 \\ 0 &{} \mathrm{otherwise}, \end{array}\right. \end{aligned}$$
(42)

where \(x_{i}^{\min }\) and \(x_{i}^{\max }\) are the lower and upper bounds of the component \(x_i\) of X, and J and K are the numbers of inequality and equality constraints, respectively, in the optimization problem. The symbol \(\epsilon\) is a predefined tolerance parameter, which is fixed to \(10^{-4}\) in the present study.
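A direct transcription of Eqs. (40)–(42), together with the feasibility-first comparison described above, might look as follows (our code):

```python
def violation(x, ineq=(), eq=(), eps=1e-4):
    # total constraint violation of a harmony x, Eqs. (40)-(42)
    v = sum(max(g(x), 0.0) for g in ineq)    # Eq. (41)
    for h in eq:
        hv = abs(h(x))
        v += hv if hv - eps > 0 else 0.0     # Eq. (42)
    return v

def is_better(f_new, v_new, f_old, v_old):
    # feasibility-first comparison: smaller violation wins;
    # ties on violation (e.g., both feasible) are broken by fitness
    if v_new != v_old:
        return v_new < v_old
    return f_new < f_old
```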

The description of the constrained engineering design problems is presented as follows.

4.5.1 Compression spring design

The objective of this problem [4] is to minimize the weight of a tension/compression spring subject to constraints on surge frequency, shear stress, and minimum deflection. Three decision variables are involved: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Mathematically, the problem can be stated as follows:

$$\begin{aligned} \begin{array}{cc} \min \;\;\;\;\;\; f_1(X) =(x_3+2)x_1^2x_2, \end{array} \end{aligned}$$
(43)

where \(X=(x_1,x_2,x_3)=(d,D,N)\in \mathbb {R}^3\)

$$\begin{aligned} \mathrm{s.t.}\;\;\;\;\;&1-\frac{x_2^3x_3}{71785x_1^4}\le 0, \end{aligned}$$
(44)
$$\begin{aligned} \;\;\;\;\;&\frac{4x_2^2-x_1x_2}{12566(x_2x_1^3-x_1^4)}+\frac{1}{5108x_1^2}-1\le 0,\end{aligned}$$
(45)
$$\begin{aligned} \;\;\;\;\;&1-\frac{140.45x_1}{x_2^2x_3}\le 0,\end{aligned}$$
(46)
$$\begin{aligned} \;\;\;\;\;&\frac{x_1+x_2}{1.5}-1\le 0, \end{aligned}$$
(47)

where \(0.05 \le x_1\le 2.00\), \(0.25 \le x_2\le 1.30\), and \(2.00 \le x_3\le 15.00\).
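As an illustration of how such a design problem plugs into the violation-based comparison above, the spring problem can be encoded as follows (the variable order (d, D, N) and the names are ours; the other two problems follow the same pattern):

```python
def f_spring(x):
    # Eq. (43): spring weight, with x = (d, D, N)
    return (x[2] + 2) * x[1] * x[0]**2

g_spring = (
    lambda x: 1 - x[1]**3 * x[2] / (71785 * x[0]**4),                   # Eq. (44)
    lambda x: (4*x[1]**2 - x[0]*x[1]) / (12566 * (x[1]*x[0]**3 - x[0]**4))
              + 1 / (5108 * x[0]**2) - 1,                               # Eq. (45)
    lambda x: 1 - 140.45 * x[0] / (x[1]**2 * x[2]),                     # Eq. (46)
    lambda x: (x[0] + x[1]) / 1.5 - 1,                                  # Eq. (47)
)

# total violation of a candidate design, reusing `violation` defined above:
# v = violation(x, ineq=g_spring)
```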

The numerical results, in terms of the mean, median, minimum (best), maximum (worst), and standard deviation of the weights recorded over 30 trials, are presented in Table 9 for the proposed MHSA. In this table, the results of the conventional HSA, ABHS, ESGHS, PAHS, and GGHS are also reported, and the best results are highlighted in bold face in order to compare the search efficacy of the MHSA. The statistical outcomes along with the p values are shown in the same table; they clearly demonstrate the superior and significantly better search ability of the MHSA compared with the other variants of the HSA and the conventional HSA.

4.5.2 Pressure vessel design

In this problem [17], the goal is to minimize the cost of a cylindrical pressure vessel, which is closely related to the material, structure, and welding; the ends of the pressure vessel are capped. In this problem, "the thickness of the shell (\(T_{\mathrm{SH}}\))", "head (\(T_{\mathrm{HD}}\))", "inner radius (R)", and "range of cross-section minus head (L)" need to be optimized. Mathematically, the problem is stated as follows:

$$\begin{aligned} \begin{array}{cc} \min \;\;\;\;\;\; f_2(X) =0.6224x_1x_3x_4+1.7781x_2x_3^2+3.1661x_1^2x_4+19.84x_1^2x_3, \end{array} \end{aligned}$$
(48)

where \(X=(x_1,x_2,x_3,x_4)=(T_{\mathrm{SH}},T_{\mathrm{HD}},R,L)\in \mathbb {R}^4\)

$$\begin{aligned} \mathrm{s.t.}\;\;\;\;\;&-x_1+0.0193x_3\le 0, \end{aligned}$$
(49)
$$\begin{aligned} \;\;\;\;\;&-x_2+0.00954x_3\le 0,\end{aligned}$$
(50)
$$\begin{aligned} \;\;\;\;\;&-\pi x_3^2x_4-\frac{4}{3} \pi x_3^3+1{,}296{,}000\le 0,\end{aligned}$$
(51)
$$\begin{aligned} \;\;\;\;\;&x_4-240\le 0, \end{aligned}$$
(52)

where \(0 \le x_1,x_2\le 99\) and \(10 \le x_3,x_4\le 200\).

The numerical results obtained by the MHSA are presented in Table 9 along with the results of the other algorithms; the better results are highlighted in bold face. It can be observed from this table that the proposed MHSA provides significantly better results compared to the conventional HSA and the other variants of the HSA. The obtained p values and statistical outcomes of the Wilcoxon rank sum test validate this fact. The smaller standard deviation of the MHSA compared with the other algorithms also verifies the robustness of the results. Therefore, to minimize the cost of the pressure vessel, the proposed MHSA can be preferred over the other algorithms.

4.5.3 Three-bar truss design

This problem, which considers a three-bar planar truss structure, was introduced by Nowacki [45]. The volume of the bar truss is minimized subject to constraints on the stress of each truss member. This is achieved by optimizing the cross-sectional areas, which is formulated mathematically as follows:

$$\begin{aligned} \min \;\;\;\;\;\;&f_3(X) =L\times (2\sqrt{2}x_1+x_2), \quad X=(x_1,x_2)=(A_1,A_2), \end{aligned}$$
(53)
$$\begin{aligned} \mathrm{s.t.}\;\;\;\;\;&\frac{\sqrt{2}x_1+x_2}{\sqrt{2}x_1^2+2x_1x_2}P-\sigma \le 0,\end{aligned}$$
(54)
$$\begin{aligned}&\frac{x_2}{\sqrt{2}x_1^2+2x_1x_2}P-\sigma \le 0,\end{aligned}$$
(55)
$$\begin{aligned}&\frac{1}{x_1+\sqrt{2}x_2}P-\sigma \le 0, \end{aligned}$$
(56)

where \(0 \le x_1,x_2 \le 1\), \(L=100 \, \mathrm{cm}\), and \(P=\sigma =2 \, \mathrm{kN/cm}^2\).

The optimization results are presented in Table 9, where the better results are highlighted in bold face. In the same table, the results of the MHSA are compared with those of the conventional HSA and the other variants of the HSA (ABHS, ESGHS, PAHS, and GGHS) under the same parameter settings and the same constraint handling technique as used for the MHSA. The obtained p values and statistical outcomes of the Wilcoxon rank sum test verify that the proposed MHSA performs significantly better than the conventional HSA, ABHS, PAHS, and GGHS. The comparison between the MHSA and the ESGHS shows that the two algorithms are statistically similar in providing the optimum value of the cross-sectional area for the three-bar truss structure.

Table 9 Comparison of results on constrained engineering design problems

5 Conclusions and future research directions

This paper proposes a modified variant of the harmony search algorithm, called MHSA, for solving global optimization problems. In the direction of improvement, first, a parameter-setting-free approach has been proposed for the harmony memory considering rate (HMCR) and pitch adjustment rate (PAR), which are considered key factors in the performance of the HSA. These parameters are made non-linear in nature to mimic the non-linearity of the search process; the main advantage of this approach is that the user does not need to worry about fine-tuning these core control parameters. Second, the improvisation process is modified, inspired by the encircling mechanism of the well-known grey wolf optimizer, to effectively increase the diversification and intensification achieved by the harmony and to provide a better transition from exploration to exploitation. In the last step, the random generation of the harmony is modified to maintain randomness and to speed up the convergence rate with the help of partially opposite harmonies. The impact of all these strategies on the search efficiency of the conventional HSA is validated on a standard, well-known collection of 23 benchmark problems and three real structural engineering design problems. The statistical and convergence analyses have verified the significant improvement in the search procedure of the proposed MHSA and establish it as a superior global optimizer compared with the conventional HSA, its variants such as the ABHS, ESGHS, NSHS, PAHS, and GGHS, and other metaheuristics such as the CLPSO, GWO, CMA-ES, GABC, and SCA. The engineering design problems also demonstrate the significantly superior performance of the proposed MHSA over the conventional HSA and the other variants of the HSA.

The present study can be extended to binary, discrete, multi-objective, and many-objective optimization tasks by integrating the necessary operators. Future research may also include validation on other complex benchmark problems and real-world applications such as vehicle scheduling, aircraft streamline modeling, feature selection, and others.