1 Introduction

A broad class of population-based algorithms for solving global optimization problems has been developed to date, including genetic algorithms (GA) [1], differential evolution (DE) [2], particle swarm optimization (PSO) [3], ant colony optimization (ACO) [4], and the self-organizing migrating algorithm (SOMA) [5]. Among these, SOMA is a comparatively recent addition to the class of population-based stochastic search techniques and is capable of handling all types of functions. SOMA can be classified as an evolutionary algorithm, even though no evolution takes place in the usual sense: no new generations of individuals are created during the search; only the positions of the individuals in the search space change during a generation, called a ‘migration loop’. The main features of this algorithm are:

  1. (i)

    It works efficiently even with a very small population size.

  2. (ii)

    It converges quickly to globally optimal solutions.

Despite these attractive features, SOMA may sometimes converge prematurely and become trapped in local optima, a situation that arises more often as dimensionality increases and results in a loss of diversity in the population. To maintain diversity, SOMA can be hybridized with local search techniques or other population-based techniques. Hybridization is a combination of two or more algorithms in which one seeks a promising region within the large solution space expected to contain the global minimum, and the other exploits that region to find the best solution rapidly and precisely. Several attempts have been made to hybridize population-based techniques with other existing approaches [6–12]. The first variant of SOMA, a hybridization of GA and SOMA, was developed by Deep and Dipti [13].

Recently, Dipti and Seema developed a number of variants of SOMA, named SOMAQI, SOMA-M, and M-SOMAQI [14–16]. In this paper, a novel variant of SOMA (M-NM-SOMA), based on the Nelder-Mead crossover operator and the non-uniform mutation operator, is proposed. The performance of M-NM-SOMA is evaluated on a set of 17 benchmark problems and compared with standard PSO and SOMA.

The paper is structured as follows: Sect. 2 presents the preliminaries; Sect. 3 describes M-NM-SOMA; Sect. 4 reports the experimental results; and Sect. 5 concludes with the outcome of the current study.

2 Preliminaries

2.1 Self-Organizing Migrating Algorithm

Zelinka and Lampinen [17] first introduced SOMA, which is inspired by the collective behavior of intelligent creatures. The algorithm proceeds in migration loops; in each loop, the active individual (the individual with the worst fitness value) travels a finite distance toward the leader (the individual with the best fitness value) along a path of given length (path length), in N moves of defined length (move size). This path is perturbed randomly by the perturbation parameter (PRT), defined in the range [0, 1]. The perturbation vector (PRTVector), which controls the perturbation, is created before an individual moves toward the leader in the following manner:

$$\text{PRTVector}_{j} = \begin{cases} 1 & \text{if}\ \text{rnd}_{j} < \text{PRT} \\ 0 & \text{otherwise} \end{cases}, \quad j = 1, 2, \ldots, n$$

where \(\text{rnd}_{j}\) is a uniformly distributed random number in (0, 1) and n is the number of decision variables.
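As an illustration, the migration of the active individual toward the leader, with a fresh PRT vector drawn before each move, can be sketched in Python. This is a minimal sketch under stated assumptions: the toy objective `sphere`, the function name `soma_step`, and the default parameter values are illustrative, not prescribed by SOMA.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective for illustration only (minimum 0 at the origin)."""
    return float(np.sum(x * x))

def soma_step(active, leader, prt=0.3, path_length=3.0, move_size=0.31):
    """One SOMA migration of the active individual toward the leader.

    Before every move a fresh PRT vector zeroes out randomly chosen
    coordinates, perturbing the individual off the straight line.
    Returns the best position found along the path.
    """
    n = len(active)
    best_pos, best_val = active.copy(), sphere(active)
    t = move_size
    while t <= path_length:
        prt_vector = (rng.random(n) < prt).astype(float)  # 1 if rnd_j < PRT, else 0
        candidate = active + (leader - active) * t * prt_vector
        val = sphere(candidate)
        if val < best_val:
            best_pos, best_val = candidate, val
        t += move_size
    return best_pos
```

Every candidate along the path is evaluated and the best one is returned, matching the strategy in which only the best position found on the path may replace the active individual.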

More information regarding SOMA can be obtained from [18].

2.2 Nelder-Mead Crossover Operator

The Nelder-Mead simplex search method is a direct search method, originally proposed by Spendley et al. and later modified by Nelder and Mead [19]. First, a population is initialized and a simplex is created from (n + 1) points (n being the number of variables of the function) chosen arbitrarily from the population. In each iteration, the worst point in the simplex is selected first. Then, a new simplex is formed from the old one through a sequence of elementary geometric transformations (reflection, contraction, expansion). After each transformation, the current worst point is replaced by a better one. In the proposed algorithm, the Nelder-Mead simplex search method is used as a linear Nelder-Mead crossover operator, which creates a new point using two out of three randomly chosen points from the population.

The computational steps of the NM crossover operator are as follows:

  1. Step1:

    choose parameters γ > 1, β > 0;

  2. Step2:

    create an initial simplex with three randomly chosen vertices;

    find \(x_{h}\) (the worst point), \(x_{l}\) (the best point), and \(x_{g}\) (the next-to-worst point);

    calculate their function values \(f_{h}\), \(f_{l}\), and \(f_{g}\); the worst point \(x_{h}\) is reflected with respect to the centroid \(x_{c}\) of the other two points;

    $$\begin{aligned} & x_{r} = 2x_{c} - x_{h} \quad (\text{reflection}) \\ & \text{if}\ f_{r} < f_{l}{:}\quad x_{\text{new}} = (1 + \gamma)x_{c} - \gamma x_{h} \quad (\text{expansion}) \\ & \text{else if}\ f_{r} \ge f_{h}{:}\quad x_{\text{new}} = (1 - \beta)x_{c} + \beta x_{h} \quad (\text{contraction}) \\ & \text{else if}\ f_{g} < f_{r} < f_{h}{:}\quad x_{\text{new}} = (1 + \beta)x_{c} - \beta x_{h} \quad (\text{contraction}) \\ \end{aligned}$$
    (1)

    calculate \(f_{\text{new}}\) and replace \(x_{h}\) by \(x_{\text{new}}\).

  3. Step3:

    this process continues until the termination criterion is satisfied.
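The steps above can be sketched as a single NM crossover call implementing Eq. (1). The parameter defaults, the function name, and the fallback of keeping the reflected point when none of the three listed conditions holds are illustrative assumptions, not part of the original specification.

```python
import numpy as np

def nm_crossover(x_l, x_g, x_h, f, gamma=1.5, beta=0.5):
    """One Nelder-Mead transformation of a 3-point simplex (Eq. 1).

    x_l: best point, x_g: next-to-worst point, x_h: worst point,
    f: objective function. Returns the new point that replaces x_h.
    """
    x_c = (x_l + x_g) / 2.0          # centroid of the two better points
    x_r = 2.0 * x_c - x_h            # reflection of the worst point
    f_r, f_l, f_g, f_h = f(x_r), f(x_l), f(x_g), f(x_h)
    if f_r < f_l:                    # reflection beats the best: expand
        return (1.0 + gamma) * x_c - gamma * x_h
    if f_r >= f_h:                   # no better than the worst: contract inward
        return (1.0 - beta) * x_c + beta * x_h
    if f_g < f_r < f_h:              # between next-to-worst and worst: contract outward
        return (1.0 + beta) * x_c - beta * x_h
    return x_r                       # otherwise keep the reflected point
```

For example, with the sphere function and simplex points \(x_l = (0,0)\), \(x_g = (1,0)\), \(x_h = (2,2)\), the reflected value falls between \(f_g\) and \(f_h\), so the outward contraction branch fires and returns a point better than \(x_h\).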

2.3 Non-uniform Mutation Operator

The non-uniform mutation operator was proposed by Michalewicz [20] to overcome the weakness of random mutation in real-coded GAs. Non-uniform mutation randomly selects one solution x_k from the population and updates its value according to the following rule:

$$x_{k}^{\prime} = \begin{cases} x_{k} + (ub_{k} - x_{k}) \cdot T & \text{if}\ \gamma < 0.5 \\ x_{k} - (x_{k} - lb_{k}) \cdot T & \text{if}\ \gamma \ge 0.5 \end{cases}$$
(2)

where \(T = \left( {\mu \left( {1 - \frac{t}{{t_{\hbox{max} } }}} \right)} \right)^{b}\), γ and μ are two uniformly distributed random numbers in the interval [0, 1], \(lb_{\text{k}}\) and \(ub_{\text{k}}\) are the lower and upper bounds of x_k, b > 0 is a parameter determining the degree of non-uniformity, t is the current migration number, and t_max is the maximum number of migrations allowed. Non-uniform mutation has fine-tuning capabilities that help achieve high precision.
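A minimal sketch of the operator follows, assuming the standard Michalewicz form in which the mutated coordinate moves toward the upper bound when γ < 0.5 and toward the lower bound otherwise; the function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonuniform_mutate(x, lb, ub, t, t_max, b=2.0):
    """Non-uniform mutation of one randomly chosen coordinate (Eq. 2).

    Early in the run (small t) the perturbation can be large; as t
    approaches t_max the factor T shrinks toward zero, giving the
    fine-tuning behavior described above.
    """
    k = rng.integers(len(x))                 # coordinate to mutate
    gamma, mu = rng.random(), rng.random()
    T = (mu * (1.0 - t / t_max)) ** b
    y = x.copy()
    if gamma < 0.5:
        y[k] = x[k] + (ub[k] - x[k]) * T     # move toward the upper bound
    else:
        y[k] = x[k] - (x[k] - lb[k]) * T     # move toward the lower bound
    return y
```

Because T lies in [0, 1], the mutated coordinate always stays inside \([lb_{k}, ub_{k}]\).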

3 Proposed Hybrid M-NM-SOMA Algorithm

In this section, a variant of SOMA, M-NM-SOMA, is proposed; it hybridizes SOMA with the Nelder-Mead crossover operator and the non-uniform mutation operator. The convergence of standard SOMA is so fast that all individuals move close to the best individual very quickly. This decreases population diversity and leads to premature convergence. To overcome these problems, SOMA is hybridized with the Nelder-Mead crossover operator and the non-uniform mutation operator to maintain diversity among the solutions in the search space.

3.1 Methodology of Hybridization

First, the population is initialized, randomly spread over the search domain. At each migration, the individual with the best fitness value is selected as the leader and the one with the worst fitness value as the active individual. The active individual then travels a finite distance toward the leader in N moves of defined length. Among the positions created, the best one replaces the active individual if it improves on it. Next, the leader and active individual are selected again from the population and a new point is created with the Nelder-Mead crossover operator using Eq. (1); this point replaces the active individual only if it is better. Then, the leader and active individual are selected once more and a new point is created by non-uniform mutation using Eq. (2); again, it replaces the active individual only if it is better. The process continues until a termination criterion is satisfied.
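One possible reading of this loop can be sketched as follows. It is a simplified, illustrative interpretation, not the authors' implementation: the NM step is reduced to the reflection move with a randomly drawn third simplex point, candidates are clipped to the bounds, and all names and parameter defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Toy objective for illustration only."""
    return float(np.sum(x * x))

def m_nm_soma(f, lb, ub, pop_size=10, migrations=200,
              prt=0.3, move_size=0.31, path_length=3.0, b=2.0):
    """Compact sketch of the hybrid loop: SOMA migration, then an
    NM-style step, then non-uniform mutation, each accepted greedily."""
    n = len(lb)
    pop = lb + rng.random((pop_size, n)) * (ub - lb)
    vals = np.array([f(x) for x in pop])
    for t in range(migrations):
        # 1. SOMA migration: active (worst) moves toward leader (best).
        a, l = int(np.argmax(vals)), int(np.argmin(vals))
        best_pos, best_val = pop[a].copy(), vals[a]
        s = move_size
        while s <= path_length:
            prt_vec = (rng.random(n) < prt).astype(float)
            cand = np.clip(pop[a] + (pop[l] - pop[a]) * s * prt_vec, lb, ub)
            cv = f(cand)
            if cv < best_val:
                best_pos, best_val = cand, cv
            s += move_size
        pop[a], vals[a] = best_pos, best_val
        # 2. NM-style step (reflection only) on re-selected leader/active.
        a, l = int(np.argmax(vals)), int(np.argmin(vals))
        x_c = (pop[l] + pop[rng.integers(pop_size)]) / 2.0
        x_r = np.clip(2.0 * x_c - pop[a], lb, ub)   # reflect the active point
        if f(x_r) < vals[a]:
            pop[a], vals[a] = x_r, f(x_r)
        # 3. Non-uniform mutation of one coordinate of the active point.
        a = int(np.argmax(vals))
        k = rng.integers(n)
        T = (rng.random() * (1.0 - t / migrations)) ** b
        y = pop[a].copy()
        if rng.random() < 0.5:
            y[k] = y[k] + (ub[k] - y[k]) * T
        else:
            y[k] = y[k] - (y[k] - lb[k]) * T
        if f(y) < vals[a]:
            pop[a], vals[a] = y, f(y)
    return pop[int(np.argmin(vals))]
```

Each of the three stages only ever replaces the current worst individual with a strictly better point, so the best solution in the population can never deteriorate across migrations.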

4 Experimental Results

The proposed algorithm M-NM-SOMA is programmed in C++ and executed on a Pentium III PC. M-NM-SOMA is used to obtain results for 17 benchmark problems taken from the literature. All the problems are minimization problems with minimum value 0. The seventeen problems, together with their initialization ranges, are given in Table 1. Since M-NM-SOMA is a probabilistic technique that relies heavily on the generation of random numbers, 30 trials of each problem are carried out. A run is counted as a success if the solution obtained is within 1 % of the desired accuracy. The termination criterion of the proposed algorithm is either that a run is a success or that a preset number of migrations (10,000) has been performed.

Table 1 Benchmark functions

To compare M-NM-SOMA with SOMA and standard PSO, several performance measures are considered: the mean objective function value, to check efficiency and reliability; the average number of function evaluations, to check convergence speed; and the success rate.

The main parameters of M-NM-SOMA are the population size, PRT, move size, and path length. The population size is taken as ten for all problems. The PRT parameter varies from 0.1 to 0.9 depending on the problem. The other parameters, move size and path length, are taken as 0.31 and 3, respectively. Trials for the 17 problems are performed for dimensions (dim) n = 30, 50, and 100.

Table 2 shows the number of successful runs out of a total of 30 for M-NM-SOMA, PSO, and SOMA. The results show that M-NM-SOMA is best on all 17 problems for dim 30 and 100, and on 16 problems for dim 50.

Table 2 Successful runs of M-NM-SOMA, SOMA and PSO for Dim. 30, 50 and 100

Table 3 shows the average number of function evaluations for M-NM-SOMA, PSO, and SOMA. The results show that M-NM-SOMA is best on 16 problems for all three dimensions 30, 50, and 100. Hence, on the basis of these results, we can say that M-NM-SOMA shows better convergence speed.

Table 3 Average number of function evaluations of M-NM-SOMA, SOMA, and PSO for Dim. 30, 50, and 100

Table 4 shows the mean objective function value for M-NM-SOMA, PSO, and SOMA. The results show that M-NM-SOMA is best on all 17 problems for dim 30 and on 16 problems for dim 50 and 100. Hence, M-NM-SOMA is the most reliable and efficient. Problems that could not be solved by a particular algorithm are marked with the symbol (*) at the corresponding entries. The best results are highlighted in bold.

Table 4 Mean objective function value of M-NM-SOMA, SOMA, and PSO for Dim. 30, 50, and 100

Figures 1, 2 and 3 show the mean best objective function value curves for selected benchmark problems; the figures make clear that M-NM-SOMA converges very fast. Hence the proposed algorithm M-NM-SOMA shows its superiority over the other algorithms, PSO and SOMA.

Fig. 1
figure 1

Convergence graph of Ackley function for dim. 30

Fig. 2
figure 2

Convergence graph of Rastrigin function for dim. 50

Fig. 3
figure 3

Convergence graph of Dejong function with noise for dim. 100

5 Conclusions

In this paper, a new variant of SOMA, M-NM-SOMA, has been proposed. The proposed algorithm is evaluated on 17 unconstrained benchmark problems, and the results are compared with those of standard PSO and SOMA. A population size of only 10 is used to evaluate the performance of M-NM-SOMA. On the basis of the results obtained, it can be concluded that the proposed algorithm outperforms PSO and SOMA in terms of population size, efficiency, reliability, and accuracy.