1 Introduction

Hybridization is regarded as an effective way of improving the performance of a population based technique for solving complex optimization problems. The good exploration capabilities of the population based technique are used to locate promising zones within the wide solution space, while local optimization methods exploit those zones to reach the best solution quickly and accurately. Many attempts have been made in the literature to hybridize population based techniques with other existing approaches such as quadratic approximation. A random search technique for global optimization based on quadratic approximation was developed by Mohan and Shankar (now Deep) [1]. Deep and Das proposed four variants of a quadratic approximation based hybrid genetic algorithm for function optimization and tested their performance on a set of benchmark test problems [2]; to improve the performance of a real coded genetic algorithm, Deep and Das [5] also hybridized it with quadratic approximation. Deep and Bansal [3] presented the hybridization of PSO with a quadratic approximation (QA) operator, and the presented results showed the efficiency of the hybrid technique [4]; they further developed a QA-based variant of PSO for economic dispatch problems with valve-point effects. Millie et al. presented a variant of quantum behaved particle swarm optimization for solving global optimization problems, as well as a new variant of particle swarm optimization named QPSO, which was tested on various benchmark problems; analysis of the results showed that the multiparent quadratic interpolation operator outperformed basic PSO [6].

This paper presents a variant (SOMAQI) of the Self Organizing Migrating Algorithm (SOMA) which uses quadratic approximation, or interpolation, to create a new solution vector in the search space. Quadratic approximation is an operator that produces the minimum of the quadratic curve passing through three chosen individuals, and it substantially improves the performance of SOMA. SOMA is an emergent search technique based on the self-organizing behavior of groups of individuals in a social environment. Like other evolutionary algorithms it works with a population of solutions. The feature that distinguishes it from other algorithms is that no new solutions are created during the search; instead, only the positions of the solutions are changed during a generation, called a migration loop. The proposed algorithm SOMAQI creates new points in the search space using quadratic interpolation in such a way that the diversity of the search domain is maintained by the newly generated points and can be thoroughly exploited, while the direction of the search is gently guided. This competitive-cooperative behavior can reach the global optimal solution with a small population size in fewer function evaluations. A set of 10 well-known test problems is used to evaluate the performance of SOMAQI.

The paper is organized as follows. In Sect. 2, preliminaries are presented. In Sect. 3, the proposed algorithm SOMAQI is presented. In Sect. 4, the numerical results are discussed. Finally, Sect. 5 draws the conclusions of the present study.

2 Preliminaries

In this section the two algorithms SOMA and PSO are described. SOMA is used for hybridization and PSO is used for comparison.

2.1 Self Organizing Migrating Algorithm

The Self Organizing Migrating Algorithm is a population based stochastic search technique based on the social behavior of a group of individuals [7, 8]. At each generation the individual with the highest fitness value is known as the leader and the worst as the active individual. Rather than competing with each other, the active individual proceeds in the direction of the leader. The algorithm moves in migration loops, and in each migration loop the active individual travels a certain distance towards the leader in n steps of defined length. This path is perturbed randomly by the PRT parameter. A PRT vector is created before an individual proceeds towards the leader. This parameter has the same effect as mutation in a Genetic Algorithm (GA) and is defined in the range (0, 1). The movement of an individual is given as follows:

$$ x_{i,j}^{MLnew} = x_{i,j,start}^{ML} + \left( x_{L,j}^{ML} - x_{i,j,start}^{ML} \right) t \cdot PRTVector_{j} $$
(1)

where \( t \in (0, \text{PathLength}) \), advancing in increments of the step size, and ML is the actual migration loop;

\( x_{i,j}^{MLnew} \) is the new position of an individual, \( x_{i,j,start}^{ML} \) is the position of the active individual, and

\( x_{L,j}^{ML} \) is the position of the leader.
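In code, Eq. (1) together with the PRT perturbation can be sketched as follows. This is a minimal illustration assuming NumPy; the function names and the default values Step = 0.11 and PathLength = 3.0 are our own illustrative choices, not values taken from this paper.

```python
import numpy as np

def prt_vector(dim, prt, rng):
    # Each component is 1 with probability PRT and 0 otherwise, so only a
    # random subset of coordinates moves towards the leader (mutation-like).
    return (rng.random(dim) < prt).astype(float)

def soma_candidates(active, leader, prt, step=0.11, path_length=3.0, rng=None):
    # Candidate positions of the active individual along the (perturbed)
    # direction to the leader, Eq. (1), for t = step, 2*step, ..., PathLength.
    if rng is None:
        rng = np.random.default_rng()
    active = np.asarray(active, dtype=float)
    leader = np.asarray(leader, dtype=float)
    prt_vec = prt_vector(active.size, prt, rng)  # fixed for the whole jump
    ts = np.arange(step, path_length + step, step)
    return [active + (leader - active) * t * prt_vec for t in ts]
```

With PRT = 1 every coordinate moves straight towards, and then past, the leader; smaller PRT values randomly freeze coordinates, which plays the role of mutation.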

The computational steps of SOMA are given as follows:

  1. Step 1: generate the initial population;

  2. Step 2: evaluate all individuals in the population;

  3. Step 3: generate the PRT vector for all individuals;

  4. Step 4: sort all individuals by fitness;

  5. Step 5: select the individual with the best fitness as the leader and the worst as the active individual;

  6. Step 6: create new positions for the active individual using Eq. (1), then select the best of these positions and replace the active individual with it;

  7. Step 7: if the termination criterion is satisfied, stop; else go to Step 2;

  8. Step 8: report the best individual as the optimal solution.
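The steps above can be assembled into a minimal all-to-one SOMA loop. This is a sketch for a minimization problem, assuming NumPy; the parameter defaults and the clipping of candidates to the search box are our own assumptions, not details taken from the paper's C++ implementation.

```python
import numpy as np

def soma(f, bounds, pop_size=10, migrations=300, prt=0.3,
         step=0.11, path_length=3.0, seed=0):
    # Minimal all-to-one SOMA for minimization, following Steps 1-8.
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))           # Step 1
    fit = np.array([f(x) for x in pop])                       # Step 2
    for _ in range(migrations):                               # migration loops
        prt_vec = (rng.random(dim) < prt).astype(float)       # Step 3
        leader = np.argmin(fit)                               # Steps 4-5: best
        active = np.argmax(fit)                               # and worst (min f)
        best_x, best_f = pop[active].copy(), fit[active]
        for t in np.arange(step, path_length + step, step):   # Step 6, Eq. (1)
            cand = pop[active] + (pop[leader] - pop[active]) * t * prt_vec
            cand = np.clip(cand, lo, hi)                      # stay in the box
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        pop[active], fit[active] = best_x, best_f             # replace active
    best = np.argmin(fit)                                     # Steps 7-8
    return pop[best], fit[best]
```

For example, minimizing a 2-D sphere function with `soma(lambda x: float(np.sum(x**2)), (np.full(2, -5.0), np.full(2, 5.0)))` drives the population towards the minimum at the origin.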

2.2 Particle Swarm Optimization

Particle swarm optimization is a population based stochastic search algorithm developed by Eberhart et al. [9]. It is inspired by the social behavior of bird flocking and fish schooling. PSO consists of a swarm of particles. Particles move towards the best position in the search space with a velocity, each particle remembering its own best known position (pbest) and the global best known position (gbest). The velocity and position of a particle are updated according to the following equations:

$$ v_{i}^{k + 1 } = wv_{i}^{k} + c_{1} r_{1} (p_{i} - x_{i}^{k} ) + c_{2} r_{2 } \left( {p_{g} - x_{i}^{k} } \right) $$
(2)
$$ x_{i}^{k + 1} = x_{i}^{k} + v_{i}^{k + 1} $$
(3)

where i = 1, …, N; N is the number of particles in the swarm; k is the iteration index; r1 and r2 are random values in [0, 1], used to maintain the diversity of the swarm; c1 and c2 are the learning coefficients, also called acceleration coefficients; w is the inertia weight; \( v_{i}^{k} \) is the velocity of particle i at iteration k; \( x_{i}^{k} \) is the position of particle i at iteration k; \( p_{i} \) is the best known position of particle i; and \( p_{g} \) is the global best known position of the swarm. The pseudo-code of PSO is given in Fig. 1.
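Equations (2) and (3) amount to the following single-step update. This is a minimal sketch assuming NumPy; the default values of w, c1 and c2 are common choices from the PSO literature, not values taken from this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    # One PSO update: Eq. (2) for the velocity, then Eq. (3) for the position.
    if rng is None:
        rng = np.random.default_rng()
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    pbest, gbest = np.asarray(pbest, dtype=float), np.asarray(gbest, dtype=float)
    r1 = rng.random(x.shape)  # fresh random factors keep the swarm diverse
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (2)
    x_new = x + v_new                                              # Eq. (3)
    return x_new, v_new
```

Note that once a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is the swarm's equilibrium condition.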

Fig. 1 PSO pseudo code

3 Proposed SOMAQI Algorithm

In this section SOMAQI is presented, which uses quadratic interpolation to create a new solution member in the search space. As discussed in the introduction, in the working of SOMA no new solutions are created during the search; only the positions of the solutions are changed. To maintain the diversity of the population, new points are created in the search space using quadratic interpolation. The methodology of the algorithm is as follows:

First, the individuals are generated randomly. At each generation the individual with the highest fitness value is selected as the leader and the worst one as the active individual. The active individual then moves towards the leader in n steps of defined length, as given in Eq. (1). The best and worst individuals are then selected again from the population. At the end of each generation a new point is created by quadratic interpolation using Eq. (4). For this, three particles R1, R2 and R3 are chosen, where R1 is the leader and R2 and R3 are randomly chosen from the remaining population. The new point is accepted only if it is better than the active individual, in which case it replaces the active individual.

Mathematically, the new point \( x^{\prime } = \left( {x_{1}^{\prime } ,x_{2}^{\prime } , \ldots ,x_{n}^{\prime } } \right) \) is given as

$$ x^{\prime} = \frac{1}{2}\,\frac{\left( R_{2}^{2} - R_{3}^{2} \right) f\left( R_{1} \right) + \left( R_{3}^{2} - R_{1}^{2} \right) f\left( R_{2} \right) + \left( R_{1}^{2} - R_{2}^{2} \right) f\left( R_{3} \right)}{\left( R_{2} - R_{3} \right) f\left( R_{1} \right) + \left( R_{3} - R_{1} \right) f\left( R_{2} \right) + \left( R_{1} - R_{2} \right) f\left( R_{3} \right)} $$
(4)
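Eq. (4) is, coordinate by coordinate, the closed form for the minimum of the parabola passing through the three function values. A minimal sketch assuming NumPy; the guard against a degenerate (near-zero) denominator is our own addition, not part of the paper's formula.

```python
import numpy as np

def qi_point(R1, R2, R3, f):
    # New point from Eq. (4): component-wise minimum of the quadratic curve
    # through (R1, f(R1)), (R2, f(R2)) and (R3, f(R3)).
    R1, R2, R3 = (np.asarray(R, dtype=float) for R in (R1, R2, R3))
    f1, f2, f3 = f(R1), f(R2), f(R3)
    den = (R2 - R3) * f1 + (R3 - R1) * f2 + (R1 - R2) * f3
    if np.any(np.abs(den) < 1e-12):
        # Degenerate parabola in some coordinate: fall back to the leader R1.
        return R1.copy()
    num = (R2**2 - R3**2) * f1 + (R3**2 - R1**2) * f2 + (R1**2 - R2**2) * f3
    return 0.5 * num / den
```

For a 1-D quadratic such as f(x) = x², the three points 0, 1 and 2 reproduce the exact minimizer 0 in a single application.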

The computational steps of SOMAQI are given as follows:

  1. Step 1: generate the initial population;

  2. Step 2: evaluate all individuals in the population;

  3. Step 3: generate the PRT vector for all individuals;

  4. Step 4: sort all individuals by fitness;

  5. Step 5: select the individual with the best fitness as the leader and the worst as the active individual;

  6. Step 6: create new positions for the active individual using Eq. (1), then select the best of these positions and replace the active individual with it;

  7. Step 7: create a new point by QI from R1, R2 and R3 using Eq. (4);

  8. Step 8: if the new point is better than the active individual, replace the active individual with it;

  9. Step 9: if the termination criterion is satisfied, stop; else go to Step 2;

  10. Step 10: report the best individual as the optimal solution.
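Putting the steps together, a minimal SOMAQI loop might look as follows. This is a sketch for minimization, assuming NumPy; the parameter defaults, the clipping to the search box and the degenerate-denominator guard in Eq. (4) are our own assumptions rather than details from the paper's implementation.

```python
import numpy as np

def somaqi(f, bounds, pop_size=10, migrations=300, prt=0.3,
           step=0.11, path_length=3.0, seed=0):
    # Minimal SOMAQI for minimization, following Steps 1-10.
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))            # Step 1
    fit = np.array([f(x) for x in pop])                        # Step 2
    for _ in range(migrations):
        prt_vec = (rng.random(dim) < prt).astype(float)        # Step 3
        leader, active = np.argmin(fit), np.argmax(fit)        # Steps 4-5
        best_x, best_f = pop[active].copy(), fit[active]
        for t in np.arange(step, path_length + step, step):    # Step 6, Eq. (1)
            cand = np.clip(pop[active]
                           + (pop[leader] - pop[active]) * t * prt_vec, lo, hi)
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        pop[active], fit[active] = best_x, best_f
        # Step 7: QI point from the leader (R1) and two other random members.
        leader, active = np.argmin(fit), np.argmax(fit)        # re-select
        i2, i3 = rng.choice([i for i in range(pop_size) if i != leader],
                            size=2, replace=False)
        R1, R2, R3 = pop[leader], pop[i2], pop[i3]
        f1, f2, f3 = fit[leader], fit[i2], fit[i3]
        den = (R2 - R3) * f1 + (R3 - R1) * f2 + (R1 - R2) * f3
        if np.all(np.abs(den) > 1e-12):                        # skip degenerate
            num = ((R2**2 - R3**2) * f1 + (R3**2 - R1**2) * f2
                   + (R1**2 - R2**2) * f3)
            new = np.clip(0.5 * num / den, lo, hi)             # Eq. (4)
            fn = f(new)
            if fn < fit[active]:                               # Step 8
                pop[active], fit[active] = new, fn
    best = np.argmin(fit)                                      # Steps 9-10
    return pop[best], fit[best]
```

On a smooth function such as the sphere, the QI step tends to place the new point very close to the optimum, which is what allows the small fixed population size of 10.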

4 Main Results

The proposed algorithm is coded in C++ and run on a Pentium IV 2.39 GHz computer. The proposed SOMAQI algorithm is tested on the 10 unconstrained benchmark problems given below, all of which are highly multimodal. Since SOMAQI is a probabilistic technique and relies heavily on the generation of random numbers, 50 trials of each problem are carried out, each time using a different seed for the random number generator. A run is considered a success if the solution obtained falls within 1 % accuracy of the known global optimum. The stopping criterion is a fixed number of iterations.

$$ f_{1} (x) = \mathop \sum \limits_{i = 1}^{n} ( x_{i}^{2} - 10\cos ( 2\pi x_{i} ) + 10 ) $$
$$ f_{2} \left( x \right) = \frac{1}{4000}\mathop \sum \limits_{i = 0}^{n - 1} x_{i}^{2} - \mathop \prod \limits_{i = 0}^{n - 1} \cos ( \frac{{x_{i} }}{{\sqrt {i + 1} }} ) + 1 $$
$$ f_{3} \left( x \right) = \mathop \sum \limits_{i = 0}^{n - 2} \left[ 100 \left( x_{i + 1} - x_{i}^{2} \right)^{2} + \left( x_{i} - 1 \right)^{2} \right] $$
$$ f_{4} \left( x \right) = - \mathop \sum \limits_{i = 1}^{n} x_{i } \sin \left( {\sqrt {\left| {x_{i} } \right|} } \right) $$
$$ f_{5} \left( x \right) = \left( {\mathop \sum \limits_{i = 0}^{n - 1} \left( { i + 1} \right)x_{i}^{4} } \right) + rand\left[ {0,1} \right] $$
$$ f_{6} \left( x \right) = 20 + e - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} x_{i}^{2} } } \right) - \exp \left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \cos \left( {2\pi x_{i} } \right)} \right) $$
$$ f_{7} \left( x \right) = \hbox{max} |x_{i} |, 0 \le i < n $$
$$ f_{8} \left( x \right) = \mathop \sum \limits_{i = 0}^{n - 1} \left[ {x_{i} + \frac{1}{2}} \right]^{2} $$
$$ f_{9} \left( x \right) = \mathop \sum \limits_{i = 0}^{n - 1} |x_{i} | + \mathop \prod \limits_{i = 0}^{n - 1} |x_{i} | $$
$$ f_{10} \left( x \right) = \mathop \sum \limits_{i = 0}^{n - 1} \left( \mathop \sum \limits_{j = 0}^{i} x_{j} \right)^{2} $$
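For concreteness, the first two benchmarks, f1 (the Rastrigin function) and f2 (the Griewank function), can be coded directly from the definitions above. This is a sketch assuming NumPy; the function names are ours.

```python
import numpy as np

def f1_rastrigin(x):
    # f1: highly multimodal, global minimum 0 at x = (0, ..., 0).
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f2_griewank(x):
    # f2: global minimum 0 at x = (0, ..., 0); note the 1-based index
    # inside the square root of the cosine product.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

Both return exactly 0 at the origin, which matches the known global optima used for the 1 % success criterion.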

Initialization range and the optimum values for the functions are given in Table 1. The obtained results for functions f1–f3 are given in Table 2. Results for f4–f10 are given in Table 3 along with already published results.

Table 1 Initialization range and optimum values of the functions
Table 2 Comparison results of functions f1–f3(mean best)
Table 3 Comparison results of functions f4–f10(mean best)

Functions f1, f2 and f3 are tested for dimensions 10, 20 and 30, with a population size of 10 for all dimensions. The maximum number of iterations corresponding to dimensions 10, 20 and 30 is set to 1000, 1500 and 2000, respectively. Functions f4 to f10 are tested for dimensions 10, 30 and 50, with the maximum number of iterations set to 3000 for dimension 50. For a valid comparison, all parameters are taken to be the same as in Ref. [10].

Different values of the parameter PRT (0.005, 0.05, 0.1, 0.3, 0.5 and 0.9) are used for different functions. A total of 50 runs are conducted for each experimental setting. It is clear from Table 2 that for function f1 the proposed algorithm yields better results than BPSO and QPSO and results comparable to Q-QPSO for all dimensions considered. For function f2 it yields better results than BPSO, QPSO and Q-QPSO for all dimensions. For function f3, although the percentage of success obtained by the proposed algorithm is zero, its mean best value is better than those of BPSO and QPSO and comparable to that of Q-QPSO. From Table 3 it can be seen that for functions f4–f10 the proposed algorithm outperforms BPSO, QPSO and Q-QPSO for all dimensions considered. The other existing algorithms use different population sizes for different dimensions, whereas a main advantage of the proposed algorithm is that it works with a very small population size (10) for all dimensions.

5 Conclusions

This paper presents SOMAQI, a variant of SOMA incorporating the concept of quadratic interpolation. The proposed algorithm is tested on ten standard unconstrained benchmark problems and the results are compared with BPSO, QPSO and Q-QPSO. All results are obtained with a population size of only 10 for all dimensions. On the basis of the results and discussion, it is concluded that the proposed algorithm outperforms BPSO, QPSO and Q-QPSO in terms of mean best value and required population size.