
1 Introduction

A variety of computational techniques for solving nonlinear optimization problems have appeared in the literature. However, no single technique can claim to solve every nonlinear optimization problem efficiently; a technique that is efficient for one type of problem may be inefficient for other types. Moreover, the computational time and memory space requirements of most of these techniques are quite large. Keeping this in view, it is desirable to develop more efficient and reliable computational techniques, with modest memory requirements, for solving the large variety of nonlinear real-life optimization problems of practical interest, so that they can be implemented easily on personal computers. Some general requirements of a computational algorithm for solving global optimization problems are: (i) wide applicability, i.e., it should be applicable to a wide class of real-life problems; (ii) simplicity of structure, so that it can be implemented easily on a computer system; (iii) minimal mathematical complexity, so that it can be used conveniently even by non-expert users.

Optimization problems arise in many fields, such as systems engineering, telecommunications and manufacturing. Newly developed optimization techniques are now used extensively in various spheres of human activity where decisions must be taken in complex situations that can be represented by mathematical models. A real-life optimization problem may have a number of local as well as global optimal solutions, and most users want optimization techniques that determine the global optimal solution rather than a local one. With the advent of computers, population based heuristics are becoming increasingly popular, not only because of their ease of implementation but also because of their wide applicability, and population based stochastic search methods have frequently been used in the literature to solve real-life global optimization problems. Although these probabilistic search algorithms give no absolute guarantee of finding the global optimum solution, they are preferred over traditional methods. Evolutionary Algorithms, particularly Genetic Algorithms (GAs), are the most commonly used heuristics for solving real-life problems. Though GAs are effective at solving global optimization problems, they usually converge slowly and generally require a large population size. Many attempts have been made in the literature to improve the efficiency of GAs, either by designing new operators or by incorporating the features of other techniques. Grefenstette [2] introduced a hybrid variant of GA that uses a traditional hill-climbing routine to improve the fitness of newly generated points, in the Baldwinian and Lamarckian evolution approach; the new offspring then compete with their parents for selection into the next population.
Kasprzyk and Jasku [3] developed a variant of GA that hybridizes GA with the simplex method, known as the genetic-simplex algorithm: the solution area is first explored by the GA operators, and the simplex method then uses starting points provided by the GA to determine an optimum solution. Chelouah and Siarry [4] also proposed a hybrid method combining continuous Tabu search with the Nelder-Mead simplex algorithm, used to find the global minima of nonlinear optimization problems. Wang et al. [5] applied quantum computing to GAs to develop a class of quantum-inspired GAs.

Javadi et al. [6] presented a neural-network based genetic algorithm that uses a neural network to improve the solution quality and convergence speed of GAs. Fan et al. [7] integrated the Nelder-Mead simplex search method with genetic algorithms and particle swarm optimization in an attempt to locate the global optimal solutions of nonlinear continuous-variable functions, focusing mainly on response surface methodology; comparative performance on ten test problems is demonstrated. Hwang and Song [8] presented a novel adaptive real-parameter simulated annealing genetic algorithm that retains the merits of GAs and simulated annealing. Zhang and Lu [9] defined a new real-valued mutation operator and used it to design a hybrid real-coded GA with a quasi-simplex technique. A niche hybrid genetic algorithm was proposed by Wei and Zhao [10], with results reported on three benchmark functions. Premalatha and Nataranjan [11] established a hybrid PSO that modifies PSO using GA to solve optimization problems. Khosravi et al. [12] proposed a novel hybrid algorithm that uses the abilities of evolutionary and conventional algorithms simultaneously. Ghatei et al. [13] designed a new variant of particle swarm optimization that includes the Great Deluge Algorithm (GDA) as a local search factor. Esmin and Matwin [14] presented a hybrid approach combining the features of PSO and GA, known as the HPSOM algorithm; its main idea is to integrate PSO with the genetic algorithm mutation method.

It is clear from the above that several variants of GA are available in the literature to improve the efficiency of these algorithms. Deep and Dipti [1] presented a variant of GA named SOMGA, in which GA is hybridized with a newly emerging population based technique, the Self Organizing Migrating Algorithm (SOMA). SOMA, developed by Zelinka and Lampinen [15], is an emergent search technique in the field of population based methods. The algorithm is based on the self-organizing behavior of a group of individuals looking for food: all individuals follow the path of one individual, known as the leader, selected from among them on the basis of best fitness value. No new solutions are created during the search; instead, only the positions of existing solutions are changed during a generation, called a migration loop, and the algorithm works with a small population size. Details of this algorithm can be found in many research papers and books: Oplatkova and Zelinka [16], Zelinka [17], Nolle and Zelinka [18], Nolle et al. [19], Nolle [20], Zelinka et al. [21, 22], Onwubolu and Babu [23], etc. The common feature of GA and SOMA is that both are population based stochastic search heuristics, and both perform mutation and crossover, though the way these operators are applied differs. Some differences between the two algorithms are as follows:

  • In GA new points are generated, whereas in SOMA no new points are generated, but instead their positions are updated.

  • Individuals can proceed in any direction in GA, whereas in SOMA individuals proceed only in the direction of the leader.

  • GA has a competitive behavior, but SOMA has competitive-cooperative behavior.

  • GA works with a large population size, whereas SOMA works with a small population size.

2 Previous Work Done

The algorithm SOMGA has been designed to solve unconstrained nonlinear optimization problems of the type:

$$ \left. \begin{aligned} & {\text{Min}}\;f\left( X \right),X = \left( {x_{1} ,x_{2} , \ldots x_{n} } \right) \\ & \quad a_{i} \le x_{i} \le b_{i} ,i = 1,2, \ldots n. \\ \end{aligned} \right\} $$
(1)

where ai and bi are the lower and upper bounds on the variables.

As discussed above, several variants of population based techniques are available in the literature for improving the convergence of these algorithms. The main cause of slow and premature convergence in such algorithms is the diversity mechanism: if an algorithm fails to maintain diversity during the search, it is more likely to converge prematurely. SOMGA is an effort to improve the efficiency of both GA and SOMA by extracting the best features of each. In the hybridization, a binary-coded GA and a real-coded SOMA are used: SOMGA derives selection, crossover and mutation from the binary-coded GA, and small population size, organization and migration from the real-coded SOMA. The features of GA and SOMA are combined in such a way that the solution search space can be thoroughly exploited and the diversity of the search domain preserved by generating new points in the solution space. The methodology of this algorithm is:

  • Methodology

First, the individuals are generated randomly. These individuals compete with each other through tournament selection, and create new individuals via single-point crossover and bitwise mutation. The best individual among them is then considered the leader and the worst individual the active individual. The active individual proceeds in the direction of the leader in steps of a defined length. This path is perturbed randomly by a parameter known as the PRT parameter, defined in the range \( \left\langle {0,1} \right\rangle . \) Using the PRT parameter, a PRT vector is created before an individual proceeds towards the leader; this parameter has the same effect as mutation in GA. The movement of an individual is given as follows:

$$ x_{i,j}^{MLnew} = x_{i,j,start}^{ML} + \left( {x_{L,j}^{ML} - x_{i,j,start}^{ML} } \right) t \, PRTVector_{j} $$
(2)

where \( t \in \left\langle {0,\ \text{by Step to},\ PathLength} \right\rangle , \)

\( ML \) :

is the actual migration loop.

\( x_{i,j}^{MLnew} \) :

is the new position of an individual.

\( x_{i,j,start}^{ML} \; \) :

is the position of the active individual.

\( x_{L,j}^{ML} \; \) :

is the position of the leader.
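A minimal sketch of this migration move of Eq. (2) might look as follows; the function name and the parameter values are illustrative assumptions, not taken from the source:

```python
import random

def soma_migration(active, leader, prt=0.3, step=0.11, path_length=3.0):
    """Sketch of the SOMA move of Eq. (2): the active individual jumps
    towards (and possibly beyond) the leader, with each coordinate
    switched on or off by the randomly drawn PRT vector."""
    n = len(active)
    # the PRT vector is created once, before the individual starts moving
    prt_vector = [1 if random.random() < prt else 0 for _ in range(n)]
    positions, t = [], 0.0
    while t <= path_length:
        positions.append([a + (l - a) * t * v
                          for a, l, v in zip(active, leader, prt_vector)])
        t += step
    # candidate positions; the best one later replaces the active individual
    return positions
```

With `prt = 1` every coordinate moves, so at `t = 1` the individual lands exactly on the leader; `path_length > 1` lets it overshoot and search beyond the leader.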

Finally, the best individuals (equal in number to the population size) from the previous and current generations are selected for the next generation. The computational steps of this approach are given below:

Step 1::

Generate the initial population.

Step 2::

Evaluate all individuals.

Step 3::

Apply tournament selection on all individuals to select the better individuals for the next generation.

Step 4::

Apply crossover operator on all individuals with crossover probability Pc to produce new individuals.

Step 5::

Evaluate the new individuals.

Step 6::

Apply mutation operator on every bit of every individual of the population with mutation probability Pm.

Step 7::

Evaluate the mutated individuals.

Step 8::

Find leader (best fitted individual) and active (worst fitted individual) of the population.

Step 9::

For the active individual, a new population of size N is created, where N = (PathLength/Step size). This population consists of the new positions of the active individual as it moves towards the leader in N steps of the defined length. The movement of this individual is given in Eq. (2).

Step 9.1::

Select the best individual of the new population and replace the active individual with this best individual.

Step 10::

Select the best individuals (in fitness) of the previous and current generations for the next generation via tournament selection.

Step 11::

If the termination criterion is satisfied, stop; else go to Step 3.

Step 12::

Report the best chromosome as the final optimal solution.
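The steps above can be sketched in compact form as follows. This is a real-coded simplification for illustration only (the paper uses a binary-coded GA with bitwise mutation), and all parameter values are assumptions:

```python
import random

def somga(f, bounds, pop_size=10, pc=0.9, pm=0.05,
          prt=0.3, step=0.31, path_length=3.0, max_gen=200):
    """Real-coded sketch of the SOMGA steps; illustrative only."""
    n = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(max_gen):
        # Steps 3-6: tournament selection, single-point crossover, mutation
        new_pop = []
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 2), key=f)   # binary tournament
            p2 = min(random.sample(pop, 2), key=f)
            if random.random() < pc and n > 1:
                cut = random.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [random.uniform(*bounds[j]) if random.random() < pm else x
                     for j, x in enumerate(child)]   # mutation: uniform reset
            new_pop.append(child)
        # Step 8: leader = best fitted, active = worst fitted
        leader = min(new_pop, key=f)
        active = max(new_pop, key=f)
        # Step 9: move the active individual towards the leader (Eq. 2)
        best_pos, t = active, 0.0
        while t <= path_length:
            v = [1 if random.random() < prt else 0 for _ in range(n)]
            cand = [min(max(a + (l - a) * t * vj, lo), hi)
                    for a, l, vj, (lo, hi) in zip(active, leader, v, bounds)]
            if f(cand) < f(best_pos):
                best_pos = cand
            t += step
        new_pop[new_pop.index(active)] = best_pos
        # Step 10: elitist selection from previous and current generations
        pop = sorted(pop + new_pop, key=f)[:pop_size]
    return min(pop, key=f)
```

Because Step 10 is elitist, the best solution found can never worsen from one generation to the next, which is what makes the small population size workable.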

  • Salient Features of SOMGA

The salient features of the SOMGA are:

  1. SOMGA attempts to determine the global optimal solution of nonlinear unconstrained optimization problems.

  2. SOMGA does not require continuity and/or differentiability of the objective function.

  3. SOMGA works purely on function evaluations and hence can be used in situations where the objective function is discontinuous in nature.

  4. SOMGA does not require an initial guess value to start; it requires only lower and upper bounds on the unknown variables.

3 Section 1

3.1 Solution of Large Scale Problems Using SOMGA

Nowadays, solving large scale optimization problems using evolutionary algorithms has become a new field of research. In large scale problems, where the number of unknown variables can run up to 200, not only the computational time but also the memory space requirement becomes very large. Besides this, the complexity of the problem increases so significantly that many algorithms fail to reach the optimal solution, and an algorithm that does reach it may require a large population size as well as many function evaluations. The possibility of using SOMGA to solve large scale problems is considered in this section. One way to reduce both the computational time and the memory space is to reduce the population size; since SOMGA already works with a small population size, no further reduction is needed. Reduction in memory space is also not required, because the population generated for the active individual occupies memory only for a short time: once the best of the generated positions is chosen, the active individual is replaced by it and the other positions release their memory. The prospects of using SOMGA to solve large scale problems are therefore very strong; problems of up to 200 variables can be solved on a normally configured Pentium IV system.

3.2 Results and Discussion

In order to observe the performance of SOMGA on large scale problems, a set of thirteen test problems, given in the appendix, has been selected. These problems are scalable, that is, their size can be increased or decreased at the user's choice. In general, the complexity of a problem increases with its dimension. The Griewank function is an exception: its complexity is greatest at dimension 10 and decreases as the dimension increases above 10.
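For reference, the Griewank function mentioned above has the standard form below (a routine transcription of the well-known benchmark, with global minimum f(0, ..., 0) = 0):

```python
import math

def griewank(x):
    """Griewank function: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))).
    The cosine product creates many regularly spaced local minima, but its
    influence fades as the dimension grows, which is why the function is
    considered hardest at moderate dimensions such as 10."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p
```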

The parameters of SOMGA used to solve the 100 and 200 dimensional problems are the same and are given in Table 1. These parameters are the population size, crossover rate, mutation rate, string length, PRT, step size, path length and total number of function calls allowed. The main thing to notice in Table 1 is that a population size of only 20 is required for solving the 100 and 200 dimensional problems. The results for problem size 10, taken from Deep and Dipti [1], are given in Table 2.

Table 1 Parameters of SOMGA for problem size 100 and 200
Table 2 Results of large scale problems for problem size 10

First, SOMGA is applied for problem size 100. The percentage of success, average function evaluations, and the mean and standard deviation of the objective function values over 30 runs are recorded in Table 3. The same information for problem size 200 is shown in Table 4, and these values are also shown graphically in Figs. 1 and 2. It can be observed from Table 3 that SOMGA achieves 100 % success in 11 of the thirteen problems. For the Griewank problem the success rate is 40 %, and the Rosenbrock problem could not be solved at all.

Table 3 Results of large scale problems for problem size 100
Table 4 Results of large scale problems for problem size 200
Fig. 1
figure 1

Performance of SOMGA in terms of success for 100 dimension and 200 dimension problems

Fig. 2
figure 2

Performance of SOMGA in terms of function evaluations for 100 dimension and 200 dimension problems

In Table 4, the results for problem size 200 are presented. It is notable that SOMGA achieves 100 % success in 10 of the thirteen problems. For the Griewank problem, the success rate is 60 % for problem size 200, whereas for problem size 100 it is 40 %; the success rate thus increases with dimension, confirming that the complexity of the Griewank problem decreases as the dimension increases. SOMGA again fails on the Rosenbrock problem. The Schwefel problem gives 100 % success for problem size 100, but could not be solved at all for problem size 200, which shows that the complexity of the Schwefel problem increases as the dimension increases.

In Fig. 1, the success rates obtained by SOMGA on the 10, 100 and 200 dimensional problems are plotted. The complexity of the Griewank function decreases as the dimension of the problem increases, and the same behavior can be seen for problem number 5 in Fig. 2. In Fig. 2, the function evaluations required by SOMGA on the 10, 100 and 200 dimensional problems are plotted. The behavior is as expected: function evaluations increase with the dimension of the problem. For the Griewank problem, the function evaluations in 100 and 200 dimensions are almost the same.

4 Section 2

Solution of Two Real Life Problems

To show the efficiency of the SOMGA algorithm, it has been tested on two real life problems from the field of engineering. The problems and their solutions are described as follows.

4.1 Optimal Thermohydraulic Performance of an Artificially Roughened Air Heater

This problem is taken from Prasad and Saini [24], and concerns the optimal thermohydraulic performance of an artificially roughened solar air heater. The roughness and flow parameters (p/e, e/D, Re) are optimized so as to maximize the heat transfer while keeping the friction losses minimal. It is an unconstrained optimization problem with three decision variables. The mathematical model, as given in Prasad and Saini [24], is:

$$ {\text{Maximize}}\; L = 2.5\log e^{+} + 5.5 - 0.1\,R_{M} - G_{H} $$

where

$$ \begin{aligned} R_{M} & = 0.95\,x_{2}^{0.53} \\ G_{H} & = 4.5\left( e^{+} \right)^{0.28} \left( 0.7 \right)^{0.57} \\ e^{+} & = x_{1} x_{3} \left( \bar{f}/2 \right)^{1/2} \\ \bar{f} & = \left( f_{s} + f_{r} \right)/2 \\ f_{s} & = 0.079\,x_{3}^{-0.25} \\ f_{r} & = 2\left[ 0.95\,x_{2}^{0.53} + 2.5\log \left( 1/2x_{1} \right) - 3.75 \right]^{-2} \\ \end{aligned} $$

The notations used are as follows:

e+ :

roughness Reynolds number.

p:

pitch of the roughness element.

D:

hydraulic diameter of the solar air heater.

x1 = e/D:

relative roughness height.

x2 = p/e:

relative roughness pitch.

x3 = Re :

Reynolds number.

The bounds on the variables are:

$$ 0.02 \le x_{1} \le 0.8, \quad 10 \le x_{2} \le 40, \quad 3000 \le x_{3} \le 20{,}000 $$

This unconstrained problem has been solved by SOMGA. It was earlier solved by Prasad and Saini [24] and by Pant [25]. The results obtained by SOMGA are compared with the available results in Table 5; it is clear from the table that the solution obtained by SOMGA is better than the previously quoted results.

Table 5 Optimal thermo hydraulic performance of an artificially roughened air heater
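A direct transcription of the model above into code might read as follows. Two assumptions are made explicit: the natural logarithm is taken for "log" (the log-law form), and the friction-factor term is written with x2 (the relative roughness pitch), consistent with the definition of R_M; the function name is illustrative:

```python
import math

def heater_L(x1, x2, x3):
    """Thermohydraulic performance L of the roughened solar air heater,
    coefficients as given in the text (natural log assumed)."""
    f_s = 0.079 * x3 ** -0.25                  # smooth-surface friction factor
    f_r = 2.0 / (0.95 * x2 ** 0.53
                 + 2.5 * math.log(1.0 / (2.0 * x1))
                 - 3.75) ** 2                  # rough-surface friction factor
    f_bar = (f_s + f_r) / 2.0                  # average friction factor
    e_plus = x1 * x3 * math.sqrt(f_bar / 2.0)  # roughness Reynolds number
    R_M = 0.95 * x2 ** 0.53                    # momentum transfer function
    G_H = 4.5 * e_plus ** 0.28 * 0.7 ** 0.57   # heat transfer function
    return 2.5 * math.log(e_plus) + 5.5 - 0.1 * R_M - G_H
```

Maximizing `heater_L` over the bounds above is then a three-variable unconstrained problem of the form (1).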

4.2 Frequency Modulation Sounds Parameter Identification Problem

This problem has been taken from Tsutsui and Fujimoto [26]. It is an unconstrained optimization problem: determine the six parameters \( a_{1} ,\;\omega_{1} ,\;a_{2} ,\;\omega_{2} ,\;a_{3} ,\;\omega_{3} \) of the frequency modulation sound model represented by

$$ y\left( t \right) = a_{1} \cdot \sin \left( {\omega_{1} \cdot t \cdot \theta + a_{2} \cdot \sin \left( {\omega_{2} \cdot t \cdot \theta + a_{3} \cdot \sin \left( {\omega_{3} \cdot t \cdot \theta } \right)} \right)} \right), $$

with \( \theta = \frac{2 \cdot \pi }{100} \). An evaluation function Pfms is defined as the sum of 101 squared errors between the evolved data and the model data as follows:

$$ P_{fms} \left( {a_{1} ,\omega_{1} ,a_{2} ,\omega_{2} ,a_{3} ,\omega_{3} } \right) = \sum\limits_{t = 0}^{100} {\left( {y\left( t \right) - y_{0} \left( t \right)} \right)}^{2} , $$

where the model data are given by the following equation:

$$ y_{0} \left( t \right) = 1.0 \cdot \sin \left( {5.0 \cdot t \cdot \theta - 1.5 \cdot \sin \left( {4.8 \cdot t \cdot \theta + 2.0 \cdot \sin \left( {4.9 \cdot t \cdot \theta } \right)} \right)} \right). $$

Each parameter lies in the range −6.40 to 6.35. The generated sound wave and its evaluation function Pfms are extremely sensitive to some of the parameters a1, ω1, a2, ω2, a3, ω3, which makes it difficult to reach the optimal point. The problem is a highly complex multimodal one with strong epistasis, and has minimum value Pfms(x*) = 0. Empirical results are shown in Table 6. Tsutsui and Fujimoto [26] used this problem to test their Phenotypic Forking Genetic Algorithm (p-fGA), which takes 22,621 generations on average to solve it. SOMGA, by contrast, solves this problem in only 156 generations, or 7,207 function evaluations.

Table 6 Solution of Frequency modulation sounds parameter identification problem
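The evaluation function above translates directly into code; this is a routine transcription of the model (function names are illustrative):

```python
import math

THETA = 2.0 * math.pi / 100.0

def fm_wave(t, a1, w1, a2, w2, a3, w3):
    """Nested frequency-modulated sound wave y(t)."""
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

# parameters of the model data y0(t); a2 = -1.5 absorbs the minus sign in y0
TARGET = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9)

def p_fms(params):
    """Sum of 101 squared errors between the evolved and model waves."""
    return sum((fm_wave(t, *params) - fm_wave(t, *TARGET)) ** 2
               for t in range(101))
```

By construction, `p_fms(TARGET)` is exactly zero, which is the known global minimum of the problem.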

5 Section 3

5.1 Comparison with the Memetic Algorithm

Generally, two kinds of local search methods can be used within GAs to improve their efficiency: deterministic methods, and other population based stochastic search strategies used as local searches. Although deterministic methods have weaker exploration capabilities, they converge very fast within a small neighborhood. In this section a comparison is made between the two approaches to hybridization. One approach is SOMGA, described in Sect. 2, in which GA is hybridized with another population based technique, SOMA. The other approach hybridizes GA with a deterministic local search method; it is based on the idea of first letting the population evolve using the GA and then, when the search domain has narrowed and convergence has slowed after a certain number of generations, applying a local search to selected individuals to speed up their convergence. The methodology of this approach is as follows:

Step 1::

Generate the initial population.

Step 2::

Evaluate all individuals.

Step 3::

Apply selection operator on all individuals to select the better individuals for the next generation.

Step 4::

Apply crossover operator on all individuals with crossover probability Pc to produce new individuals.

Step 5::

Evaluate the new individuals.

Step 6::

Apply mutation operator on every bit of every individual of the population with mutation probability Pm.

Step 7::

Evaluate the mutated individuals.

Step 8::

If generation is less than the specified generations go to Step 3 else go to Step 9.

Step 9::

Select the best q % of individuals and apply the local search (LS).

Step 10::

If the termination criterion is satisfied, stop; else go to Step 3.

The algorithm terminates as soon as the best solution is within 1 % accuracy of the known global optimum, or when the allowed number of function evaluations is exceeded. Any derivative-free local search method can be used in this approach; here the well-known Hooke and Jeeves direct search method, described in Bazaraa et al. [27], is used. It is a multidimensional search method with discrete steps that works without derivatives, so it can handle a wide range of problems, and it is quick and robust at finding a local optimal solution. The approach is tested on five well-known benchmark test problems taken from Ali et al. [28], namely the Ackley, Schwefel, Griewank, Rastrigin and Rosenbrock functions; these are among the problems most commonly used for evaluating the performance of evolutionary algorithms. The parameters used in this approach, namely the population size, probability of crossover (Pc), probability of mutation (Pm), the specified number of generations (fine-tuned in our experiments) after which the LS is activated, and the total number of function calls allowed, are given in Table 7, and the results obtained by this approach are presented in Table 8.

Table 7 Parameters of MA
Table 8 Comparative results of SOMGA and MA
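A minimal sketch of the Hooke and Jeeves pattern search used as the LS might look as follows; the step size, shrink factor and tolerance are illustrative choices, not taken from the source:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free Hooke and Jeeves pattern search (minimization)."""
    def explore(base, s):
        # trial steps of +/- s along each coordinate, kept when they improve f
        pt = list(base)
        for i in range(len(pt)):
            for delta in (s, -s):
                trial = pt[:i] + [pt[i] + delta] + pt[i + 1:]
                if f(trial) < f(pt):
                    pt = trial
                    break
        return pt

    x, s = list(x0), step
    for _ in range(max_iter):
        if s <= tol:
            break
        nx = explore(x, s)
        if f(nx) < f(x):
            # pattern move: extrapolate along the improving direction, then re-explore
            px = explore([2 * b - a for a, b in zip(x, nx)], s)
            x = px if f(px) < f(nx) else nx
        else:
            s *= shrink  # no improvement: reduce the step length
    return x
```

The discrete steps and geometric step reduction are what make the method fast in a small neighborhood but weak at exploration, which is exactly the trade-off discussed above.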

5.2 Results and Discussion

For this approach, it is suggested that for solving an n dimensional problem a population size of 5n should be taken and the LS applied after 10n generations, with a minimum population size of 100 for problems of 10 or more dimensions. All results have been obtained using these specifications. Figure 3 shows the success attained by MA and SOMGA: the greater the portion of a column covered by an algorithm, the better that method is on that problem. It is clear from the graph that the portions covered by MA and SOMGA are almost equal, except on one problem. The main difference between the two is that MA achieves its success with a population size of 100 on a 10 variable problem, whereas SOMGA achieves the same success with a population size of 10. Hence, on the basis of population size, SOMGA is better than MA.

Fig. 3
figure 3

Graph showing success attained by MA and SOMGA. Greater portion \( \Rightarrow \) better method

Figure 4 shows the function evaluations taken by MA and SOMGA in successful runs. In this graph, the smaller the portion of a column covered by an algorithm, the better that method is on that problem. It is evident from the graph that the portion covered by SOMGA is much smaller than that of MA, meaning that SOMGA requires fewer function evaluations than MA to obtain the global optimal solution of unconstrained optimization problems. Hence SOMGA is more efficient than MA.

Fig. 4
figure 4

Graph showing the function evaluations taken by MA and SOMGA. Lesser portion \( \Rightarrow \) better method

On the basis of these results, it is concluded that SOMGA is far superior to MA in terms of function evaluations, and SOMGA is therefore recommended for solving unconstrained nonlinear optimization problems.

6 Conclusions

In this chapter a hybrid of GA and SOMA, known as SOMGA, has been used to solve large scale optimization problems. From the discussion of the large scale results it is clear that for solving problems of up to 200 dimensions, SOMGA requires a very small population size as well as few function evaluations. Two real life problems from the field of engineering were then solved using the same technique, and SOMGA proved its efficiency on both. Finally, a comparison was made between two variants of GA, SOMGA and MA. Though MA is able to solve all the test problems with a good success rate, it is expensive in terms of function evaluations. One drawback of deterministic local search methods is that they are usually problem specific and hence restricted to certain classes of problems. Another drawback is that, if the LS is not implemented properly, the chances of getting trapped in a local minimum increase, because these methods have weaker exploration qualities; such hybridized algorithms therefore demand careful fine tuning of the local search parameters. Population based stochastic search techniques, on the other hand, are not problem specific and are applicable to a wide range of problems. Moreover, they use multiple guesses to improve a solution and hence have stronger exploration qualities.