1 Introduction

Nature has always been a good teacher, and by taking inspiration from it, humans have developed unique algorithms commonly known as nature-inspired algorithms [13]. Population-based meta-heuristics are one of their dominant classes: algorithms in which a bevy of individuals amends its locations based on individual intellect and unified social behavior fall under this category [8]. Artificial bee colony (ABC) [2], particle swarm optimization (PSO) [7], and the gravitational search algorithm (GSA) [10] are a few population-based meta-heuristics. Spider monkey optimization (SMO), a recent addition to this arena, lessens flaws of older meta-heuristics such as stagnation in their basic design. SMO, developed by J. C. Bansal et al. [3], is inspired by the intelligent food-foraging behavior of spider monkeys, which portrays the concept of a fission-fusion society. Despite being an efficient algorithm, it has some flaws too, like getting stuck in local optima [9].

This paper exhibits a newly developed variant of SMO, namely the spider monkey optimization algorithm based on the Metropolis principle. This variant is evolved to strengthen the exploration capability of all spider monkeys, guiding them to the optimal solution while maintaining their convergence speed.

Further, the structure of the paper is as follows: Sect. 2 presents an overview of the SMO algorithm, Sect. 3 describes SMO based on the Metropolis principle, Sect. 4 depicts the performance evaluation, and Sect. 5 concludes the paper.

2 Spider Monkey Optimization

Stimulated by the intellectual conduct of spider monkeys, researchers developed the spider monkey optimization algorithm, which portrays a perfect fission-fusion structure. Besides initialization, it has six phases, interpreted below; a brief account can be read in [3].

2.1 Local Leader Phase (LLP)

This phase presents the location amendment process of each spider monkey (SM), which depends on the SM's persistence and social influence. The social influence comes from the experience of the local leader and the local group members. After amendment, a greedy approach is applied, by which the more prominent location is chosen. The location amendment process is given in Eq. 1:

$$\begin{aligned} SMnew_{ij}=SM_{ij}+r_1\times (LL_{kj}-SM_{ij})+r_2\times (SM_{rj}-SM_{ij}) \end{aligned}$$
(1)

where \(SM_{ij}\) is the persistence of the \(i^{th}\) SM in the \(j^{th}\) dimension, \(LL_{kj}\) represents the local leader of the \(k^{th}\) group, and \(SM_{rj}\) is the \(r^{th}\) randomly selected SM. \(r_1\) is a random number in (0, 1) and \(r_2\) varies in the range (−1, 1).
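As a minimal sketch of the Eq. 1 update (the paper's experiments were implemented in C++; Python is used here for brevity, and all names are illustrative):

```python
import random

def llp_update(sm, ll, sm_r, r1=None, r2=None):
    """Eq. 1: move a monkey toward its local leader (ll) and relative to a
    randomly selected group member (sm_r); positions are lists of floats."""
    new = []
    for j in range(len(sm)):
        a = random.uniform(0, 1) if r1 is None else r1   # r1 in (0, 1)
        b = random.uniform(-1, 1) if r2 is None else r2  # r2 in (-1, 1)
        new.append(sm[j] + a * (ll[j] - sm[j]) + b * (sm_r[j] - sm[j]))
    return new
```

With r1 = 1 and r2 = 0 the monkey lands exactly on its local leader, which is a quick sanity check of the formula.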

2.2 Global Leader Phase (GLP)

Taking inspiration from the global leader and being influenced by its neighbours, each SM updates its position. In this phase, the location is updated on the basis of fitness: an SM with high fitness gets more chances to update itself than a less fit SM. The location amendment process is given in Eq. 2:

$$\begin{aligned} SMnew_{ij}=SM_{ij}+r_1\times (GL_{j}-SM_{ij})+r_2\times (SM_{rj}-SM_{ij}) \end{aligned}$$
(2)

Here, \(GL_j\) is the location of the global leader of the bevy. After amendment, greedy selection is applied to the individuals, i.e., if the fitness of the monkey at its new location is higher, the new location is selected; otherwise the old one is kept.
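A corresponding sketch of the Eq. 2 update (Python for brevity, names illustrative; the convention that only one randomly chosen dimension is modified per GLP update follows the original SMO formulation and is an assumption here):

```python
import random

def glp_update(sm, gl, sm_r, j=None, r1=None, r2=None):
    """Eq. 2: move a monkey toward the global leader (gl) in one randomly
    chosen dimension j, perturbed by a random group member (sm_r)."""
    new = list(sm)
    if j is None:
        j = random.randrange(len(sm))                # pick one dimension
    a = random.uniform(0, 1) if r1 is None else r1   # r1 in (0, 1)
    b = random.uniform(-1, 1) if r2 is None else r2  # r2 in (-1, 1)
    new[j] = sm[j] + a * (gl[j] - sm[j]) + b * (sm_r[j] - sm[j])
    return new
```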

figure a (algorithm listing)

2.3 Global Leader Learning Phase (GLLP)

In this phase the global leader of the whole bevy learns: the monkey with the highest fitness gets elected as the global leader. If the global leader amends its location, the global limit count is set to 0; otherwise it is incremented by 1.

2.4 Local Leader Learning Phase (LLLP)

Every sub-group has its leader, elected in this phase as the monkey with the highest fitness in that group. If a local leader does not amend itself, a counter named the local limit count is increased by 1.

2.5 Local Leader Decision Phase (LLDP)

If the local leader of any bevy does not get relocated within a predefined count known as the Local Leader Limit (LLL), then all the monkeys of that group amend their locations either by random initialization or by using the global leader's wisdom through pr, i.e., the perturbation rate, as given in Eq. 3:

$$\begin{aligned} SMnew_{ij}=SM_{ij}+r_1 \times (GL_{j}-SM_{ij})+ r_1\times (SM_{ij}-LL_{kj}) \end{aligned}$$
(3)
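A hedged sketch of this decision phase (the branch on pr follows common descriptions of SMO and should be treated as an assumption; names and the search range are illustrative):

```python
import random

def lldp_update(sm, gl, ll, pr, lo, hi):
    """Eq. 3 with perturbation rate pr: each dimension is either pulled
    toward the global leader (gl) and away from the local leader (ll),
    or re-initialized at random in the search range [lo, hi]."""
    new = []
    for j in range(len(sm)):
        if random.uniform(0, 1) >= pr:
            r1 = random.uniform(0, 1)
            new.append(sm[j] + r1 * (gl[j] - sm[j]) + r1 * (sm[j] - ll[j]))
        else:
            new.append(random.uniform(lo, hi))
    return new
```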

2.6 Global Leader Decision Phase (GLDP)

If the global leader does not get relocated within a predefined count known as the Global Leader Limit (GLL), then the global leader splits the bevy into smaller subgroups or fuses the subgroups into one unit group.

3 Spider Monkey Optimization Algorithm Based on Metropolis Principle

SMO has a major flaw of getting stuck in local optima, due to which it may miss the global optimum. In the global leader phase, there is a possibility that the global leader gets stuck or does not explore the search space properly. To enhance the exploration characteristics of the algorithm, the spider monkey optimization algorithm based on the Metropolis principle is proposed.

3.1 Modified Global Leader Phase

In the global leader phase, the location amendment process of an SM is given in Eq. 2. From this equation a new location of the SM is evaluated, so we have one old position, represented by \(SM_{ij}\), and a new location, \(SMnew_{ij}\). In the basic version of SMO a greedy approach is used. The greedy approach has a dominant flaw: a new solution is elected only if its strength is greater, whereas a non-prominent solution may also cover the possibility of reaching the global optimum.

To eradicate this flaw, a new modification is intended, based on the Metropolis principle taken from simulated annealing [5, 6], which simulates the cooling behavior of material. By this principle, non-prominent solutions are also accepted with a probability, evaluated as shown in Eq. 5.

$$\begin{aligned} \varDelta C= & {} C_{new} - C_{old} \end{aligned}$$
(4)
$$\begin{aligned} P(\varDelta C)= & {} \exp (-\varDelta C/T) \end{aligned}$$
(5)

In Eq. 4, \(C_{new}\) and \(C_{old}\) are the costs of the new and old solutions respectively, and their difference is saved in \(\varDelta C\). T represents the temperature used to evaluate the probability exponentially. From Eqs. 4 and 5, even if a newly generated solution has higher cost, it may still be confirmed with probability \(P(\varDelta C)\). This principle is the backbone of simulated annealing [4]; by it the chance of stagnation is lower, because the algorithm has the power to accept non-prominent solutions with a probability.
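The acceptance rule of Eqs. 4 and 5 can be sketched as follows (Python for brevity; the random draw r is passed in explicitly to keep the function deterministic, and minimization of cost is assumed):

```python
import math

def metropolis_accept(c_new, c_old, t, r):
    """Eqs. 4-5: always accept an improvement; accept a worse (higher-cost)
    solution only with probability exp(-dC / T)."""
    dc = c_new - c_old            # Eq. 4
    if dc <= 0:                   # lower cost: accept unconditionally
        return True
    return r < math.exp(-dc / t)  # Eq. 5
```

For T = 20, a cost increase of 1 is accepted with probability exp(-0.05) ≈ 0.95, so mild deteriorations pass through easily while large ones rarely do.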

Now, the selection procedure of the amended global leader phase is shown as:

figure b (selection procedure of the modified global leader phase)

Here, r is a random number in (0, 1), and T is the temperature, fixed at 20. In the above selection there are two conditions for electing the newly generated location of an SM. First, if the SM attains higher strength at the altered location, it is elected. Second, if the random number r is less than the acceptance probability \(P(\varDelta C)\), the non-prominent solution is also elected. Through this modification, solutions giving high strength are chosen, but non-prominent ones are elected too with some probability, which overcomes the flaw of the global leader phase and betters its global search capability. Exploration ability is thereby upgraded, premature convergence is avoided while convergence speed is maintained, and there are better possibilities of reaching the global optimum.
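Putting the two conditions together, the selection step of the modified global leader phase can be sketched as (illustrative Python; the cost function and the minimization setting are assumptions):

```python
import math
import random

def modified_glp_select(sm_old, sm_new, cost, t=20.0):
    """Keep the new location if it is better (greedy step); otherwise
    accept it anyway with probability exp(-dC / T), T fixed at 20."""
    dc = cost(sm_new) - cost(sm_old)
    if dc <= 0:                                   # prominent solution
        return sm_new
    if random.uniform(0, 1) < math.exp(-dc / t):  # non-prominent, accepted
        return sm_new
    return sm_old
```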

4 Experimental Outcomes

4.1 Considered Test Problems

The proposed algorithm SMOM is tested over 12 benchmark functions to examine its performance against other established algorithms. These 12 benchmark functions are taken from the reference papers of the compared algorithms and are depicted in Table 1.

Table 1. Test problems

4.2 Experimental Setting

To substantiate that SMOM is a competent member in the arena of population-based meta-heuristics, a comparative examination is performed among SMOM, SMO [3], PSO [7] and SaSMO [11]. The following experimental setup is used:

  • Population of spider monkeys (N) = 50;

  • Maximum number of groups = 5;

  • LLL = 1500;

  • GLL = 50;

  • Settings of SMO, PSO, and SaSMO are taken from their original papers [3, 7, 11].

4.3 Results

Table 2 unfolds the attained outcomes of all the compared algorithms, SMO, PSO, SaSMO and SMOM, based on the above parameter settings. All algorithms are tested over 100 runs, implemented in C++. Results are shown in the form of standard deviation (SD), mean error (ME), average function evaluations (AFE) and success rate (SR).

Table 2. Comparison of outcomes of test problems

The results in Table 2 exhibit that SMOM is a better variant than SMO, PSO, and SaSMO regarding reliability and accuracy, at the cost of function evaluations in some benchmarks, as it gives a remarkable success rate. In addition to these results, a box-plot analysis of the compared algorithms in terms of success rate is presented. The box-plot analysis [12] of SMO, PSO, SaSMO, and SMOM is shown in Fig. 1, representing the empirical distribution of the data graphically. Figure 1 shows that the variation, interquartile range and median of the developed SMOM are higher than those of the other three. After this, a comparison is made using the performance index (PI) graph [1] based on ME, SR, and AFE. The computed values of PI for SMO, SaSMO, PSO and SMOM are portrayed in Fig. 2.

Fig. 1. Boxplot graph for success rate

Fig. 2. Performance index for test problems: (a) case (1), (b) case (2) and (c) case (3)

Figure 2(a), (b) and (c) show the performance indices for success rate, average number of function evaluations and mean error, respectively. Figure 2 indicates that the PI of SMOM is notable as compared to the other variants.

5 Conclusion

To eradicate the pitfalls of SMO, a Metropolis step is applied to improve the exploration capability of SMO and avoid stagnation in the population. This paper presents a modified version of SMO that is more reliable and accurate, namely Metropolis-operator-based spider monkey optimization. This modification helps the global leader achieve an optimal solution by accepting a non-prominent solution with some probability using the Metropolis step. To test the intensity of SMOM, it is examined over 12 benchmark functions, and the results show that it is a sound variant. Through statistical analysis, it is demonstrated that the proposed strategy is more reliable (better success rate) at the cost of function evaluations. In future, it can be applied to real-world optimization problems and complex optimization problems of a continuous nature.