
1 Introduction

Optimization aims to find, among all candidate solutions satisfying the stated constraints, the one that best meets the objective. Over the past years, nature-inspired algorithms (NIA) have been proposed to solve real-world NP-hard and NP-complete optimization problems [7]. A nature-inspired algorithm is a stochastic approach in which individuals interact with their neighbours to solve complex problems in an efficient manner. NIA are mainly classified into evolutionary algorithms and swarm-based algorithms. Evolutionary algorithms are computational paradigms motivated by Darwinian evolution [9]. Swarm intelligence tackles optimization problems by exploiting the collective behaviour of self-organizing creatures such as bees, ants and monkeys, whose food-foraging abilities and social characteristics have been examined and simulated [5, 6, 8]. Spider Monkey Optimization (SMO) is a swarm intelligence algorithm proposed by Jagdish Chand Bansal et al. in 2014 [4]. SMO is a food-foraging based algorithm modelled on the nature and social structure of spider monkeys, which follow a fission-fusion social system. Several studies have found that the SMO algorithm performs well in both exploration and exploitation, but there is still room for improvement.

To improve the convergence speed, a variant of SMO named the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO) is proposed. In the proposed modification, an acceleration-coefficient based strategy is incorporated into the basic version of SMO.

The rest of the paper is structured as follows: Sect. 2 describes SMO. The Fast Convergent Spider Monkey Optimization Algorithm (FCSMO) is proposed in Sect. 3. In Sect. 4, the performance of FCSMO is tested on several benchmark functions. Finally, Sect. 5 summarizes and concludes the work.

2 Overview of Spider Monkey Optimization (SMO) Technique

Spider Monkey Optimization (SMO), proposed by J. C. Bansal et al. [4], is a distinct class of NIA inspired by the foraging behaviour of spider monkeys, which follow a fission-fusion social system. A group of typically 40 to 50 individuals, usually led by a female, splits into smaller subgroups to search for food, each subgroup again headed by a female. If a subgroup leader fails to meet the objective (finding food), the subgroup divides further, again under a female leader, and the process is repeated until food is found. To update their positions, the individuals repeatedly explore the wide search space and select the best feasible solutions found [10].

2.1 Steps of SMO Technique

The SMO technique is a population-based iterative method. It consists of seven steps, each of which is described in detail below:

  1.

    Initialization of Population: Initially, the population consists of N spider monkeys, each represented by a D-dimensional vector \(M_{i}\), where i = 1, 2, ..., N denotes the \(i^{th}\) spider monkey. Each spider monkey (M) represents a potential solution of the problem under consideration. Each \(M_{i}\) is initialized as follows:

    $$\begin{aligned} M_{ij}=M_{min j}+R(0,1)\times (M_{max j}-M_{min j}) \end{aligned}$$
    (1)

    Here \(M_{min j}\) and \(M_{max j}\) are the lower and upper bounds of \(M_{i}\) in the \(j^{th}\) dimension, and R(0,1) is a uniformly distributed random number in (0,1).

  2.

    Local Leader Phase (LLP): In this phase, based on the experience of the local leader and the group members, M updates its current position, which yields a fitness value. If the fitness of the new location is higher than that of the old location, M replaces its position with the new one. Hence the \(i^{th}\) M of the \(k^{th}\) local group updates its position as follows:

    $$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (LL_{k j}-M_{i j})+R(-1,1)\times (M_{r j}-M_{i j}) \end{aligned}$$
    (2)

    Here \(M_{i j}\) is the \(i^{th}\) M in the \(j^{th}\) dimension, \(LL_{k j}\) is the location of the \(k^{th}\) local group leader in the \(j^{th}\) dimension, and \(M_{r j}\) is the \(r^{th}\) M in the \(j^{th}\) dimension, picked at random from the \(k^{th}\) group such that \(r\ne i\).

  3.

    Global Leader Phase (GLP): This phase starts immediately after the LLP is completed. Based on the experience of the global leader and the members of the local group, M updates its location. The position update equation for the GLP phase is as follows:

    $$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+R(-1,1)\times (M_{r j}-M_{i j}) \end{aligned}$$
    (3)

    Here \(GL_{j}\) stands for the global leader's location in the \(j^{th}\) dimension, and \(j \in \{1,2,\ldots,D\}\) is a randomly chosen index. Each \(M_{i}\) updates its position with probability \(Pr_{i}\). The probability of a particular solution is computed from its fitness; one such method is

    $$\begin{aligned} Pr_{i}=0.1+(\frac{fitness_{i}}{fitness_{max}})\times 0.9 \end{aligned}$$
    (4)
  4.

    Global Leader Learning (GLL) Phase: Here a greedy selection strategy is applied to the population to update the location of the global leader, i.e. the location of the M with the best fitness in the whole population is chosen as the new global leader location. It is also checked whether the global leader location has changed; if not, the GlobalLimitCount (GLC) is incremented by 1.

  5.

    Local Leader Learning (LLL) Phase: Here the local leader location is updated by applying greedy selection within the group, i.e. the location of the M with the best fitness in the group is chosen as the new local leader location. The updated local leader location is then compared with the old one and, if it has not changed, the LocalLimitCount (LLC) is incremented by 1.

  6.

    Local Leader Decision (LLD) Phase: If a local leader's location has not been updated for a predefined number of iterations, called LocalLeaderLimit, then the group members update their positions in one of two ways, decided by the perturbation rate (p): either by random initialization or by combining information from the global and local leaders through the following equation:

    $$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+R(0,1)\times (M_{i j}-LL_{k j}) \end{aligned}$$
    (5)

    It can be seen from this equation that the updated dimension of this M is attracted towards the global leader and repelled from the local leader. The fitness of the updated M is then evaluated.

  7.

    Global Leader Decision (GLD) Phase: Here the global leader's location is examined and, if it has not been updated for a predefined number of iterations called GlobalLeaderLimit, the global leader divides the population into smaller groups. The population is first divided into two groups, then three, four, and so on, until the upper bound, called the maximum number of groups (GM), is reached. Each time new groups are formed, the LLL process is used to elect the local leaders of the newly formed groups.

The pseudo-code of the SMO algorithm is as follows:

figure a
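To make the position-update rules of Eqs. (1)-(5) concrete, a minimal Python/NumPy sketch of the main SMO operators is given below. This is not the authors' implementation: the function names, the calling structure and the convention that a higher fitness value is better are assumptions made purely for illustration.

import numpy as np

def initialize_population(N, D, m_min, m_max, rng):
    # Eq. (1): each of the N monkeys is initialized uniformly at random
    # between the per-dimension bounds m_min and m_max.
    return m_min + rng.random((N, D)) * (m_max - m_min)

def local_leader_phase(M, fit, LL, group, rng, fitness_fn):
    # Eq. (2): move towards the local leader LL and a random group mate;
    # keep the new position only if it improves fitness (greedy selection).
    D = M.shape[1]
    for i in group:
        r = rng.choice([g for g in group if g != i])
        cand = (M[i]
                + rng.random(D) * (LL - M[i])
                + rng.uniform(-1, 1, D) * (M[r] - M[i]))
        f_cand = fitness_fn(cand)
        if f_cand > fit[i]:               # higher fitness assumed better
            M[i], fit[i] = cand, f_cand

def selection_probability(fit):
    # Eq. (4): probability of a monkey being selected for the GLP update.
    return 0.1 + 0.9 * fit / fit.max()

def global_leader_phase(M, fit, GL, group, rng, fitness_fn):
    # Eq. (3): with probability Pr_i, update one randomly chosen dimension
    # towards the global leader GL and a random group mate.
    pr = selection_probability(fit)
    for i in group:
        if rng.random() < pr[i]:
            j = rng.integers(M.shape[1])
            r = rng.choice([g for g in group if g != i])
            cand = M[i].copy()
            cand[j] = (M[i, j]
                       + rng.random() * (GL[j] - M[i, j])
                       + rng.uniform(-1, 1) * (M[r, j] - M[i, j]))
            f_cand = fitness_fn(cand)
            if f_cand > fit[i]:
                M[i], fit[i] = cand, f_cand

def local_leader_decision(Mi, LL, GL, p, m_min, m_max, rng):
    # Eq. (5) with perturbation rate p: each dimension is either re-initialized
    # at random or attracted to the global leader and repelled from the local leader.
    cand = Mi.copy()
    for j in range(Mi.shape[0]):
        if rng.random() >= p:
            cand[j] = m_min[j] + rng.random() * (m_max[j] - m_min[j])
        else:
            cand[j] = (Mi[j]
                       + rng.random() * (GL[j] - Mi[j])
                       + rng.random() * (Mi[j] - LL[j]))
    return cand

With such helpers, one SMO iteration applies the LLP and GLP updates group by group and then runs the learning and decision phases described in the pseudo-code above.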

3 Fast Convergent Spider Monkey Optimization Algorithm

In a population-based iterative method, exploration and exploitation are the two basic properties of an NIA, and a proper balance between them is required. Exploration identifies promising regions by searching the given search space, while exploitation helps in finding the optimal solution within those promising regions. The basic SMO performs well in both exploration and exploitation, but there is still room for improvement. Therefore, to improve the basic SMO, a new variant named the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO) is designed.

An analysis of the search process of the basic SMO shows that it can gain a greater chance of improvement over the iterations by modifying two phases: (1) the Global Leader Phase (GLP) and (2) the Local Leader Decision (LLD) phase. In this way the exploration and exploitation capabilities can be managed more effectively. The details of these two modified steps of FCSMO are explained below:

  1.

    Global Leader Phase (GLP): In the GLP phase, based on the experience of the global leader and the members of the local group, M updates its location. In the early iterations the solutions take large steps in the search space, resulting in exploration. In later iterations the step size gradually decreases, so the solutions move more slowly, exploit the search space well and converge faster. The position update equation for this phase is as follows:

    $$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+(M_{r j}-M_{i j})\times \left[ 1-\left( \frac{iter}{\text{Max iteration}}\right) \right] \times c \end{aligned}$$
    (6)

    Here \(GL_{j}\) stands for the global leader's location in the \(j^{th}\) dimension, \(j \in \{1,2,\ldots,D\}\) is a randomly chosen index, and iter and Max iteration denote the current iteration and the maximum number of iterations, respectively. c is the acceleration coefficient and its value is set to 2. In this phase, the acceleration coefficient is applied to the term involving the randomly chosen member. The position update procedure of the GLP phase is shown in the following algorithm.

    figure b
  2.

    Local Leader Decision (LLD) Phase: Here, as in the basic SMO, if a local leader's location has not been updated for a predefined number of iterations called LocalLeaderLimit, the group members update their positions either by random initialization or by combining information from the global and local leaders, decided by the perturbation rate p. In the early iterations the solutions take large steps in the search space, resulting in exploration. In later iterations the step size gradually decreases, so the solutions exploit the search space well and converge faster. The position update equation for this phase is as follows:

    $$\begin{aligned} M_{new i j}=M_{i j}+(GL_{j}-M_{i j})\times \left( 1-\left( \frac{iter}{\text{Max iteration}}\right) \right) +R(0,1)\times (M_{i j}-LL_{k j}) \end{aligned}$$
    (7)

    It can be seen from this equation that the updated dimension of this M is attracted towards the global leader and repelled from the local leader. The fitness of the updated M is then evaluated; iter and Max iteration denote the current iteration and the maximum number of iterations, respectively. In this phase, the acceleration coefficient (the factor \(1-iter/\text{Max iteration}\)) is applied to the global leader term. The position update procedure of the LLD phase is shown in the following algorithm.

    figure c
figure d
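The two modified updates of Eqs. (6) and (7) can be sketched as follows. Again this is only a minimal Python/NumPy illustration with assumed function names and arguments, not the authors' code; only the iteration-dependent terms differ from the corresponding basic SMO updates shown earlier.

import numpy as np

C = 2.0  # acceleration coefficient c of Eq. (6)

def fcsmo_glp_update(M, i, r, j, GL, it, max_it, rng):
    # Eq. (6): the random-mate term is scaled by C*(1 - it/max_it), so steps
    # are large (exploration) in early iterations and shrink towards zero
    # (exploitation) as the iteration counter it approaches max_it.
    decay = 1.0 - it / max_it
    cand = M[i].copy()
    cand[j] = (M[i, j]
               + rng.random() * (GL[j] - M[i, j])
               + (M[r, j] - M[i, j]) * decay * C)
    return cand

def fcsmo_lld_update(Mi, LL, GL, it, max_it, rng):
    # Eq. (7): the attraction towards the global leader is weighted by
    # (1 - it/max_it) instead of a random number, giving the same
    # large-to-small step-size behaviour over the iterations.
    decay = 1.0 - it / max_it
    return (Mi
            + (GL - Mi) * decay
            + rng.random(Mi.shape[0]) * (Mi - LL))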

4 Experimental Results

4.1 Test Problems Under Consideration

To evaluate the quality of the proposed FCSMO algorithm, 14 different global optimization problems (\(f_1\) - \(f_{14}\)) are selected, as presented in Table 1. All of them are continuous optimization problems with varying degrees of complexity. The test problems (\(f_1\) - \(f_{14}\)) are taken from [1, 11] with the associated offset values.
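Table 1 itself is not reproduced here. As a generic illustration of the kind of shifted continuous benchmark used in such suites (an assumption for illustration only, not claimed to be one of \(f_1\) - \(f_{14}\)), a sphere function with an offset can be written as:

import numpy as np

def shifted_sphere(x, offset):
    # Generic shifted sphere: f(x) = sum_j (x_j - o_j)^2, with its minimum at x = offset.
    x = np.asarray(x, dtype=float)
    return float(np.sum((x - offset) ** 2))

# illustrative 30-dimensional instance with a random offset
rng = np.random.default_rng(1)
o = rng.uniform(-50.0, 50.0, 30)
print(shifted_sphere(np.zeros(30), o))   # non-zero, because the optimum is shifted away from the origin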

Table 1. Test problems

4.2 Experimental Setting

To verify the efficiency of the proposed FCSMO algorithm, a comparative study is carried out between FCSMO and SMO. To analyse FCSMO and the basic SMO over the considered test problems, the following experimental setting is adopted:

  • The number of simulations/runs = 100,

  • Population size (Monkeys) \(NP = 50\)

  • \(R = rand[0,1]\)

  • GlobalLeaderLimit \(\in \) [N/2, 2\(\times \) N] [4]

  • LocalLeaderLimit= D\(\times \) N [4]

  • Perturbation rate (p) \(\in \) [0.1, 0.8]
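For illustration, the above setting can be collected into a small configuration object. This is only a sketch with assumed names; in particular the problem dimension D shown below is an assumption, since it depends on the test problem.

import numpy as np

rng = np.random.default_rng()
N = 50      # population size (monkeys)
D = 30      # problem dimension (assumed here; it varies with the test problem)

settings = {
    "runs": 100,
    "population_size": N,
    "global_leader_limit": int(rng.integers(N // 2, 2 * N + 1)),  # in [N/2, 2N]
    "local_leader_limit": D * N,
    "perturbation_rate_range": (0.1, 0.8),                        # p lies in [0.1, 0.8]
}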

4.3 Results Comparison

Table 2 presents the experimental results of the compared algorithms. It reports the Standard Deviation (SD), Mean Error (ME), Average number of Function Evaluations (AFE) and Success Rate (SR). According to the results in Table 2, FCSMO gives better results than SMO most of the time in terms of performance, efficiency and accuracy.

Table 2. Comparison of the results of test functions, TP: Test Problem

Moreover, a boxplot analysis of the AFE is carried out to compare the considered algorithms in terms of consolidated performance, since it conveniently represents the empirical distribution of the data graphically. The boxplots for FCSMO and SMO are presented in Fig. 1. The results show that the interquartile range and median of FCSMO are comparatively low. Further, the considered algorithms are compared by giving weighted importance to SR, AFE and ME. This comparison is performed using the performance index (PI) described in [2, 3]. The PI values for FCSMO and SMO are computed and the corresponding PI graphs are shown in Fig. 2. The graphs corresponding to the different cases, i.e. giving weighted importance to SR, AFE and ME (as explained in [2, 3]), are shown in Figs. 2(a), (b) and (c), respectively. In these diagrams, the horizontal axis represents the weights and the vertical axis represents the PI. It is clear from Fig. 2 that the PI of FCSMO is higher than that of the compared algorithm in each case, i.e. FCSMO performs better than SMO on the considered test problems.

Fig. 1.
figure 1

Boxplots graph for average function evaluation

Fig. 2.
figure 2

Performance index for test problems; (a) for weighted importance to SR, (b) for weighted importance to AFE and (c) for weighted importance to ME.

5 Conclusion

This paper presents a variant of the SMO algorithm, known as the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO). In FCSMO, an acceleration coefficient based strategy is proposed in which the step size decreases over the iterations. To evaluate the proposed algorithm, it is tested over 14 benchmark functions. The results obtained by FCSMO are better than those of the basic SMO algorithm. In future, the newly developed algorithm may be applied to various real-world optimization problems of a continuous nature.