Abstract
Spider Monkey Optimization (SMO) is a recent optimization method that has drawn the interest of researchers in different areas because of its simplicity and efficiency. This paper presents an effort to modify the Spider Monkey Optimization algorithm to achieve higher exploitation capability. A new acceleration coefficient based strategy is incorporated in the basic version of SMO. The proposed algorithm is named the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO). FCSMO is tested over 14 benchmark test functions and compared with the basic SMO. The results indicate that FCSMO is a promising variant of SMO.
1 Introduction
Optimization aims to find, among all potential solutions satisfying some stated constraints, the best one. Over the past years, nature inspired algorithms (NIAs) have provided various methods to solve NP-hard and NP-complete real-world optimization problems [7]. A nature inspired algorithm is a stochastic approach in which individuals or neighbours interact with each other intelligently to solve complicated problems in an efficient manner. NIAs fall mainly into evolutionary algorithms and swarm based algorithms. An evolutionary algorithm is a computational paradigm motivated by Darwinian evolution [9]. Swarm intelligence assists in solving optimization problems by exploiting the collaborative nature of self-organizing creatures such as bees, ants and monkeys, whose food-foraging capabilities and social characteristics have been examined and simulated [5, 6, 8]. SMO is a swarm intelligence algorithm proposed by Jagdish Chand Bansal et al. in the year 2014 [4]. SMO is a food-foraging based algorithm modelled on the social behaviour of spider monkeys, whose social structure follows a fission-fusion system. Many researchers have observed that the SMO algorithm is good at exploration and exploitation, but there is still room for further improvement.
To improve the convergence speed, a variant of SMO is proposed, named the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO). In the proposed modification, an acceleration coefficient based strategy is incorporated in the basic version of SMO.
The rest of the paper is structured as follows: Sect. 2 describes SMO. The Fast Convergent Spider Monkey Optimization Algorithm (FCSMO) is proposed in Sect. 3. In Sect. 4, the performance of FCSMO is evaluated on several benchmark functions. Finally, Sect. 5 summarizes and concludes the work.
2 Overview of Spider Monkey Optimization (SMO) Technique
Spider Monkey Optimization (SMO), a distinct class of NIA, was proposed by J.C. Bansal et al. [4] based on the social behaviour of spider monkeys, which follow a fission-fusion social system. A population of about 40 to 50 individuals, usually led by a female, splits into smaller groups in search of food, each group again headed by a female. If a group leader fails to find sufficient food, the group subdivides further, each subgroup again led by a female, and the process is repeated until food is found. To update their positions, the monkeys follow two main activities: inspection of the wide search space, and selection of the best feasible solutions found [10].
2.1 Steps of SMO Technique
The SMO technique is an iterative, population-based methodology. It consists of seven steps, each described in detail below:
1. Initialization of Population: Initially, a population of N spider monkeys is generated, each represented by a D-dimensional vector \(M_{i}\), where i=1,2,...,N and \(M_{i}\) denotes the \(i^{th}\) spider monkey. Each spider monkey (M) represents a candidate solution of the problem under consideration. Each \(M_{i}\) is initialized as follows:
$$\begin{aligned} M_{ij}=M_{min j}+R(0,1)\times (M_{max j}-M_{min j}) \end{aligned}$$ (1)

Here \(M_{min j}\) and \(M_{max j}\) are the lower and upper bounds of \(M_{i}\) in the \(j^{th}\) dimension, and R(0,1) is a uniformly distributed random number in (0,1).
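As a minimal sketch, Eq. (1) can be implemented as follows (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def initialize_population(n_monkeys, dim, m_min, m_max, rng=None):
    """Initialize N spider monkeys uniformly within the bounds, per Eq. (1)."""
    rng = np.random.default_rng(rng)
    # M_ij = M_min_j + R(0,1) * (M_max_j - M_min_j), drawn per monkey and dimension
    return m_min + rng.random((n_monkeys, dim)) * (m_max - m_min)
```

Each row of the returned array is one spider monkey, i.e. one candidate solution.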
2. Local Leader Phase (LLP): In this phase, based on the experience of the local leader and group members, each M updates its current position, and the fitness of the new position is computed. If the fitness of the new position is better than that of the old one, M replaces its position with the new one. Accordingly, the \(i^{th}\) M belonging to the \(k^{th}\) local group updates its position as:
$$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (LL_{k j}-M_{i j})+R(-1,1)\times (M_{r j}-M_{i j}) \end{aligned}$$ (2)

Here \(M_{i j}\) is the \(j^{th}\) dimension of the \(i^{th}\) M, \(LL_{k j}\) is the \(j^{th}\) dimension of the \(k^{th}\) local group leader's position, and \(M_{r j}\) is the \(j^{th}\) dimension of an M picked randomly from the \(k^{th}\) group such that \(r\ne i\).
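The LLP update of Eq. (2) can be sketched as below. This is a simplified illustration that updates all dimensions at once; in the full algorithm a perturbation rate decides which dimensions are updated, and the names here are not from the paper:

```python
import numpy as np

def local_leader_phase_update(m_i, local_leader, m_r, rng=None):
    """Eq. (2): candidate position for monkey i from its local leader LL_k
    and a randomly chosen group mate M_r (r != i)."""
    rng = np.random.default_rng(rng)
    d = len(m_i)
    return (m_i
            + rng.random(d) * (local_leader - m_i)       # R(0,1) attraction to LL
            + rng.uniform(-1.0, 1.0, d) * (m_r - m_i))   # R(-1,1) social term
```

The new position is kept only if its fitness is better than the old one (greedy selection).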
3. Global Leader Phase (GLP): This phase starts just after the LLP is completed. Based on the experience of the global leader and local group members, each M updates its position. The position update equation for the GLP is:
$$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+R(-1,1)\times (M_{r j}-M_{i j}) \end{aligned}$$ (3)

Here \(GL_{j}\) is the global leader's position in the \(j^{th}\) dimension, and \(j \in \{1,2,...,D\}\) is a randomly chosen index. Each \(M_{i}\) updates its position with probability \(Pr_{i}\), which is computed from fitness, for example as:

$$\begin{aligned} Pr_{i}=0.1+\left( \frac{fitness_{i}}{fitness_{max}}\right) \times 0.9 \end{aligned}$$ (4)
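The GLP update of Eq. (3) and the fitness-based selection probability of Eq. (4) can be sketched as follows (a minimal illustration assuming larger fitness is better; names are not from the paper):

```python
import numpy as np

def selection_probability(fitness):
    """Eq. (4): Pr_i = 0.1 + 0.9 * fitness_i / fitness_max, in [0.1, 1.0]."""
    fitness = np.asarray(fitness, dtype=float)
    return 0.1 + 0.9 * fitness / fitness.max()

def global_leader_phase_update(m_i, global_leader, m_r, j, rng=None):
    """Eq. (3): update a single randomly chosen dimension j of monkey i."""
    rng = np.random.default_rng(rng)
    new = m_i.copy()
    new[j] = (m_i[j]
              + rng.random() * (global_leader[j] - m_i[j])        # pull to GL
              + rng.uniform(-1.0, 1.0) * (m_r[j] - m_i[j]))       # social term
    return new
```

A monkey is given the chance to update whenever a random draw falls below its \(Pr_{i}\), so fitter solutions update more often.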
4.
Global Leader Learning (GLL) Phase: Here greedy selection strategy is applied on the population which modifies the locus of global leader i.e. the location of M which has best fitness in the group is chosen as the modified global leader location. Also its is verified that global leader location is modifying or not and in case not then GlobalLimitCount(GLC) is increased by 1.
5. Local Leader Learning (LLL) Phase: Here, the local leader position is updated by applying greedy selection within each group, i.e. the position of the M with the best fitness in the group is chosen as the new local leader position. The updated local leader position is then compared with the old one and, if it has not changed, LocalLimitCount (LLC) is incremented by 1.
6. Local Leader Decision (LLD) Phase: If a local leader position is not updated for a predetermined number of iterations, called LocalLeaderLimit, the members of that group update their positions in one of two ways, decided by a perturbation rate (p): either by random initialization, or by combining information from the global and local leaders using:
$$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+R(0,1)\times (M_{i j}-LL_{k j}) \end{aligned}$$ (5)

As the equation shows, the updated dimension of this M is attracted towards the global leader and repelled from the local leader. The fitness of the updated M is then evaluated.
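The combination rule of Eq. (5) can be sketched as below (all dimensions updated at once for brevity; names are illustrative, not from the paper):

```python
import numpy as np

def local_leader_decision_update(m_i, global_leader, local_leader, rng=None):
    """Eq. (5): pull towards the global leader, push away from the local leader."""
    rng = np.random.default_rng(rng)
    d = len(m_i)
    return (m_i
            + rng.random(d) * (global_leader - m_i)    # attraction to GL
            + rng.random(d) * (m_i - local_leader))    # repulsion from LL
```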
7. Global Leader Decision (GLD) Phase: Here, the global leader position is monitored and, if it is not updated for a predetermined number of iterations called GlobalLeaderLimit, the global leader divides the population into smaller groups. The population is first divided into two groups, then three, four, and so on, until the upper bound on the number of groups, called the maximum number of groups (MG), is reached. Each time new groups are formed, local leaders for them are selected through the LLL process.
The complete pseudo-code of the SMO algorithm is given in [4].
3 Fast Convergent Spider Monkey Optimization Algorithm
In an iterative population-based method, exploration and exploitation are the two basic properties of an NIA, and a proper balance between them is required. Exploration means searching the given search space to identify promising regions, while exploitation helps in finding the optimal solution within those promising regions. The basic SMO is good at both exploration and exploitation, but there is room for further improvement. Therefore, a new variant of the basic SMO, named the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO), is designed.

An analysis of the search process of the basic SMO shows that solutions have a higher opportunity for improvement during two phases: (1) the Global Leader Phase (GLP) and (2) the Local Leader Decision (LLD) phase. The exploration and exploitation capacities of these phases should therefore be managed effectively. The modifications to these two phases in FCSMO are explained below:
1. Global Leader Phase (GLP): In the GLP, based on the experience of the global leader and local group members, each M updates its position. In early iterations, solutions take large steps in the search space, which favours exploration. In later iterations, the step size gradually decreases, so solutions exploit the search space well, resulting in good convergence. The position update equation for this phase is:
$$\begin{aligned} M_{new i j}=M_{i j}+R(0,1)\times (GL_{j}-M_{i j})+(M_{r j}-M_{i j})\times \left[ 1-\left( \frac{iter}{Max\,iteration}\right) \right] \times c \end{aligned}$$ (6)

Here \(GL_{j}\) is the global leader's position in the \(j^{th}\) dimension, \(j \in \{1,2,...,D\}\) is a randomly chosen index, and iter and Max iteration denote the current iteration and the maximum number of iterations, respectively. c is a constant, set to 2. In this phase, the acceleration coefficient is applied to the randomly selected member term. The position update procedure of the GLP phase is shown in the following algorithm.
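A minimal sketch of the modified GLP update of Eq. (6) is given below (single-dimension update; function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def fcsmo_glp_update(m_i, global_leader, m_r, j, it, max_iter, c=2.0, rng=None):
    """Eq. (6): GLP update with an acceleration coefficient that shrinks the
    step towards the random member as iterations progress."""
    rng = np.random.default_rng(rng)
    decay = 1.0 - it / max_iter        # large early (exploration), small late
    new = m_i.copy()
    new[j] = (m_i[j]
              + rng.random() * (global_leader[j] - m_i[j])  # pull to GL
              + (m_r[j] - m_i[j]) * decay * c)              # decaying social term
    return new
```

Compared with Eq. (3), the random coefficient R(-1,1) of the social term is replaced by the deterministic factor \((1 - iter/Max\,iteration)\times c\), which decreases linearly to zero.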
2. Local Leader Decision (LLD) Phase: As in the basic SMO, if a local leader position is not updated for a predetermined number of iterations, called LocalLeaderLimit, group members update their positions either by random initialization or by combining information from the global and local leaders, decided by the perturbation rate p. In early iterations, solutions take large steps in the search space, which favours exploration; in later iterations, the step size gradually decreases, so solutions exploit the search space well, resulting in good convergence. The position update equation for this phase is:
$$\begin{aligned} M_{new i j}=M_{i j}+(GL_{j}-M_{i j})\times \left( 1-\frac{iter}{Max\,iteration}\right) +R(0,1)\times (M_{i j}-LL_{k j}) \end{aligned}$$ (7)

As the equation shows, the updated dimension of this M is attracted towards the global leader and repelled from the local leader, and the fitness of the updated M is then evaluated. Here iter and Max iteration denote the current iteration and the maximum number of iterations, respectively. In this phase, the acceleration coefficient is applied to the global leader term. The position update procedure of the LLD phase is shown in the following algorithm.
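The modified LLD update of Eq. (7) can be sketched as below (all dimensions updated at once for brevity; names are illustrative, not from the paper):

```python
import numpy as np

def fcsmo_lld_update(m_i, global_leader, local_leader, it, max_iter, rng=None):
    """Eq. (7): the pull towards the global leader is scaled by
    (1 - iter/max_iter), so it weakens as the search progresses."""
    rng = np.random.default_rng(rng)
    d = len(m_i)
    decay = 1.0 - it / max_iter
    return (m_i
            + (global_leader - m_i) * decay            # decaying pull to GL
            + rng.random(d) * (m_i - local_leader))    # repulsion from LL
```

Compared with Eq. (5), the random coefficient R(0,1) on the global leader term is replaced by the deterministic decay factor, giving large exploratory steps early and small exploitative steps late.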
4 Experimental Results
4.1 Test Problems Under Consideration
To evaluate the quality of the proposed FCSMO algorithm, 14 well-known global optimization problems (\(f_1\) - \(f_{14}\)) are selected, as presented in Table 1. All of them are continuous optimization problems with varying degrees of complexity. The test problems (\(f_1\) - \(f_{14}\)) are taken from [1, 11] with the associated offset values.
4.2 Experimental Setting
To verify the efficiency of the proposed FCSMO, a comparative study between FCSMO and the basic SMO is carried out. To analyze FCSMO and the basic SMO on the considered test problems, the following experimental setting is adopted:
- The number of simulations/runs = 100,
- Population size (monkeys) \(NP = 50\),
- \(R = rand[0,1]\),
- GlobalLeaderLimit \(\in \) [N/2, 2\(\times \) N] [4],
- LocalLeaderLimit = D\(\times \) N [4],
- Perturbation rate (p) \(\in [0.1, 0.8]\).
4.3 Results Comparison
Table 2 presents the experimental results of the compared algorithms in terms of Standard Deviation (SD), Mean Error (ME), Average number of Function Evaluations (AFE) and Success Rate (SR). According to the results in Table 2, FCSMO outperforms SMO most of the time in terms of reliability, efficiency and accuracy.
Moreover, a boxplot analysis of AFE is carried out to compare the algorithms in terms of consolidated performance, since boxplots display the empirical distribution of a statistic graphically. The boxplots for FCSMO and SMO are presented in Fig. 1. The results show that the interquartile range and median of FCSMO are comparatively low. Further, the algorithms are compared by giving full weight in turn to SR, AFE and ME; this performance index (PI) analysis follows the methodology described in [2, 3]. The PI values for FCSMO and SMO are computed, and the resulting PI graphs are shown in Fig. 2. The graphs for the cases giving full weight to SR, AFE and ME (as explained in [2, 3]) are presented in Figs. 2(a), (b) and (c), respectively. In these graphs, the horizontal axis represents the weights and the vertical axis represents the PI. It is clear from Fig. 2 that the PI of FCSMO is superior to that of SMO in each case, i.e. FCSMO performs better than SMO on the considered test problems.
5 Conclusion
This paper presents a variant of the SMO algorithm, known as the Fast Convergent Spider Monkey Optimization Algorithm (FCSMO). In FCSMO, an acceleration coefficient based strategy is proposed in which the step size decreases over the iterations. The proposed algorithm is evaluated over 14 benchmark functions, and the results show that FCSMO performs better than the basic SMO algorithm. In future, the newly developed algorithm may be applied to various real-world continuous optimization problems.
References
1. Ali, M.M., Khompatraporn, C., Zabinsky, Z.B.: A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Global Optim. 31(4), 635–672 (2005)
2. Bansal, J.C., Sharma, H.: Cognitive learning in differential evolution and its application to model order reduction problem for single-input single-output systems. Memetic Comput. 4(3), 209–229 (2012)
3. Bansal, J.C., Sharma, H., Arya, K.V., Nagar, A.: Memetic search in artificial bee colony algorithm. Soft. Comput. 17(10), 1911–1928 (2013)
4. Bansal, J.C., Sharma, H., Jadon, S.S., Clerc, M.: Spider monkey optimization algorithm for numerical optimization. Memetic Comput. 6(1), 31–47 (2014)
5. Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
6. Colorni, A., Dorigo, M., Maniezzo, V., et al.: Distributed optimization by ant colonies. In: Proceedings of the First European Conference on Artificial Life, vol. 142, pp. 134–142, Paris, France (1991)
7. Fister Jr., I., Yang, X.-S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization. arXiv preprint arXiv:1307.4186 (2013)
8. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
9. Price, K.V.: Differential evolution: a fast and simple numerical optimizer. In: 1996 Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS), pp. 524–527. IEEE (1996)
10. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Power law-based local search in spider monkey optimisation for lower order system modelling. Int. J. Syst. Sci. 1–11 (2016)
11. Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.-P., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report 2005005 (2005)
© 2017 Springer Nature Singapore Pte Ltd.
Agarwal, N., Jain, S.C. (2017). Fast Convergent Spider Monkey Optimization Algorithm. In: Deep, K., et al. Proceedings of Sixth International Conference on Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 546. Springer, Singapore. https://doi.org/10.1007/978-981-10-3322-3_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-3321-6
Online ISBN: 978-981-10-3322-3