1 Introduction

Nature-inspired algorithms (NIAs) take inspiration from nature and are founded on its biological components, such as humans and other natural species. The main objective of developing such algorithms is to solve distinct complex real-world problems whose exact solutions do not exist and to optimize engineering problems [1]. Swarm intelligence based algorithms [5] mimic the collective behaviour of natural swarms, e.g. particle swarm optimization (PSO) [2], artificial bee colony algorithm (ABC) [8], shuffled frog-leaping algorithm (SFLA) [7] and bacterial foraging algorithm (BFO) [9].

SFLA takes inspiration from the foraging behaviour of frogs, which spread information through the natural and social behaviour of the species. In SFLA, the population (frogs) is partitioned into several memeplexes. Frogs exchange their memes with other frogs through a memetic evolution procedure, which helps each individual frog move towards the global optimum solution. However, every strength comes with weaknesses: the basic SFLA converges slowly in its final stage and easily falls into local minima. To elevate the performance of the conventional SFLA, researchers are continuously working on this algorithm [6, 10, 15].

To improve the convergence, intensification and diversification proficiency of the basic SFLA, a locally informed search strategy is embedded in the local exploration phase of the conventional SFLA to ameliorate the position of the worst solution. In LISFLA, the worst solution takes good memes either from the local best and one randomly selected solution of its memeplex, or from the global best and a randomly chosen neighbour solution from the entire feasible search space. In this process, the worst solution is locally informed through the global best or the local best together with one randomly selected neighbour, which enriches the knowledge of the worst solution of the memeplex. The contemplated algorithm is titled Locally Informed Shuffled Frog Leaping Algorithm (LISFLA).

The remainder of the paper is organized as follows: Sect. 2 gives a brief overview of the standard SFLA. The Locally Informed Shuffled Frog-Leaping Algorithm (LISFLA) is proposed in Sect. 3. In Sect. 4, the performance of LISFLA is tested on several numerical benchmark functions. Finally, Sect. 5 concludes the work.

2 Overview of Shuffled Frog-Leaping Algorithm

Eusuff invented the Shuffled Frog Leaping Algorithm (SFLA) in 2003 [7] for solving distinct complex optimization problems. SFLA is a population-based cooperative search metaphor inspired by the foraging behaviour of frogs [13]. Memetic evolution is used in SFLA to spread ideas among the solutions during local exploration, in the same spirit as PSO [2]. A shuffling approach exchanges ideas among local searchers and leads them toward a global optimum. SFLA contains elements of global exploration, local exploration and a shuffling procedure.

In general, SFLA works as follows. First, the parameters of the SFLA, namely the total number of memeplexes (Mmpx), the number of frogs in each memeplex (Fm) and the range of the feasible search space, are initialized. The total population size (N) of the swarm is therefore N\(\,=\,\)Mmpx * Fm. Afterwards, the objective value of each frog is calculated, ranks are assigned according to the objective values and the frogs are sorted in descending order of their objective values. Then, the N frogs are partitioned into Mmpx memeplexes, each containing Fm frogs, such that the first-ranked frog goes to the first memeplex, the second-ranked frog goes to the second memeplex, the third-ranked frog goes to the third memeplex, and so on. Each memeplex is further divided into submemeplexes, with the goal of moving the solutions towards the optimum by exchanging their ideas. The submemeplex selection process assigns weights to frogs with a triangular probability distribution (\(prob_{k}\)) using Eq. 1.

$$\begin{aligned} prob_{k}=\frac{2(q + 1 - k )}{q(q + 1)} \end{aligned}$$
(1)

Here, q is the number of frogs in the memeplex and \(k = 1, 2, 3, \ldots, q\) represents the rank of a frog within the memeplex. A frog with a higher objective value has a higher probability of being selected for the submemeplex, whereas a frog with a lower objective value has a lower probability. The positions of the best and worst frogs are recorded.
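For illustration, the following minimal Python sketch shows how a submemeplex could be drawn using the triangular probabilities of Eq. 1. The function and parameter names are ours (not part of the original algorithm description), and the memeplex is assumed to be sorted best-first.

```python
import numpy as np

def select_submemeplex(memeplex_indices, submemeplex_size, rng=None):
    """Draw a submemeplex from a memeplex whose frogs are sorted best-first.

    prob_k = 2(q + 1 - k) / (q(q + 1)), with k = 1 being the best-ranked frog (Eq. 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    q = len(memeplex_indices)
    ranks = np.arange(1, q + 1)
    prob = 2.0 * (q + 1 - ranks) / (q * (q + 1))   # triangular weights, sum to 1
    picked = rng.choice(q, size=submemeplex_size, replace=False, p=prob)
    return [memeplex_indices[i] for i in picked]
```

With this weighting, the best-ranked frog is selected with probability 2/(q + 1), decreasing linearly down to the worst-ranked frog.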

The position of the worst solution is updated using three phases: (1) local best updating phase, (2) global best updating phase and (3) random initialization of the solution in the search space (censorship).

  1. Local best updating phase: This phase improves the position of the worst frog. The position update equation for the worst solution is defined in Eq. 2.

    $$\begin{aligned} U_{new} = P_{W}+ R(0,1)*(P_{LB}-P_{W}) \end{aligned}$$
    (2)

    Here, \(U_{new}\) is the new position of the worst frog, \(P_{LB}\) and \(P_{W}\) are the positions of the local best frog and the worst frog respectively, and R(0, 1) is a uniformly distributed random number in the range [0, 1]. If \(U_{new}\) lies in the feasible space, its new objective value is computed and a greedy selection strategy is applied. If the new position is better than the previous position of the worst solution, the position is updated; otherwise the algorithm proceeds to the next phase, i.e. the global best updating phase.

  2. Global best updating phase: In this phase, the worst frog gets a chance to update its position with the help of the global best frog, as defined in Eq. 3.

    $$\begin{aligned} U_{new} = P_{W}+R(0,1)*(P_{GB}-P_{W}) \end{aligned}$$
    (3)

    Here, \(P_{GB}\) represents the global best frog found so far. Again, the greedy selection strategy is applied for improving the position of the worst solution. If the worst solution still does not improve its position, it is randomly initialized in the feasible search space. After this phase, the memeplexes are updated with the new position of the worst frog.

  3. Random initialization of the solution in the search space (censorship): If the new position of the worst solution is infeasible, i.e. it lies outside the range of the search space, or the position calculated using the global best solution is not better than the old one, the meme of this frog should spread no longer, since the worst frog does not carry a good meme. A new frog is therefore generated randomly within the feasible search space to replace the frog whose new position failed to progress.

After the memetic evolutionary steps within each memeplex, the memeplexes are shuffled and the population is sorted in decreasing order of the objective values. The position of the best frog \(P_{GB}\) is updated. The above procedure is repeated until the stopping criterion is met.
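The three-phase update of the worst frog can be summarized with the following minimal sketch, written under the assumption that the objective value is being maximized (matching the descending ranking above); all names are illustrative and not taken from the original paper.

```python
import numpy as np

def update_worst_frog(p_w, p_lb, p_gb, objective, lower, upper, rng=None):
    """One SFLA worst-frog update: Eq. 2, then Eq. 3, then censorship."""
    rng = np.random.default_rng() if rng is None else rng
    f_w = objective(p_w)

    def feasible_and_better(u):
        return np.all((u >= lower) & (u <= upper)) and objective(u) > f_w

    # Phase 1: local best updating (Eq. 2)
    u_new = p_w + rng.random(p_w.shape) * (p_lb - p_w)
    if feasible_and_better(u_new):
        return u_new

    # Phase 2: global best updating (Eq. 3)
    u_new = p_w + rng.random(p_w.shape) * (p_gb - p_w)
    if feasible_and_better(u_new):
        return u_new

    # Phase 3: censorship -- random re-initialisation in the feasible space
    return lower + rng.random(p_w.shape) * (upper - lower)
```

For a minimization problem the greedy comparison would simply be reversed.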

3 Locally Informed Shuffled Frog Leaping Algorithm (LISFLA)

In the working of the SFLA algorithm, there are enough possibilities for the solutions to get stuck in local optima, and the algorithm also suffers from slow convergence. To reduce these problems, a locally informed search process is incorporated for improving the position of the worst solution: in the local best updating phase the step is computed from the mean of the local best and one randomly selected local neighbour of the memeplex, and in the global best updating phase from the mean of the global best and one randomly selected neighbour of the entire search space.

As is clear from the solution search process of the conventional SFLA in Eqs. (2, 3), the worst solution is updated during each iteration using three phases: (1) local best updating phase, (2) global best updating phase and (3) random initialization of the solution in the search space. Further, it is to be noted that the worst solution is influenced only by the local best or the global best solution, which may cause it to get trapped in local optima and lead to a loss of intensification and diversification capability.

To avoid such possibilities (stagnation or convergence to local optima), in the proposed strategy the step size is calculated from the mean of the local best or global best solution and one randomly selected neighbour solution; otherwise the solution is randomly initialized in the search space. This type of search phenomenon elevates the intensification and diversification proficiency of the algorithm, which are the chief characteristics of population-based optimization algorithms. Therefore, to improve the convergence and to maintain the intensification and diversification capability of SFLA, the following modifications are proposed.

3.1 Local Best Learning Phase with Random Neighbour

The position of the worst solution is updated using the locally informed search process. In this local search process, the worst solution is updated (informed) by taking the mean of the local best solution and one randomly selected local neighbour solution of the particular memeplex. The updated step size and position update equations are defined in Eqs. (4 and 5).

$$\begin{aligned} Step = R(0,1)*( \frac{P_{LB}+P_{KL}}{2}-P_{W}) \end{aligned}$$
(4)
$$\begin{aligned} U_{new} = P_{W}+ Step \end{aligned}$$
(5)

Here, \(U_{new}\) is the updated position of the worst solution and Step denotes the step size. \(P_{LB}\) and \(P_{KL}\) represent the local best solution and a local neighbour solution of \(P_{W}\) respectively. R(0, 1) is a uniformly distributed random number in the range [0, 1]. If \(U_{new}\) lies in the feasible space, its new objective value is computed. The greedy selection strategy is applied for improving the position of the worst solution in the search space: if the new position is better than the previous position, the worst solution is updated; otherwise the algorithm proceeds to the next phase. In the modified equation a new term is added that contains the information received from the local best solution and a random neighbour solution of the memeplex. In Eqs. (4, 5), the worst solution \(P_W\) is informed through both \(P_{LB}\) and \(P_{KL}\). In the basic SFLA Eq. (2), the worst solution \(P_W\) is sometimes not improved through the local best solution alone, which leads to a loss of intensification and diversification proficiency in the search space. Therefore, in Eqs. (4, 5) the mean of \(P_{LB}\) and \(P_{KL}\) is taken to improve the position of the worst solution, which moves \(P_W\) toward its optimum solution.
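A minimal sketch of this phase follows, assuming the memeplex is stored as an array of frog positions; the helper name and the way the random neighbour is drawn are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def local_informed_step(p_w, p_lb, memeplex, rng=None):
    """Eqs. 4-5: move the worst frog toward the mean of the local best P_LB
    and a randomly selected neighbour P_KL from the same memeplex."""
    rng = np.random.default_rng() if rng is None else rng
    p_kl = memeplex[rng.integers(len(memeplex))]                # random local neighbour
    step = rng.random(p_w.shape) * ((p_lb + p_kl) / 2.0 - p_w)  # Eq. 4
    return p_w + step                                           # Eq. 5 (U_new)
```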

3.2 Global Best Learning Phase with Random Neighbour

In this phase, the worst solution updates its position by taking the mean of the global best and a random neighbour solution from the feasible search space. The position update process of the worst solution is defined in Eqs. (6 and 7).

$$\begin{aligned} Step = R(0,1)*( \frac{P_{GB}+P_{KG}}{2}-P_{W}) \end{aligned}$$
(6)
$$\begin{aligned} U_{new} = P_{W}+ Step \end{aligned}$$
(7)

Here, \(U_{new}\) is the updated position of the worst solution and Step denotes the step size. \(P_{GB}\) and \(P_{KG}\) represent the global best solution and a neighbour solution of \(P_{W}\) respectively. If \(U_{new}\) lies in the feasible space, its objective value is computed and the greedy selection strategy is applied between the new and the worst solution. If the worst solution still does not improve its position, it is randomly initialized in the feasible search space. In Eqs. (6, 7) both \(P_{GB}\) and \(P_{KG}\) have the ability to attract the worst solution \(P_W\). In the modified equation a new term is added that contains the mean of the information received from the global best solution and a randomly selected neighbour solution of the search space. According to Eqs. (6, 7), in place of only the global best solution, the mean of a randomly chosen neighbour solution and the global best solution is used, which moves the worst solution in the direction of the global best solution together with the neighbour solution and enhances the intensification and diversification potential of the SFLA algorithm.
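The corresponding sketch for this phase differs only in where the neighbour is drawn from; again, names are illustrative and the feasibility check with greedy selection is assumed to follow the update.

```python
import numpy as np

def global_informed_step(p_w, p_gb, population, rng=None):
    """Eqs. 6-7: move the worst frog toward the mean of the global best P_GB
    and a randomly selected neighbour P_KG from the whole population."""
    rng = np.random.default_rng() if rng is None else rng
    p_kg = population[rng.integers(len(population))]            # random global neighbour
    step = rng.random(p_w.shape) * ((p_gb + p_kg) / 2.0 - p_w)  # Eq. 6
    return p_w + step                                           # Eq. 7 (U_new)
```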

3.3 Censorship

If the new position of the worst solution is infeasible, i.e. it lies outside the range of the search space, or the position calculated using the global best solution is not better than the old one, the meme of this frog should spread no longer, since the worst frog does not carry a good meme. A new solution is therefore generated randomly within the feasible search space to replace the frog whose new position failed to evolve.

After the memetic evolutionary steps within each memeplex, the memeplexes are shuffled and the population is sorted in decreasing order of the objective values. The position of the best frog \(P_{GB}\) is updated. Then the stopping criterion of the algorithm is checked; if it is satisfied, the process stops, otherwise the frogs are again partitioned into memeplexes.

Like SFLA, the LISFLA algorithm is also divided into two phases, namely the global exploration phase and the local exploration phase. The locally informed search strategy is embedded in the local exploration phase of the algorithm, while the global exploration phase is kept the same as in the basic SFLA.
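For completeness, the sketch below strings the pieces together into one simplified LISFLA iteration (ranking, round-robin partitioning, locally informed update of each memeplex's worst frog, censorship and shuffling). It restates Eqs. 4-7 inline, assumes a maximization objective and positions stored in a NumPy array, and simplifies the submemeplex bookkeeping to one worst-frog update per memeplex per iteration; it is an illustrative outline, not the authors' implementation.

```python
import numpy as np

def lisfla_iteration(population, objective, lower, upper, n_memeplex, rng=None):
    rng = np.random.default_rng() if rng is None else rng

    # Rank frogs in descending order of objective value; record the global best
    order = np.argsort([-objective(p) for p in population])
    population = population[order]
    p_gb = population[0]

    # Round-robin partition by rank: the frog of rank i goes to memeplex i mod n_memeplex
    memeplexes = [population[m::n_memeplex].copy() for m in range(n_memeplex)]

    for mpx in memeplexes:
        p_lb, p_w = mpx[0], mpx[-1]        # local best / local worst of this memeplex
        f_w = objective(p_w)

        def ok(u):                         # feasible and better than the old worst
            return np.all((u >= lower) & (u <= upper)) and objective(u) > f_w

        # Local best learning with a random memeplex neighbour (Eqs. 4-5)
        p_kl = mpx[rng.integers(len(mpx))]
        u_new = p_w + rng.random(p_w.shape) * ((p_lb + p_kl) / 2.0 - p_w)
        if not ok(u_new):
            # Global best learning with a random population neighbour (Eqs. 6-7)
            p_kg = population[rng.integers(len(population))]
            u_new = p_w + rng.random(p_w.shape) * ((p_gb + p_kg) / 2.0 - p_w)
        if not ok(u_new):
            # Censorship: random re-initialisation in the feasible space
            u_new = lower + rng.random(p_w.shape) * (upper - lower)
        mpx[-1] = u_new

    # Shuffle the memeplexes back into a single population for the next iteration
    return np.concatenate(memeplexes)
```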

4 Results and Discussions

To analyze the validity of the LISFLA algorithm, 15 distinct global optimization functions (\(f_1\) to \(f_{15}\)) are used here, as listed in Table 1.

To assess the performance of the proposed LISFLA algorithm, a comparative experiment is carried out among LISFLA, SFLA [7], BC-SFLA, GSA [11], DE and BBO [14]. LISFLA is tested against the basic SFLA, BC-SFLA, BBO, DE and GSA over the considered optimization test functions. The following experimental parameters are adopted:

  • The number of simulations/run = 30,

  • Total number of memeplexes \((Mmpx) = 5\)

  • Number of frogs in each memeplex \( (Fm) = 10 \) and total population size (Mmpx * Fm) \((N) = 50\)

  • Parameter settings for the basic SFLA, BC-SFLA, GSA, DE and BBO algorithms are taken from their original research papers [7, 11, 14]

Fig. 1. Boxplot graphs (average function evaluations)

Table 1. TP: Test Problems, OF: Objective Function, SR: Search Range, OV: Optimum Value, D: Dimension, AE: Acceptable Error for LISFLA

Table 2 shows the experimental results of the LISFLA, SFLA, BC-SFLA, GSA, DE and BBO algorithms and also reports the standard deviation (SD), average number of function evaluations (AFE), mean error (ME) and success rate (SR). The results in Table 2 reflect that most of the time LISFLA outperforms SFLA, BC-SFLA, GSA, DE and BBO in terms of reliability, robustness, efficiency and accuracy.

Table 2. Comparative results on the test problems (TP) for LISFLA

Fig. 2. Performance index for test problems: (a) weighted importance to SR, (b) weighted importance to AFE and (c) weighted importance to ME.

Besides, boxplot [3, 12] analysis is carried out to compare the examined algorithms in terms of their combined performance, since it can efficiently depict the empirical distribution of data graphically. The boxplots for LISFLA, SFLA, BC-SFLA, GSA, DE and BBO are displayed in Fig. 1. The results show that the interquartile range and median of LISFLA are comparatively low.

Next, all considered algorithms are also compared by giving weighted importance to ME, SR and AFE. This comparison is measured using the performance indices (PI) described in [3, 4]. The resultant PI values for LISFLA, BC-SFLA, SFLA, GSA, DE and BBO are computed and the corresponding PI graphs are shown in Fig. 2.

The graphs for each of the cases, i.e. giving weighted importance to AFE, SR and ME (as explained in [3, 4]), are depicted in Figs. 2(a), (b) and (c) respectively. In these figures, the horizontal axis represents the weights while the vertical axis represents the PI.

It is clear from Fig. 2 that the PI of LISFLA is superior to those of the other considered algorithms in each case, i.e. LISFLA performs better on the considered test problems as compared to BC-SFLA, SFLA, GSA, DE and BBO.

5 Conclusion

In this paper, a new variant of the SFLA algorithm, namely the Locally Informed Shuffled Frog Leaping Algorithm (LISFLA), is presented. In the proposed LISFLA, a new position update strategy for the worst solution is proposed and embedded in the local exploration phase of the primary SFLA. In the proposed locally informed update process, the step size of the worst solution is decided on the basis of the local best or global best solution and a randomly selected neighbour solution. In the proposed LISFLA, the local best and global best solutions intensify the search, while the randomly selected neighbour solution diversifies the search area. Further, the proposed algorithm is compared with the basic SFLA, its recent variant, namely the Binomial Crossover Embedded Shuffled Frog-Leaping Algorithm (BC-SFLA), and three other nature-inspired algorithms, namely the Gravitational Search Algorithm (GSA), Differential Evolution (DE) and Biogeography-Based Optimization (BBO). Experiments over the test functions show that LISFLA outperforms the considered algorithms.