1 Introduction

The BFO algorithm, proposed by Passino [21], is a bionic intelligent optimization algorithm. While foraging, bacteria take a series of actions (chemotaxis, swarming, reproduction, and elimination-dispersal) to search for nutrient-rich areas and thereby maximize their energy intake. The position of a bacterium represents a candidate solution of the problem being solved. By tumbling to change its current direction and then swimming along the selected direction to update its position, a bacterium seeks the optimal position through continuous movement, and the position of the best bacterium is taken as the best solution of the problem. The BFO algorithm has received widespread attention and has been applied successfully to a variety of fields, such as feature selection [7, 20], image processing [2, 27], scheduling problems [34], and continuous optimization problems [18, 22, 25].

However, several studies have shown that the original BFO algorithm suffers from low solution accuracy and poor convergence, and many strategies have been proposed to improve it. In BFO, the constant chemotaxis step-size not only leads to poor convergence but also causes sustained oscillation of the optimization procedure. To improve convergence, Mishra proposed a least square-fuzzy scheme that adapts the chemotactic step-size based on the minimum value of the cost function [17]. Two further simple schemes were proposed to adapt the chemotactic step-size so as to avoid oscillatory behavior and accelerate convergence [8]. In optimization, an algorithm should combine global search with local search to balance exploration and exploitation. Niu et al. proposed a non-linearly decreasing exponential modulation model to optimize the step-size for global optimization [19]. In [26], the authors proposed an adaptable chemotactic step-size that changes with the fitness value to balance local and global search during optimization. More recently, Xu et al. proposed an adaptive scheme, based on the function period and predefined maximum and minimum values, to adapt the chemotactic step-size, and introduced a micro-swim operation to improve convergence behavior [31]. Yang et al. adjusted the step-size of each bacterium adaptively based on the evolutionary generation and the information of the globally best individual [33]. To enhance global search ability and extend the diversity of the bacterial population, an improved quantum BFO algorithm was proposed that uses a continuously varying rotation angle to update the rotation gate [13, 14].

These algorithms were all shown to outperform the classical BFO algorithm on a few numerical benchmark functions. However, they share a limitation: little information is communicated and no cooperation exists among bacteria, which restricts the performance of BFO. Inspired by the social information term of PSO [10], Biswas et al. proposed a hybrid of PSO and BFO to balance local and global search [3]: the algorithm performs local search through the chemotactic movement of BFO and global search over the entire space through a PSO operator. Chen et al. introduced communication strategies among a multi-colony bacterial community, which maintain diversity in the whole community [5]; the multi-colony cooperative approach significantly speeds up convergence to the global optimum. Chen et al. also extended the basic BFO to a self-adaptive and cooperative bacterial colony foraging (BCF) algorithm that uses the optimal location information and the current search status to adjust the chemotactic step-size adaptively [4]. By combining cell-to-cell communication with self-adaptive search strategies, the authors proposed a modified BFO algorithm (BCFO) in which the bacterial colony maintains a balance between global and local search [6]. Inspired by differential evolution, a chemotaxis-enhanced bacterial foraging optimization (CEBFO) algorithm was proposed to solve the tumble-failure problem and improve the convergence speed of the original algorithm [34]. Yang et al. proposed a bacterial foraging optimization algorithm (BFO-CC) based on a newly designed chemotaxis criterion and conjugation strategies, which keeps a better balance between exploration and exploitation [33]. Tang et al. proposed an improved bacterial foraging (MBFO) algorithm that incorporates the PSO operator into the chemotactic process to quicken the convergence rate and improve the quality of the final solution [27]. Since different swarm intelligence algorithms have different strengths, Turanoglu proposed a hybrid heuristic that combines simulated annealing with BFO to improve convergence performance [28].

The methods above have achieved some progress, but several problems remain. On one hand, when solving optimization problems, solutions are easily trapped in local optima, and the above algorithms cannot escape from them effectively. To address this, this paper uses the Lévy flight length as the chemotaxis step-size, which generates small steps with high frequency and large steps occasionally, balancing local and global search; the occasional large steps also help solutions escape from local optima. For population-based optimization methods, it is desirable to encourage individuals to wander through the entire search space in the initial phase and to fine-tune the candidate solutions in the later phases [23, 33]. Therefore, to make the bacteria shift gradually from global search to local search, this paper constructs an adaptive Lévy flight step-size that decreases as the number of iterations grows. On the other hand, in the original BFO algorithm each bacterium tumbles by choosing one direction at random from an infinite number of possible directions, which gives only a small probability of choosing a good direction and also causes mutual interference among dimensions; this mechanism greatly reduces search efficiency. To improve search efficiency, this paper proposes an improved strategy to determine the search direction, in which each bacterium randomly selects one dimension of the search space along which to tumble. Since the selected dimension offers only two directions, the chosen direction points toward the better solution with a probability of fifty percent. Moreover, swimming along a single dimension avoids interference among dimensions and maximizes the search ability of the algorithm. Finally, to improve the swarming mechanism, this paper uses the global best solution, as in the PSO algorithm, to quicken the convergence rate and improve swarming performance.

The remainder of the paper is organized as follows. Section 2 briefly reviews the classical BFO algorithm. Section 3 introduces the proposed LPBFO algorithm. Section 4 compares LPBFO with other bionic algorithms on eight benchmark functions, and the experimental results verify that LPBFO significantly outperforms the other algorithms in terms of the t-test. Finally, conclusions are drawn in Section 5.

2 Classical BFO algorithm

The classical BFO algorithm includes four principal mechanisms, namely chemotaxis, swarming, reproduction, and elimination-dispersal [21]. In the following, we describe the contents of these four processes briefly.

2.1 Chemotaxis

The chemotaxis process simulates the movement of E. coli bacteria through tumbling and swimming. A bacterium tumbles to a new direction by selecting a direction at random from the search space and then moves one step of a given size; if the fitness improves, it keeps swimming along this direction. Suppose θi(j, k, l) is the position of the i th bacterium at the j th chemotactic, k th reproductive, and l th elimination-dispersal step. The chemotactic movement of the i th bacterium can be represented as

$$ \theta^{i}(j+1,k,l) = \theta^{i}(j,k,l) + C(i)\,\frac{\Delta(i)}{\sqrt{\Delta^{T}(i)\,\Delta(i)}} $$
(1)

where C(i) is the fixed step-size of each swim and Δ(i) is the direction vector selected by tumbling, whose elements lie in the range [−1, 1].
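To make the update concrete, a minimal NumPy sketch of this chemotactic move is given below; the function names and the single-bacterium representation are our own illustration, not code from the original paper.

```python
import numpy as np

def chemotactic_step(theta_i, step_size, rng):
    """One chemotactic move of a single bacterium following Eq. (1)."""
    delta = rng.uniform(-1.0, 1.0, size=theta_i.size)   # random tumble direction in [-1, 1]^D
    unit = delta / np.sqrt(delta @ delta)                # normalize the direction vector
    return theta_i + step_size * unit

rng = np.random.default_rng(0)
theta = rng.uniform(-5.0, 5.0, size=30)                  # position of one bacterium, D = 30
theta_new = chemotactic_step(theta, step_size=0.1, rng=rng)
```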

2.2 Swarming

Cell-to-cell signaling via attractants and repellants is simulated in the swarming stage. A cell releases attractants to signal other cells to swarm together and releases repellants to keep nearby cells at a distance. The combined cell-to-cell attraction and repulsion effect is denoted as follows:

$$\begin{aligned} J_{cc}(\theta, P(j,k,l)) &= \sum_{i=1}^{S} J_{cc}(\theta, \theta^{i}(j,k,l)) \\ &= \sum_{i=1}^{S} \left[ -d_{attract} \exp\left( -\omega_{attract} \sum_{m=1}^{D} (\theta_{m} - \theta_{m}^{i})^{2} \right) \right] \\ &\quad + \sum_{i=1}^{S} \left[ h_{repellant} \exp\left( -\omega_{repellant} \sum_{m=1}^{D} (\theta_{m} - \theta_{m}^{i})^{2} \right) \right] \end{aligned}$$
(2)

where Jcc(θ, P(j, k, l)) is the cell-to-cell communication value added to the actual fitness function to form a time-varying fitness function; S is the total number of bacteria; D is the number of dimensions to be optimized; θ = [θ1, ..., θD]T is a point in the D-dimensional search space; and dattract, ωattract, hrepellant, ωrepellant are coefficients that should be chosen properly. The fitness value of bacterium i can then be represented by

$$ J(i,j,k,l) = J(i,j,k,l) + J_{cc}(\theta, P(j,k,l)) $$
(3)
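As an illustration of Eqs. (2) and (3), the cell-to-cell term could be computed roughly as in the sketch below; the vectorized form and the coefficient defaults are our own assumptions, not settings taken from the paper.

```python
import numpy as np

def cell_to_cell(theta, positions, d_attract=0.1, w_attract=0.2,
                 h_repellant=0.1, w_repellant=10.0):
    """Cell-to-cell attraction/repulsion value Jcc(theta, P) of Eq. (2).

    positions holds the positions of all S bacteria (shape S x D); the
    coefficient defaults are placeholders, not values from the paper.
    """
    sq_dist = np.sum((theta - positions) ** 2, axis=1)   # squared distance to each bacterium
    attract = -d_attract * np.exp(-w_attract * sq_dist)
    repel = h_repellant * np.exp(-w_repellant * sq_dist)
    return float(np.sum(attract + repel))

# Eq. (3): the time-varying fitness of bacterium i is then
# J(i, j, k, l) + cell_to_cell(theta_i, positions)
```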

2.3 Reproduction

Nc is the lifetime of a bacterium: after Nc chemotactic steps, the bacterium performs reproduction. The health value of bacterium i, Ji,health, equals the sum of its fitness values during its life:

$$ J_{i,health} = \sum_{j=1}^{N_{c}} J(i,j,k,l) $$
(4)

Based on the health values, all bacteria are sorted in descending order. In the reproduction stage, the Sr (Sr = S/2) healthiest bacteria survive and each splits into two bacteria placed at the same location, thereby maintaining a constant swarm size; the other Sr least healthy bacteria die.
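A minimal sketch of this reproduction step is given below, assuming a minimization problem so that a smaller accumulated cost means a healthier bacterium; the function name is our own.

```python
import numpy as np

def reproduce(positions, health):
    """Keep the Sr = S/2 healthiest bacteria and split each into two copies."""
    S = positions.shape[0]
    order = np.argsort(health)                 # minimization: lower accumulated cost = healthier
    survivors = positions[order[: S // 2]]
    return np.vstack([survivors, survivors])   # swarm size stays constant at S
```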

2.4 Elimination and dispersal

Due to adverse and unexpected changes in the local environment, some bacteria in a region may be killed or dispersed to new locations. This phenomenon is simulated by the elimination-dispersal step of BFO: each bacterium is eliminated with a small probability Ped, and a new bacterium is randomly generated within the environment as its replacement.
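The elimination-dispersal step can be sketched in the same spirit; the uniform re-initialization within the search bounds is an assumption based on the description above.

```python
import numpy as np

def eliminate_disperse(positions, p_ed, lower, upper, rng):
    """With probability p_ed, move each bacterium to a random new location."""
    S, D = positions.shape
    dispersed = rng.random(S) < p_ed
    positions[dispersed] = rng.uniform(lower, upper, size=(int(dispersed.sum()), D))
    return positions
```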

3 LPBFO algorithm

To improve performance, LPBFO introduces three modifications. First, an adaptive Lévy flight step-size is used as the chemotaxis step-size to keep the balance between global and local search; the adaptive step-size also helps solutions escape from local optima. Second, instead of choosing one direction at random from the whole search space, each bacterium randomly selects a single dimension as the search direction when updating its position. Third, to accelerate convergence, the global best location information is used to improve the swarming process.

3.1 Adaptive Lévy flight step-size

A Lévy flight is a random walk in which the step lengths follow a heavy-tailed probability distribution. This Lévy distribution has the form of a power law and can be expressed as follows [30]:

$$ P(s) = s^{-\lambda} $$
(5)

where s is the step-size and 1 < λ ≤ 3. A Lévy flight generates small steps with high frequency and occasionally generates large steps. In the search process, a bacterium taking a large step can reach the full range of the search space and locate potentially optimal regions, whereas a bacterium taking small steps tends to fine-tune the current solution toward the global optimum. Lévy flight therefore balances local and global search well, and the occasional large steps also help solutions escape from local optima. The foraging behaviors of many creatures in nature follow Lévy flights, such as the foraging flight trajectories of albatrosses [15, 29] and the intermittent foraging flights of drosophilas [24]. Many studies suggest that a Lévy flight is an optimal search strategy when the target sites are sparse and randomly distributed.

This paper uses the method proposed in [16] to calculate the Lévy flight step-size:

$$ s = \frac{u}{|v|^{1/\beta}} $$
(6)

where β ∈ [0.3, 1.99], and u and v are two normally distributed random variables with standard deviations σu and σv:

$$ u \sim N(0,\sigma_{u}^{2}), \qquad v \sim N(0,\sigma_{v}^{2}) $$
(7)
$$ \sigma_{u} = \left\{ \frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma[(1+\beta)/2]\,\beta\,2^{(\beta-1)/2}} \right\}^{1/\beta}, \quad \sigma_{v} = 1 $$
(8)

where Γ(z) is the gamma function.
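Equations (6)-(8) correspond to Mantegna's method for generating Lévy-distributed step lengths; a minimal sketch, with helper names of our own choosing, is given below.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, rng=None):
    """Draw one Levy-flight step length via Mantegna's method, Eqs. (6)-(8)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u)      # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0)          # v ~ N(0, 1), i.e. sigma_v = 1
    return u / abs(v) ** (1 / beta)   # Eq. (6); may be negative, |s| is used as the step length
```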

As shown in Fig. 1, a Lévy flight with a mix of large and small steps can balance local and global search. Therefore, the Lévy flight step length s can be used as the chemotaxis step-size to improve search efficiency. However, at the early stage of the optimization the bacteria should locate potentially optimal regions by wandering through the entire search space, whereas at the later stage most bacteria should fine-tune the present solutions to obtain the optimal solution. Therefore, this paper constructs an adaptive Lévy flight step-size that dwindles gradually as the number of iterations increases. The adaptive step-size is given below.

$$ C'(i) = \frac{\alpha}{t(i)}\,|s| $$
(9)

where α is a control parameter determining how strongly the step-size changes, and t(i) denotes the number of chemotactic steps that the i th bacterium has experienced, which increases over the course of the optimization. At the early stage of the optimization, the larger step-size allows the bacteria to perform global search by wandering through the entire search space; at the later stage, the adaptively reduced step-size makes them perform local search by fine-tuning the present solution.
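Combining Eq. (9) with the step generator above, the adaptive step-size could be computed as in the following sketch; levy_step refers to the helper sketched earlier and is our own naming, not a function from the paper.

```python
def adaptive_levy_step(alpha, t_i, beta=1.5, rng=None):
    """Adaptive Levy flight chemotaxis step-size C'(i) of Eq. (9).

    t_i is the number of chemotactic steps bacterium i has already taken,
    so the expected step length shrinks as the search proceeds.
    """
    return (alpha / t_i) * abs(levy_step(beta, rng))   # levy_step: see the sketch above
```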

Fig. 1 An example of 1000 steps of a Lévy flight in two dimensions. The origin of the motion is at [0, 0], the step lengths are generated according to (6) with β = 1.5, and the angular direction is uniformly distributed

3.2 Improved search direction

In the classical BFO algorithm, it is inefficient for each bacterium to choose one direction at random in each chemotaxis process. First, there is an infinite number of directions in the D-dimensional search space, so a randomly selected direction points toward a better solution with only a low probability. Second, each selected direction contains D components, some of which may point the right way while others point the wrong way; even if the new solution has a better fitness value, some dimensions may get worse. Different dimensions therefore interfere with each other, and updating all dimensions of a solution at every iteration reduces search efficiency.

Therefore, to improve search efficiency, this paper proposes an improved strategy to determine the search direction. In the chemotaxis stage, each bacterium randomly chooses one dimension along which to tumble, and this dimension offers only two directions. Generally speaking, the selected direction therefore points toward the better solution with a probability of fifty percent. Moreover, swimming along a single dimension avoids interference among dimensions and maximizes the search ability of the algorithm.

Based on the adaptive Lévy flight step-size and the improved search direction, the chemotactic movement of the i th bacterium in LPBFO can be represented as

$$ \theta^{i}(j+1,k,l) = \theta^{i}(j,k,l) + C'(i)\,\Delta'(i) $$
(10)

where Δ'(i) is the improved search direction, a D-dimensional vector with exactly one coordinate equal to 1 or −1 and all others equal to 0, and C'(i) is the adaptive Lévy flight step-size taken in direction Δ'(i). As can be seen from (10), each bacterium updates its solution in only one dimension during each chemotaxis process.

In the original BFO, a bacterium evaluates its fitness after each movement. During the chemotaxis stage, if the fitness is better than at the previous position, the bacterium keeps moving along this direction; if the fitness gets worse, it does not move forward. However, the bacterium is then already at the worse position, which slows down its chemotactic activity, and if this happens frequently the search efficiency is greatly reduced. To overcome this problem, this paper employs a self-regulation strategy: when a bacterium arrives at a worse position, it automatically returns to its last position and then continues with the next step.
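A minimal sketch of this improved chemotactic move, covering the one-dimensional tumble of Eq. (10) and the self-regulation revert, is given below; the function signature and the assumption of a minimization objective are ours.

```python
import numpy as np

def improved_chemotaxis(theta_i, cost_i, fitness, step_size, rng):
    """One LPBFO chemotactic move: tumble along a single random dimension
    (Eq. (10)) and revert to the last position if the move is worse."""
    direction = np.zeros_like(theta_i)
    d = rng.integers(theta_i.size)              # pick one dimension at random
    direction[d] = rng.choice([-1.0, 1.0])      # the chosen axis offers only two directions
    candidate = theta_i + step_size * direction
    candidate_cost = fitness(candidate)
    if candidate_cost < cost_i:                 # minimization: accept only improvements
        return candidate, candidate_cost
    return theta_i, cost_i                      # self-regulation: stay at the last position
```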

3.3 Modified swarming process

In the original BFO algorithm, the bacteria complete the swarming process through a fitness term that depends on attraction-repulsion and on the distances among bacteria. This process has no directionality and therefore cannot guarantee that the bacteria converge to the optimal solution; because the optimal position information is not considered, the swarming process has a low convergence rate.

To accelerate convergence and improve its accuracy, this paper takes advantage of the information of the global best (gbest) solution to guide the movement of the bacteria, and modifies the swarming equation as follows:

$$ \theta^{i}(j+1,k,l) = \theta^{i}(j,k,l) + \left(\theta_{gbest} - \theta^{i}(j,k,l)\right) C'(i)\,\Delta'(i) $$
(11)

According to (11), the gbest term drives the bacteria toward the global best solution along one dimension, so the modified swarming equation improves the convergence rate of the swarming process. Moreover, the adaptive Lévy flight step-size is also used in the modified swarming equation to balance global and local search.
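For illustration, Eq. (11) can be written as a one-line update; direction is the same one-hot vector Δ'(i) as in Eq. (10), and the helper name is our own.

```python
def modified_swarming(theta_i, theta_gbest, step_size, direction):
    """Swarming move toward the global best solution, Eq. (11).

    direction is the one-hot vector Delta'(i) of Eq. (10), so only the
    selected dimension of bacterium i is pulled toward gbest.
    """
    return theta_i + (theta_gbest - theta_i) * step_size * direction
```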

3.4 Description of the LPBFO algorithm

The pseudo code of the LPBFO algorithm can be described as follows:

(Pseudo code figures of the LPBFO algorithm.)
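To illustrate how the three improvements fit into the nested chemotaxis, reproduction, and elimination-dispersal loops, a simplified Python sketch of the main LPBFO loop is given below. It reuses the levy_step helper sketched in Section 3.1, assumes a minimization objective, and omits details such as boundary handling and the attraction-repulsion term, so it is only an illustration of the structure, not the authors' reference implementation.

```python
import numpy as np

def lpbfo(fitness, dim, lower, upper, S=50, Nc=100, Nre=5, Ned=4, Ns=4,
          p_ed=0.25, alpha=1000.0, beta=1.5, seed=0):
    """Illustrative LPBFO main loop (simplified reconstruction)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lower, upper, size=(S, dim))
    cost = np.array([fitness(p) for p in pos])
    best = cost.argmin()
    gbest, gbest_cost = pos[best].copy(), cost[best]
    t = np.ones(S)                                   # chemotactic counter per bacterium

    for _ in range(Ned):                             # elimination-dispersal events
        for _ in range(Nre):                         # reproduction steps
            health = np.zeros(S)
            for _ in range(Nc):                      # chemotactic steps
                for i in range(S):
                    step = (alpha / t[i]) * abs(levy_step(beta, rng))        # Eq. (9)
                    direction = np.zeros(dim)
                    direction[rng.integers(dim)] = rng.choice([-1.0, 1.0])
                    for _ in range(Ns):              # swim while fitness keeps improving
                        cand = pos[i] + step * direction                     # Eq. (10)
                        c = fitness(cand)
                        if c < cost[i]:
                            pos[i], cost[i] = cand, c
                        else:
                            break                    # self-regulation: keep the last position
                    cand = pos[i] + (gbest - pos[i]) * step * direction      # Eq. (11)
                    c = fitness(cand)
                    if c < cost[i]:
                        pos[i], cost[i] = cand, c
                    health[i] += cost[i]
                    t[i] += 1
                    if cost[i] < gbest_cost:
                        gbest, gbest_cost = pos[i].copy(), cost[i]
            keep = np.argsort(health)[: S // 2]      # healthiest half survives and splits
            pos = np.vstack([pos[keep], pos[keep]])
            cost = np.concatenate([cost[keep], cost[keep]])
            t = np.concatenate([t[keep], t[keep]])
        disperse = rng.random(S) < p_ed              # elimination-dispersal with probability p_ed
        pos[disperse] = rng.uniform(lower, upper, size=(int(disperse.sum()), dim))
        cost = np.array([fitness(p) for p in pos])
    return gbest, gbest_cost
```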

4 Experiments and discussion

To fully test the performance of the LPBFO algorithm, the eight benchmark functions shown in Table 1 were used in the experiments. The first four are unimodal functions whose only local optimum is the global optimum; the others are multimodal functions with a large number of local optima, into which algorithms may easily fall when solving them. The unimodal functions are used to analyze the intensification capability and convergence rate of the algorithms, while the multimodal functions are commonly used to examine their exploration capability. In Table 1, D denotes the dimension of the solution space, and 15, 30, 45, and 60 dimensions are used in the present paper.

Table 1 Numerical benchmark functions

The effectiveness of the proposed LPBFO algorithm was evaluated by comparing its results with those of related algorithms, namely BCF [6], CEBFO [34], MBFO [27], and BFO-CC [33]. Following the parameter settings of the previous studies, the population size of every algorithm was set to S = 50. To make a fair comparison, we set Nc = 100, Nre = 5, Ned = 4, so each algorithm performs T = Nc × Nre × Ned = 2000 iterations per run; all algorithms were tested with a maximum swimming length of Ns = 4. For the other parameters of the LPBFO algorithm, we take Ped = 0.25 and β = 1.5, following the original studies, and α = 1000 chosen by trial and error. The remaining specific parameters of each comparison algorithm follow the settings of the corresponding original paper.

BCF settings:

For the BCF algorithm, we set k = 0.8, ϕ1 = 1.494, ϕ2 = 1.494, εinitial = 100, Ku = 20, λ = 10, and the initialized run-length unit Cinitial is set to 1% of the search space.

CEBFO settings:

For the CEBFO algorithm, we take Ped = 0.25, CR = 0.9, F = 0.5. The step length C is set for each benchmark function as shown in Table 3 in [34].

MBFO settings:

For the MBFO algorithm, we set Ped = 0.03, C(i) = 0.1, dattract = 0.1, ωattract = 0.2, hrepellant = 0.15, ωrepellant = 0.1.

BFO-CC settings:

For the BFO-CC algorithm, the parameters are set as Ped = 0.3, Pc = 0.2, L = 0.1 ∗ D, b = 0.5.

To ensure the stability of the results, each algorithm was run 20 times and the results were averaged.

4.1 Experimental results

To analyze the experimental data, this subsection uses five indexes introduced in [33] to evaluate the effectiveness of the proposed method: the best solution (Best), median solution (Median), worst solution (Worst), average solution (Mean), and standard deviation (Std). Figures 2, 3, 4, and 5 show how the mean of the best solution over 20 runs changes with the number of iterations for each algorithm on the 8 basic benchmark functions with 15, 30, 45, and 60 dimensions. Lines that do not extend to the end of the experiments indicate that the corresponding algorithm has converged to 0 by the next sampled point.

Fig. 2 Comparison of the convergence results of the average optimum value with 15 dimensions: (a) Sphere function, (b) Quartic function, (c) Schwefel's Problem 2.22, (d) Rosenbrock function, (e) Rastrigin function, (f) Griewank function, (g) Ackley function, (h) Schwefel's Problem 2.26

Fig. 3 Comparison of the convergence results of the average optimum value with 30 dimensions: (a) Sphere function, (b) Quartic function, (c) Schwefel's Problem 2.22, (d) Rosenbrock function, (e) Rastrigin function, (f) Griewank function, (g) Ackley function, (h) Schwefel's Problem 2.26

Fig. 4 Comparison of the convergence results of the average optimum value with 45 dimensions: (a) Sphere function, (b) Quartic function, (c) Schwefel's Problem 2.22, (d) Rosenbrock function, (e) Rastrigin function, (f) Griewank function, (g) Ackley function, (h) Schwefel's Problem 2.26

Fig. 5 Comparison of the convergence results of the average optimum value with 60 dimensions: (a) Sphere function, (b) Quartic function, (c) Schwefel's Problem 2.22, (d) Rosenbrock function, (e) Rastrigin function, (f) Griewank function, (g) Ackley function, (h) Schwefel's Problem 2.26

At the initial phase of the optimization procedure, Figs. 2, 3, 4 and 5 show that the proposed LPBFO algorithm has no obvious advantage in convergence rate over the other algorithms. The reason is that at this stage LPBFO uses large step-sizes to perform global search and does little local search; global search tends to locate potentially optimal regions, while local search fine-tunes the present solutions and thereby improves the convergence rate. Furthermore, the probability of obtaining a better solution is high at the initial stage for all algorithms, so the search strategy of LPBFO, in which each bacterium randomly selects one dimension along which to tumble, offers no clear advantage yet. It can also be seen from Figs. 2, 3, 4 and 5 that the MBFO algorithm converges faster than the other algorithms at the initial phase: by incorporating the PSO operator into the chemotactic step, MBFO quickens the convergence rate effectively and improves the quality of the solution.

As the adaptive Lévy flight step-size decreases with the number of iterations, the LPBFO algorithm shifts from global search to local search to fine-tune the present solutions. Moreover, when it becomes difficult to obtain a better solution at the later stage of the optimization, the advantages of the proposed search strategy become apparent. Therefore, in the succeeding phase the proposed LPBFO algorithm converges quickly and achieves significantly better solutions than the other algorithms on all benchmark functions.

In the chemotaxis stage, letting each bacterium randomly choose one dimension along which to move improves the search efficiency. By taking the information of the global best solution into account, the modified swarming process accelerates convergence and improves the accuracy of the solutions.

Tables 2, 3, 4 and 5 show the comparison of the experimental results on the 8 basic benchmark functions with 15, 30, 45, and 60 dimensions. The bold entries in Tables 2, 3, 4, 5, 7 and 8 mark the best result among all algorithms under the corresponding index. The results suggest that the LPBFO algorithm offers higher solution accuracy on all functions, which shows that LPBFO is superior to the other algorithms.

Table 2 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 15 dimensions
Table 3 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 30 dimensions
Table 4 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 45 dimensions
Table 5 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 60 dimensions

4.2 Comparison regarding the t-test

To investigate whether the experimental results obtained by the LPBFO algorithm are statistically significantly different from those of the other algorithms, an F-test and a t-test were performed in this section. The F-test was used to analyze the homogeneity of variances, and the t-test was used to determine whether two sets of results differ significantly. The confidence level was set to 95%, which means that the probability of obtaining the observed difference by chance is at most 5%. In the F-test, if the probability p is less than 0.05, the corresponding results have heterogeneous variances; if the probability p obtained in the t-test is less than 0.05, we conclude that there is a significant difference between the corresponding results. Table 6 lists the results of the F-tests and t-tests between LPBFO and the best of the other algorithms with respect to the index "Mean" for the different benchmark functions (listed are p(F-test), p(t-test), and the significance of the results). In Table 6, the sample size and the number of degrees of freedom are 20 and 38, respectively. "Yes" means that the results of LPBFO and of the other algorithms are significantly different; "No" indicates that there is no significant difference between the results of LPBFO and the best results of the other algorithms.
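The testing procedure can be sketched with SciPy as follows; the two-sided F-test deciding whether the subsequent t-test pools the variances mirrors the description above, although the paper does not state the exact implementation it used.

```python
import numpy as np
from scipy import stats

def compare_runs(a, b, alpha=0.05):
    """Two-sided F-test on the variances, followed by a two-sample t-test.

    a, b: the 20 per-run results of two algorithms. Whether the t-test pools
    the variances is decided by the F-test (pooled test has 20+20-2 = 38 df).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = a.size - 1, b.size - 1
    p_f = 2 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))   # two-sided F-test
    equal_var = p_f >= alpha                    # homogeneous variances if p >= 0.05
    t_stat, p_t = stats.ttest_ind(a, b, equal_var=equal_var)
    return p_f, p_t, bool(p_t < alpha)          # True -> significant difference
```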

Table 6 Comparisons between LPBFO’s results and the best results of the other algorithms on F-test and t-test

From Tables 2, 3, 4 and 5, the LPBFO algorithm has higher solution accuracy on all functions. Based on the statistical tests in Table 6, almost all comparisons show significant differences, so the results of the LPBFO algorithm are significantly better than those of the other algorithms.

4.3 Comparison with other algorithms

To simulate the swarm behavior of bird flocking and fish schooling, Kennedy et al. proposed the PSO algorithm [10]. Inspired by the foraging behavior of honey bee swarms, Karaboga proposed the artificial bee colony (ABC) algorithm [9]. Both PSO and ABC, which guide particles to search for globally optimal solutions, have received much attention and achieved good results on optimization problems.

To address the tendency of the PSO algorithm to fall into local optima, Li et al. proposed a self-learning PSO (SLPSO) algorithm, in which four search strategies are available and each particle chooses the optimal strategy via an adaptive learning framework [12]. To improve the performance of PSO, Lai et al. proposed a parallel PSO based on osmosis (PPBO), in which the whole particle population is divided into several subpopulations and migration between subpopulations is driven by osmosis [11]. To solve the stagnation behavior of the ABC algorithm, Babaoglu proposed the distribution-based ABC (distABC) algorithm, which uses the mean and standard deviation of two selected solutions to obtain a new candidate solution [1]. Xue et al. proposed a self-adaptive ABC algorithm based on the global best candidate (SABC-GB), in which several candidate-solution generating strategies and self-adaptive strategies balance the convergence rate and the robustness of the algorithm [32].

Therefore, to further analyze the competitiveness of the LPBFO algorithm, this section compares LPBFO with the state-of-the-art bionic algorithms SLPSO [12], PPBO [11], distABC [1], and SABC-GB [32].

All algorithms were tested on the eight benchmark functions with 30 and 60 dimensions, and each algorithm was repeated 20 times. To make a fair comparison, the population size of every algorithm was set to S = 50 and each algorithm performed 2000 iterations per run, except the SLPSO algorithm, whose number of iterations was set to 400,000. The remaining specific parameters of each comparison algorithm follow the settings of the corresponding original paper.

SLPSO settings:

The inertia weight ω = 0.9, the acceleration coefficient η = 1.496, the minimum selection ratio for each operator γ = 0.01, the initial learning probability Pl = 1.0, and the initial update frequency Uf = 10.

PPBO settings:

The acceleration coefficients c1 = c2 = 2; ws = 0.9, we = 0.4; the topology of PPBO is (5, 10), i.e., a ring consisting of 5 subpopulations, each containing 10 particles.

distABC settings:

The control parameter limit = D × S/2.

SABC-GB settings:

The control parameter limit = 0.6 ∗ (S/2) ∗ D; LP = 10.

Tables 7 and 8 show the comparison of the experimental results on the 8 basic benchmark functions with 30 and 60 dimensions. From the experimental results, the proposed LPBFO algorithm converges quickly and achieves significantly better results than the best of the other four algorithms on four benchmark functions: the Sphere, Quartic, Schwefel's Problem 2.22, and Ackley functions. The results in Tables 7 and 8 also indicate that SLPSO, distABC, SABC-GB, and LPBFO all converge to 0 on the Rastrigin function, i.e., they all obtain the best possible result. Only on three functions are the results of LPBFO worse than the best results achieved by the other algorithms: Rosenbrock (SLPSO), Griewank (distABC), and Schwefel's Problem 2.26 (SABC-GB).

Table 7 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 30 dimensions
Table 8 Comparison between LPBFO and other algorithms for 20 times independent runs tested on 8 basic benchmark functions with 60 dimensions

From the formulation of the Rosenbrock function it can be seen that the term 100(x_{i+1} − x_i^2)^2 dominates the function value, which means that the function value is small only when the values of the different dimensions of a candidate solution are coordinated with one another. Therefore, by updating the candidate solution one dimension at a time, LPBFO fails to achieve the best results on this function. In contrast, by taking both the global optimal solution (gbest) and the local optimal solution (pbest) into account, SLPSO uses different strategies to guide the particles in different situations in the search space; based on this self-learning strategy and the communication between particles, SLPSO obtains the best results on the Rosenbrock function.

The Griewank function and Schwefel's Problem 2.26 are multimodal functions on which algorithms may easily fall into a local optimum. The distABC algorithm uses the mean and standard deviation of two selected solutions to obtain a new candidate solution, while the SABC-GB algorithm uses several candidate-solution generating strategies and self-adaptive strategies. Both algorithms exploit multiple candidate solutions and communication between particles, which effectively reduces the probability of falling into a local optimum. As a result, by combining information from multiple solutions to find the global optimum, distABC and SABC-GB achieve the best results on the Griewank function and Schwefel's Problem 2.26, respectively.

In general, the proposed LPBFO algorithm yields competitive results in comparison with the best results of the other four algorithms.

4.4 Comparison regarding the t-test

As in Section 4.2, the F-test and t-test were performed in this section. "Yes+" means that the results of LPBFO are significantly better than those of the other algorithms; "No+" indicates that the results of LPBFO are better but not significantly so. "Yes-" means that the results of LPBFO are significantly worse than those of the other algorithms; "No-" indicates that the results of LPBFO are worse but not significantly so. "SAME" means that LPBFO and the best of the other algorithms obtained the same optimal solution.

Based on the statistical tests in Table 9, the LPBFO algorithm is significantly better than the other algorithms in 7 experiments (Yes+), significantly worse in 2 experiments (Yes-), and shows no significant difference in 7 experiments (No+, No-, and SAME). This analysis suggests that the LPBFO algorithm is still the best of the six algorithms and possesses competitive performance compared with the other algorithms.

Table 9 Results of t-tests between LPBFO and the best results of the other algorithms regarding the indexes “Mean” for different benchmark functions

4.5 Discussion on LPBFO algorithm

In this subsection, we give a more thorough analysis of the proposed LPBFO algorithm; all algorithms were tested with the same parameters as in the previous experiments.

To overcome the shortcomings of the BFO algorithm, the proposed LPBFO algorithm introduces three improvements, the adaptive Lévy flight step-size, the improved search direction, and the modified swarming process, each of which has a different effect. We constructed three LPBFO variants (LPBFO1, LPBFO2, LPBFO3) to analyze how each improvement influences the LPBFO algorithm. All algorithms were tested on the 8 benchmark functions with 30 and 60 dimensions, and each algorithm was repeated 20 times.

LPBFO1 differs from LPBFO in that it uses the fixed step-size C(i) = 0.1 instead of the adaptive Lévy flight step-size as the chemotaxis step-size. LPBFO2 differs from LPBFO in that it uses the tumbling direction of the original BFO algorithm, i.e., a direction randomly selected from the search space. LPBFO3 uses a randomly selected solution instead of the global best solution to guide the movement of the bacteria in the modified swarming process. Tables 10 and 11 show the experimental results obtained by each algorithm over the 20 independent runs. As can be seen from Tables 10 and 11, LPBFO with all three improvements obtained better solutions than the other LPBFO variants, each of which uses only two of the improvements. Removing any single improvement therefore worsens the results, which means that the three improvements working together achieve the best result.

Table 10 Comparison between LPBFO and other LPBFO variants for 20 times independent runs tested on 8 basic benchmark functions with 30 dimensions
Table 11 Comparison between LPBFO and other LPBFO variants for 20 times independent runs tested on 8 basic benchmark functions with 60 dimensions

Among the three LPBFO variants (LPBFO1, LPBFO2, LPBFO3), LPBFO3 achieved the best solutions, followed by LPBFO1, with LPBFO2 last. Even without using the global best solution in the modified swarming process, LPBFO3 still obtained good results, which means that the third improvement has only a minor impact on LPBFO. As shown in Tables 10 and 11, LPBFO2, which does not use the improved search direction, obtained worse solutions than LPBFO1 and LPBFO3. We can therefore conclude that the second improvement (searching along one dimension at a time) has the greatest impact on LPBFO, the adaptive Lévy flight step-size has a moderate impact, and the global best solution has the least impact.

5 Conclusion

In this paper, we proposed an improved BFO algorithm (LPBFO) to accelerate the convergence rate and improve the precision of the solution. The adaptive Lévy flight step-size is used as the chemotaxis step-size to balance local and global search. To avoid oscillation among different dimensions and maximize the search ability of the algorithm, each bacterium chooses one dimension at random along which to tumble. To improve the accuracy of convergence, the information of the global best solution is used to guide the movement of the bacteria. Finally, simulation experiments were conducted on eight benchmark functions to assess the effectiveness of the LPBFO algorithm. The experiments verified that LPBFO significantly outperforms other bionic algorithms in the literature in terms of the t-test.