1 Introduction

Path planning has long been a central topic in mobile robot research and one of the key technologies underlying it. Path planning for a mobile robot means selecting, with respect to some performance criterion, an optimal or near-optimal collision-free path connecting the starting point to the goal point in the task area. In essence, it amounts to finding the optimal or a feasible solution under several constraints [1].

Mobile robot path planning can be divided into planning in a static, known environment and planning in a dynamic, unknown environment. Static path planning is referred to as global path planning, while dynamic path planning is referred to as local path planning [2]. Local path planning is real-time, dynamic path planning performed by the robot during task execution, based on local environment information collected by its on-board sensors. It is characterized by high flexibility and real-time performance [3].

However, because it relies on local environment features, the resulting path may be only locally rather than globally optimal [4]. Global path planning first requires building an abstract environment map model of the whole region, then obtaining the globally optimal (or at least a good) path on that model with a search algorithm, and finally guiding the mobile robot safely toward the target point in the real environment. It mainly involves two parts: understanding the environmental information and constructing the map model, and searching the whole-region path and guiding the robot [5].

Based on the above, this paper studies path planning problems in the setting of static global path planning. The choice of algorithm is the most critical part of mobile robot path planning. At present, the algorithms commonly used for global path planning are GA [6], ACA [7], and PSO [8, 9]. With the development of mobile robot technology, path planning has achieved notable research results, but each algorithm has its own advantages and disadvantages [10]. Therefore, more and more intelligent optimization algorithms continue to emerge [11].

In 2008, Yang proposed a new swarm intelligence algorithm, the firefly algorithm (FA). FA is a stochastic optimization algorithm based on collective biological intelligence: it solves the search problem by simulating the mutual attraction of fireflies according to their light intensity [12].

FA is a computational method situated between GA and evolutionary programming [13]. Like those algorithms, it is population based. However, instead of using evolutionary operators, it treats each individual as a volumeless particle moving freely in the search space. During the search, the distance moved is determined by the luminosity of the surrounding companions and the light absorption coefficient [14].

FA shares the common characteristics of swarm intelligence algorithms: (1) it is a population-based optimization technique with multiple search trajectories, which makes it easy to implement and parallelize; (2) it only needs the value of the objective function itself, which makes it very versatile [15]. Because of this ease of implementation and strong versatility, more and more researchers combine FA with various optimization problems.

FA has a short history of development. Although it has certain advantages over other algorithms in optimization speed and accuracy, some problems remain to be solved [16]. For example, the algorithm easily falls into local optima; its performance depends heavily on parameter selection; and it is prone to premature convergence [17, 18].

Motivated by the above work, this paper studies how to optimize FA so that it no longer becomes trapped in local optima so easily. Several optimization strategies have been proposed for this problem; among them, the dynamic adaptive parameter strategy is the most typical [19]. Its core idea is to update the adaptive parameters with dynamic formulas so that, in the end, all fireflies can converge to one point. However, extensive simulation results show that the dynamic adaptive strategy incurs errors during execution and that its parameter update speed is slow [20].

This paper proposes a new genetic firefly algorithm (GFA) based on FA and GA. Its core idea is that when FA falls into a local optimum, the locally optimal fireflies are treated as a population, which is then subjected to the selection, crossover, and mutation operations of GA; the optimal firefly individual is finally obtained from these genetic operations. Experimental results show that the new algorithm optimizes faster than the dynamic adaptive parameter strategy and can solve the problem that FA easily falls into local optima. The main contributions of this paper are as follows:

  1. (1)

    We combine FA with GA and propose a new hybrid algorithm based on this model.

  2. (2)

    The hybrid algorithm not only solves the problem that FA easily falls into local optima, but also improves the overall performance of FA.

  3. (3)

    The hybrid algorithm improves the computing power and reaction speed of mobile robots in path planning.

  4. (4)

    Compared with other algorithms, the hybrid algorithm shows clear superiority and research significance.

The rest of this paper is organized as follows. Section 2 introduces FA and GA together with their mathematical models. Section 3 describes the mathematical model of GFA and the optimization experiments. Section 4 presents the application to path planning and the simulation results. Finally, conclusions and future work are given in Sect. 5.

2 Mathematical models of algorithms

2.1 Mathematical model of FA

The search process of FA relies on the attraction between individuals to produce movement: brighter fireflies are more attractive and draw darker fireflies toward them. Each firefly in the group has its own luminosity and corresponds to a candidate solution of the problem. Three remarks are required here [13].

Remark 1

Fireflies are gender-neutral; any firefly can be attracted by a better (brighter) firefly.

Remark 2

The attractiveness of a firefly is directly proportional to its luminosity. For any two fireflies, the darker one moves toward the brighter one.

Remark 3

If no firefly is brighter than a given firefly, that firefly moves randomly.

According to the principle that darker fireflies are attracted to brighter ones, the positions and luminosities of the fireflies are constantly updated until the brightest firefly is found and the optimization process is complete [15].

For any two fireflies \({X_i}\) and \({X_j}\) \((i \ne j)\), their attractiveness can be expressed as follows:

$$\begin{aligned} {{A_{ij}}({r_{ij}}) = {A_0}{e^{ - \gamma r_{ij}^2}}}. \end{aligned}$$
(1)

where \({A_0}\) is the attractiveness when \({r = 0}\).

The parameter \(\gamma\) represents the light absorption coefficient, which is usually set to a constant. \({r_{ij}}\) is the distance between \({X_i}\) and \({X_j}\), defined as follows:

$$\begin{aligned} {r_{ij}} = \left\| {{X_i} - {X_j}} \right\| = \sqrt{\sum \limits _{d = 1}^D {{{({x_{id}} - {x_{jd}})}^2}} }. \end{aligned}$$
(2)

where \(d = 1,2,3,...,D,\) and D is the dimension of the problem [21].

The luminosity of fireflies is related to the objective function, and its definition is as follows:

$$\begin{aligned} {L_{ij}}({r_{ij}})= & {L_0}{e^{ - \gamma r_{ij}^2}}. \end{aligned}$$
(3)
$$\begin{aligned} {L_i}({r_i})= & {L_0}{e^{ - \gamma r_i^2}}. \end{aligned}$$
(4)

where \({L_{i}}\) represents the luminosity of the firefly \({X_i}\), and \({L_{ij}}\) represents the relative luminosity between \({X_i}\) and \({X_j}\). \({L_0}\) is the absolute luminosity at \({r = 0}\).

Similarly, for \({X_i}\) and \({X_j}\), the darker firefly always moves toward the brighter one. Assuming \({X_i}\) is darker than \({X_j}\), the movement of \({X_i}\) toward \({X_j}\) can be defined as follows:

$$\begin{aligned} {x_{id}}(t + 1) = {x_{id}}(t) + A({r_{ij}}) \times ({x_{jd}}(t) - {x_{id}}(t)) + \alpha \varepsilon . \end{aligned}$$
(5)

where \(\alpha \in \left[ {0,1} \right]\) represents the step factor. \(\varepsilon\) is a random value uniformly distributed in the range \(\left[ { - 0.5,0.5} \right]\), and t is the iteration number [21].

Algorithm 1 represents the pseudo code of FA, where \({T_{\max }}\) is the maximum number of iterations and N is the population size.

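As a concrete companion to Algorithm 1, a minimal Python sketch of one FA iteration (Eqs. 1, 2, and 5) might look as follows. The function name and data layout are our own; the purely random move of the brightest firefly (Remark 3) is omitted for brevity.

```python
import math
import random

def firefly_step(X, brightness, alpha=0.8, gamma=0.3, A0=1.0):
    """One FA iteration: every firefly moves toward each brighter one.

    X is a list of positions (lists of floats); brightness[i] is the
    objective value of firefly i (brighter = better).
    """
    n, dim = len(X), len(X[0])
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:          # Remark 2: move toward brighter
                r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                A = A0 * math.exp(-gamma * r2)         # attractiveness, Eq. (1)
                for d in range(dim):
                    eps = random.uniform(-0.5, 0.5)    # random term in Eq. (5)
                    X[i][d] += A * (X[j][d] - X[i][d]) + alpha * eps
    return X
```

Note that the brightest firefly never moves in this sketch, since no companion is brighter than it.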

2.2 Mathematical model of GA

GA maps the solution space to a coding space by a certain encoding scheme. Each code corresponds to a solution of the problem, called an individual or chromosome; a starting set of individuals, called the population, is then determined randomly.

In subsequent iterations, individuals are first selected based on their fitness values, and then crossed and mutated according to the genetic operators. These operations generate a new population that is better adapted to the environment than the previous generation. This continues until the optimization criterion is met [22]. Finally, the decoded last-generation individual is taken as the optimal solution to the problem [23].

The three main genetic operations are selection, crossover, and mutation. The selection operation (selection operator) determines which individuals are copied into the next generation according to their fitness values; the crossover operation (crossover operator) replaces and recombines parts of the structure of two parents to generate new individuals.

The role of crossover is to generate new individuals; it is the genetic operation that plays the central role in GA. Every crossover operator involves two basic steps: (1) pair individuals randomly and decide, according to a preset crossover probability, whether each pair undergoes crossover [24]; (2) set the crossover point of each paired couple and exchange part of the structure of the two individuals.

The mutation operation (mutation operator) inverts certain gene values of individuals in the population. Mutation has two aims: (1) to give GA a local random search capability, and (2) to maintain the diversity of the population [25]. The mutation operator generally works in two steps: (1) loci are randomly chosen within the code strings of all individuals in the population [26]; (2) the gene values at these loci are mutated with a preset mutation probability [27].

The basic process of GA can be summarized as follows:

  1. (1)

    A certain number of initial populations are randomly generated, and each individual is expressed as a gene code of a chromosome;

  2. (2)

    Calculate the fitness of each individual, and judge whether it meets the optimization criteria. If it meets, output the best individual and end the calculation, otherwise go to step 3;

  3. (3)

    Select regenerative individuals based on fitness. Individuals with high fitness have a high probability of being selected, while individuals with low fitness may be eliminated;

  4. (4)

    Perform crossover and mutation operations to generate new individuals;

  5. (5)

    Get a new generation of population and return to step 2.

The pseudo code of GA is shown in Algorithm 2, where N is the population size and T is the maximum number of iterations.

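As a companion to Algorithm 2, the following is a minimal, real-coded Python sketch of the select/cross/mutate loop described above. All names, the fitness shift used for selection weights, and the Gaussian mutation are illustrative assumptions rather than the paper's exact operator choices.

```python
import random

def ga_minimal(fitness, pop, pc=0.8, pm=0.1, generations=50):
    """A minimal real-coded GA maximising `fitness` over lists of floats."""
    n, dim = len(pop), len(pop[0])
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        # fitness-proportionate selection (scores shifted to be non-negative)
        low = min(scores)
        weights = [s - low + 1e-9 for s in scores]
        parents = random.choices(pop, weights=weights, k=n)
        nxt = []
        for i in range(0, n - 1, 2):
            a, b = parents[i][:], parents[i + 1][:]
            if random.random() < pc:                   # single-point crossover
                cut = random.randrange(1, dim) if dim > 1 else 0
                a[cut:], b[cut:] = b[cut:], a[cut:]
            nxt += [a, b]
        if len(nxt) < n:                               # odd population size
            nxt.append(parents[-1][:])
        for ind in nxt:                                # mutation
            for d in range(dim):
                if random.random() < pm:
                    ind[d] += random.gauss(0, 0.1)
        pop = nxt
    return max(pop, key=fitness)
```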

2.3 Defect verification

Although FA performs well on many optimization problems, it still has shortcomings. For example, its optimization performance depends on the setting of the control parameters, its convergence is slow, and it easily falls into local optima on complex problems.

This paper focuses on the problem that FA easily falls into local optima. First, we verify that this shortcoming exists. Six bivariate nonlinear functions are selected as optimization objectives; they are listed in Table 1.

These bivariate functions are chosen because they are multimodal. As shown in Fig. 1, each has multiple local maxima in its domain; once FA falls into a local optimum, some fireflies gather near these local maxima.

Table 1 Six bivariate nonlinear functions

FA is used to optimize these objective functions, with the number of fireflies set to 20, the step factor \(\alpha\) to 0.8, and the light absorption coefficient \(\gamma\) to 0.3. The performance of the algorithm depends on the choice of parameters; the parameter settings here follow [19].

Figure 1 shows the results of FA when optimizing the objective functions \({f_5}(x,y)\) and \({f_6}(x,y)\). Panels (a) and (b) show the three-dimensional graphs of \({f_5}(x,y)\) and \({f_6}(x,y)\), respectively.

Fig. 1

Simulation results

From the results in Fig. 1, it can be seen that the bivariate nonlinear functions have several local extreme points in their domains, with the global maximum near (0, 0). Moreover, some fireflies are scattered around the local extreme points, and most of the fireflies are not near the global maximum. This confirms that FA easily falls into local optima.

2.4 The model of AFA

Based on the core idea of FA (dark fireflies are attracted by bright fireflies), it can be deduced that, ideally, all fireflies should approach the brightest firefly and eventually converge to one point [28]. So for any two fireflies \({X_i}\) and \({X_j}\):

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } {X_i}(t)= & \mathop {\lim }\limits _{t \rightarrow \infty } {X_j}(t),\forall i \ne j. \end{aligned}$$
(6)
$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } {X_i}(t + 1)= & \mathop {\lim }\limits _{t \rightarrow \infty } {X_i}(t). \end{aligned}$$
(7)

where \(i,j = 1,2,3,...,N.\) Eq. (6) states that all fireflies eventually converge to one point, and Eq. (7) that the positions of the fireflies no longer change after convergence.

According to Eqs. (1), (2), (3), (5), (6) and (7), we can get:

$$\begin{aligned} \begin{array}{*{20}{l}} \begin{array}{l} \mathop {\lim }\limits _{t \rightarrow \infty } {X_i}(t + 1) - {X_i}(t) = 0\\ \Rightarrow \mathop {\lim }\limits _{t \rightarrow \infty } {A_{ij}}({r_{ij}}) \times \mathop {\lim }\limits _{t \rightarrow \infty } ({X_i}(t) - {X_j}(t))\\ + \varepsilon \times \mathop {\lim }\limits _{t \rightarrow \infty } \alpha = 0 \end{array}\\ {\begin{array}{*{20}{l}} { \Rightarrow {A_0} \times \mathop {\lim }\limits _{t \rightarrow \infty } {e^{ - \gamma {r^2}_{ij}}} \times \mathop {\lim }\limits _{t \rightarrow \infty } ({X_i}(t) - {X_j}(t))}\\ { + \varepsilon \times \mathop {\lim }\limits _{t \rightarrow \infty } \alpha = 0} \end{array}}\\ { \Rightarrow 0 + \varepsilon \times \mathop {\lim }\limits _{t \rightarrow \infty } \alpha = 0}\\ { \Rightarrow \mathop {\lim }\limits _{t \rightarrow \infty } \alpha = 0.} \end{array} \end{aligned}$$
(8)

From Eq. (8) it follows that when the fireflies finally converge to one point, the step factor \(\alpha\) also approaches zero. Based on this inference, many researchers regard the condition that \(\alpha\) approaches 0 as the criterion for deciding whether FA has fallen into a local optimum, and have proposed a variety of adaptive firefly algorithms (AFA) [29].

The main idea of these algorithms is to update the step factor \(\alpha\) dynamically so that it finally approaches 0, thereby alleviating the problem that FA easily falls into local optima [30].

Some researchers have proposed the following dynamic adaptive strategies for updating the parameter \(\alpha\):

$$\begin{aligned} \alpha (t + 1)=\; & (1 - \frac{t}{{{T_{\max }}}})\alpha (t). \end{aligned}$$
(9)
$$\begin{aligned} \alpha (t + 1)= \;& {(\frac{1}{{9000}})^{\frac{1}{t}}}\alpha (t). \end{aligned}$$
(10)

It has been confirmed, however, that the parameter update speed of AFA is slow [31]; on some complicated optimization problems, AFA does not achieve a good optimization effect [32].

3 A hybrid algorithm based on GA and FA

3.1 The model of GFA

In this section, we introduce the new hybrid algorithm, GFA. Its core idea is that when FA falls into a local optimum, the locally optimal fireflies are regarded as a population and the luminosity formula as the fitness function; the population is then genetically operated on to obtain the brightest firefly [33].

The GFA model is as follows. Suppose n fireflies are trapped in a local optimum, each with its own luminosity. Taking the luminosity expression as the fitness function, each firefly then has a fitness value corresponding to its luminosity [34].

The n trapped fireflies are regarded as a parent population, on which selection, crossover, and mutation operations are performed; the brightest firefly is finally obtained through successive genetic iterations [35].

However, according to the FA model, all fireflies trapped in a local optimum approach the brightest among them, so there must be one firefly \({X_j}\) whose fitness value (luminosity) is greater than that of all the others [36]. As a result, \({X_j}\) has the highest probability of being chosen in the selection operation. Based on this observation, this paper adopts the roulette wheel selection method [37].

Roulette wheel selection, also known as proportional selection, divides the wheel into sectors in proportion to the individuals' fitness values. As the wheel spins, the sector indicated by the pointer changes continuously [38].

When the wheel comes to rest, the number indicated by the pointer is the one selected [39]. The roulette wheel model can be described as follows.

Assume a population of n individuals, where individual i has fitness value \({f_i}\); then the probability that i is selected is as follows:

$$\begin{aligned} {P_i} = \frac{{{f_i}}}{{\sum {{f_i}} }},i = 1,2,3,...,n. \end{aligned}$$
(11)

The cumulative probability can be expressed as follows:

$$\begin{aligned} C{P_i} = \sum \limits _{j = 1}^i {{P_i}} . \end{aligned}$$
(12)

From Eqs. (11) and (12) it follows that the individual with the largest fitness value also has the largest selection probability. However, if the fitness value of one individual is much larger than those of the others, its sector of the wheel is correspondingly much larger than theirs [40].

In this case, roulette wheel selection converges prematurely or even fails. To improve its convergence behavior and prevent premature convergence, this paper proposes a new selection probability formula:

$$\begin{aligned} {P_i} = Z \times \frac{{{f_i}}}{{\sum {{f_i}} }},i = 1,2,3,...,n, \end{aligned}$$
(13)

where Z is the proportional function and its expression is as follows:

$$\begin{aligned} Z = \frac{{{f_i} - {f_{\min }}}}{{{f_{\max }} - {f_{\min }}}} \times \frac{1}{{n - 1}}. \end{aligned}$$
(14)
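A small Python sketch of this modified roulette wheel (the names are ours) might look as follows. Since the scaled probabilities of Eq. (13) no longer sum to one, the sketch renormalises them before spinning the wheel, which is an added assumption; note also that under Eq. (14) the least-fit individual receives zero selection probability, and positive fitness values are assumed.

```python
import random

def select_index(fitness):
    """Roulette wheel selection with the scaled probabilities of
    Eqs. (13)-(14); returns the index of the selected individual."""
    n = len(fitness)
    f_min, f_max = min(fitness), max(fitness)
    span = (f_max - f_min) or 1e-12          # guard against identical fitness
    total = sum(fitness)
    # scaled probability P_i = Z * f_i / sum(f), with Z from Eq. (14)
    probs = [((f - f_min) / span / (n - 1)) * f / total for f in fitness]
    s = sum(probs)
    probs = [p / s for p in probs] if s > 0 else [1.0 / n] * n
    r, cp = random.random(), 0.0
    for i, p in enumerate(probs):
        cp += p                               # cumulative probability, Eq. (12)
        if r <= cp:
            return i
    return n - 1
```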

If the fitness value of one individual is much larger than those of the others, Eqs. (13) and (14) adjust its proportion of the wheel and prevent roulette wheel selection from converging prematurely. The new roulette wheel selection method is applied to GFA, whose pseudo code is given in Algorithm 3.


The basic process of GFA can be summarized as follows:

Step 1: Population initialization. Generate N fireflies (solutions) \({X_i}\), \(i = 1,2,...,N\), and compute the fitness value of each firefly based on Eqs. (3) and (4).

Step 2: Firefly movement. If the fitness value \(f({X_i})\) is better than \(f({X_j})\), \({X_j}\) moves toward \({X_i}\) according to Eq. (5); the new fitness value of \({X_j}\) is then computed.

Step 3: Judgment condition. If \({\alpha _{{X_j}}} \ne 0\), put the new \(f({X_j})\) obtained in Step 2 into the set H.

Step 4: Genetic manipulation. Regarding luminosity as the fitness value, select parents from H to create offspring: evaluate \({P_i}\) and \(C{P_i}\) of each firefly based on Eqs. (13) and (12), then simulate the roulette wheel and pick the offspring.

Step 5: Other operations. Perform crossover and mutation operations to generate new individuals.

Step 6: Termination condition. Obtain a new generation of the population and return to Step 2.
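Under the caveat that Step 3's \(\alpha\)-based test is replaced here by a simpler stalled-best check, the six steps can be sketched in Python roughly as follows (all names and the replace-the-worse-half genetic refresh are illustrative assumptions, not the paper's exact operators):

```python
import math
import random

def gfa_sketch(objective, pop, iters=100, alpha=0.8, gamma=0.3, pm=0.2):
    """Simplified GFA loop: FA movement plus a genetic refresh of the
    swarm whenever the best value stalls (a stand-in for Step 3)."""
    n, dim = len(pop), len(pop[0])          # requires n >= 4
    best_val, stall = max(objective(p) for p in pop), 0
    for _ in range(iters):
        bright = [objective(p) for p in pop]
        for i in range(n):                  # Step 2: firefly movement, Eq. (5)
            for j in range(n):
                if bright[j] > bright[i]:
                    r2 = sum((pop[i][d] - pop[j][d]) ** 2 for d in range(dim))
                    A = math.exp(-gamma * r2)   # attractiveness, Eq. (1)
                    for d in range(dim):
                        pop[i][d] += (A * (pop[j][d] - pop[i][d])
                                      + alpha * random.uniform(-0.5, 0.5))
        cur = max(objective(p) for p in pop)
        stall = stall + 1 if cur <= best_val + 1e-12 else 0
        best_val = max(best_val, cur)
        if stall >= 5:                      # Steps 4-5: genetic refresh
            pop.sort(key=objective, reverse=True)
            for k in range(n // 2, n):      # re-create the worse half
                a, b = random.sample(pop[:n // 2], 2)
                cut = random.randrange(dim) if dim > 1 else 0
                child = a[:cut] + b[cut:]   # one-point crossover
                for d in range(dim):
                    if random.random() < pm:
                        child[d] += random.gauss(0, 0.5)   # mutation
                pop[k] = child
            stall = 0
    return max(pop, key=objective)
```

The brightest firefly is never moved and always survives the refresh, so the best solution found cannot degrade in this sketch.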

Figure 2 shows the schematic flowchart of GFA.

Fig. 2

The schematic flowchart of GFA

In the previous section we used bivariate nonlinear functions to show that FA easily falls into local optima; in this section we use the proposed algorithm to solve this problem. Again choosing \({f_5}(x,y)\) and \({f_6}(x,y)\) from Table 1 as objective functions, we set the number of fireflies to 20, the step factor \(\alpha\) to 0.8, and the light absorption coefficient \(\gamma\) to 0.3. The simulation results are shown in Fig. 3.

Fig. 3

Nonlinear optimization results by using GFA, AFA

As shown in Fig. 3, GFA makes the fireflies finally approach the global maximum of each function. Thus, the proposed algorithm solves the problem that FA easily falls into local optima.

In addition, this paper collects the simulation results of the two algorithms when optimizing the objective functions, as shown in Fig. 4. Table 2 summarizes the computational results (function value and number of iterations) of these two algorithms.

Fig. 4

The simulation results of GFA and AFA

Table 2 Computational results achieved by GFA and AFA

As shown in Table 2, both algorithms find the maximum values of all six bivariate nonlinear functions. However, it is worth noting that they require different numbers of iterations.

Specifically, for most functions, GFA requires fewer iterations than AFA. For example, for \({f_5}\) and \({f_6}\), GFA needs only 3 or 4 iterations to find the optimal value. In this sense, GFA computes faster than AFA and resolves the tendency of FA to fall into local optima more quickly.

Of course, the computational results in Table 2 alone are not enough to prove the competitiveness of the proposed algorithm; the performance of GFA is therefore examined experimentally below.

3.2 Simulation analysis

In this section we conduct more experiments to verify the competitiveness of the proposed algorithm, selecting nine further well-known test functions to evaluate the performance of GFA. Their expressions are listed in Table 3, and the global optimum of each is 0. At the same time, we introduce several other algorithms for comparison, listed below:

Standard FA (FA)—[13].

Genetic Algorithm (GA)—[22].

Sparrow Search Algorithm (SSA)—[41].

Adaptive Parameters FA (AFA)—[19].

Particle Swarm Optimization (PSO)—[42].

Whale Optimization Algorithm (WOA)—[43].

Table 3 Test functions used in the experiments

The problem dimension D is set to 50, the population size N to 100, the maximum number of iterations \({T_{\max }}\) to 1000, the light absorption coefficient \(\gamma\) to 0.8, and the step factor \(\alpha\) to 0.6. Each of the selected algorithms was run 100 times. The parameter settings follow [19].

It is worth noting that a fixed initial population is used in all experiments to ensure fairness. In addition, any algorithm and its variants use the same initial parameters.

The simulation results are shown in Fig. 5, where the abscissa is the number of iterations and the ordinate is the best function value.

Fig. 5

Simulation results of 9 test functions for 7 algorithms under 50 dimensions

From the results in Fig. 5 it can be seen that the function values obtained by the algorithms differ across the nine test functions, with GFA and GA often outperforming the others. In particular, for most test functions the optimization effect of FA is poor.

In order to identify the best algorithm, the Friedman test [44] is conducted on the data shown in Table 4. The results are listed in Table 5.

Table 4 Computational results achieved by different seven algorithms
Table 5 Friedman test results

According to Table 5, GFA, WOA, and GA achieve lower average ranks than AFA, FA, and PSO, although the gap among the first three algorithms is not large; specifically, their average ranks are 1.37, 2.08, and 2.29.

A lower average rank indicates better performance, and GFA clearly achieves the best rank. Thus, GFA performs best on these nine benchmark functions; compared with the other algorithms, it produces better solutions for the test functions.

Although the proposed algorithm performs best on these nine benchmark functions, several drawbacks emerged during the experiments. First, the proposed algorithm depends on the choice of parameters. Second, although it solves the problem that FA easily falls into local optima, it converges slowly. Finally, as a hybrid algorithm it only maintains the diversity of the population on top of the original algorithm; it does not increase that diversity.

4 Application in mobile robot path planning

4.1 Simulation environment

Abstractly representing the robot's surroundings with a set of data, establishing a two- or three-dimensional environment model, and thereby obtaining environment data that the mobile robot can understand and analyze, is the basic premise of robot path planning [45].

In this paper the grid method is used. Its principle is to regard the environment as a two-dimensional (three-dimensional) plane and divide it into grid cells of equal area carrying two-valued (three-valued) occupancy information.

Each grid cell stores information about the surrounding environment. The grid map designed here is a \(10 \times 10\) (\(10 \times 10 \times 10\)) terrain matrix; black areas indicate obstacles and white areas indicate free space. As the evaluation criterion we select the shortest distance, i.e., the shortest path planning problem [46].

In addition, when building the environment model with the grid method, the serial number method is generally used to encode the map information into data the robot can recognize: the grid cells are numbered consecutively from 0 to the last cell.

For a grid map with coordinates, each grid point has a unique coordinate. Assuming the grid has x rows and y columns, the position corresponding to the i-th cell is given by Eq. (15).

$$\begin{aligned} \left\{ \begin{array}{l} {x_i} = a \times [\bmod (i,y) - a/2]\\ {y_i} = a \times [x + a/2 - ceil(i/x)] \end{array} \right. \end{aligned}$$
(15)

where a is the side length of each small square pixel.

In this paper each grid cell has a side length of 1 cm and the serial numbers run from 0 to 99 (999 in 3D). ceil(n) is the smallest integer greater than or equal to n, and mod(i, y) is the remainder of i divided by y [47].
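The following Python helper illustrates the serial-number convention with a standard row-major conversion; it is our own formulation under the stated assumptions, not a literal transcription of Eq. (15), though mod and integer division play the same roles.

```python
def grid_center(i, rows, cols, a=1.0):
    """Centre coordinates of grid cell i (0-based, numbered row by row
    from the top-left), for square cells of side length a."""
    row, col = divmod(i, cols)    # row via integer division, col is mod(i, cols)
    x = a * (col + 0.5)           # measured from the left edge
    y = a * (rows - row - 0.5)    # measured upward from the bottom edge
    return x, y
```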

Here, some definitions are required.

Definition 1

Two-dimensional path planning evaluation function \({L_{2D}}(n)\):

$$\begin{aligned} {L_{2D}}(n) = \sqrt{{{({x_n} - {x_{end}})}^2} + {{({y_n} - {y_{end}})}^2}}. \end{aligned}$$
(16)

where \(({x_n},{y_n})\) is the center coordinate of grid cell n, and \(({x_{end}},{y_{end}})\) is the coordinate of the end point of the planned path; \({L_{2D}}(n)\) serves as the evaluation value of cell n, i.e., its straight-line distance to the goal.

Definition 2

Three-dimensional path planning evaluation function \({L_{3D}}(n)\):

$$\begin{aligned} {L_{3D}}(n) = \sqrt{{{({x_n} - {x_e})}^2} + {{({y_n} - {y_e})}^2} + {{({z_n} - {z_e})}^2}}. \end{aligned}$$
(17)

Definition 3

Robot movement method:

Octree search strategy: during the search, the robot can move freely among the adjacent cells in the eight neighboring directions. Assuming k is the next grid node to be selected, it can be defined as:

$$\begin{aligned} k = \mathop {\min }\limits _{n = 1}^8 (L(n)). \end{aligned}$$
(18)

Definition 4

All environmental boundaries are represented by obstacles.
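Definitions 1 and 3 combine into a simple greedy step rule; a Python sketch (the cell coordinates and the blocked set are our own representation) might read:

```python
import math

def l2d(p, goal):
    """Eq. (16): straight-line distance from cell centre p to the goal."""
    return math.hypot(p[0] - goal[0], p[1] - goal[1])

def next_cell(current, goal, blocked):
    """Eq. (18): among the eight neighbouring cells, choose the free one
    that minimises the evaluation function. Boundary cells are assumed
    to be included in `blocked` (Definition 4)."""
    x, y = current
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    free = [c for c in neighbours if c not in blocked]
    return min(free, key=lambda c: l2d(c, goal))
```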

4.2 Simulation results and analysis

In this section we apply GFA to mobile robot path planning and analyze the simulation results. The path planning performance of GFA is also examined experimentally: AFA, FA, and GFA are tested many times in the same grid environments.

The specific parameters of GFA, FA, and AFA used for path planning are listed in Table 6; the parameter settings follow [46].

Table 6 Simulation parameters
Table 7 Environmental parameters

In the experiments, the above algorithms were each run 200 times. Nine different grid environments, most of them fairly complicated, were selected to evaluate the algorithms for path planning.

Table 7 shows the constraints of each of the nine grid environments, where Start is the starting point number in the 2D (3D) environment, End is the ending point number, and \({L_{\min }}\) is the shortest path length in each grid environment.

The simulation results are shown in Fig. 6, where the line marked with some circles represents GFA.

Fig. 6

Path trajectory in different environments

Fig. 7

Path results in different environments

From the simulation results it can be seen that all three algorithms successfully find a path from the starting point to the ending point while completely avoiding all obstacles. To compare the three algorithms more effectively, the minimum path length and the number of iterations in the same grid environment are recorded, as shown in Fig. 7.

Figure 7 shows the simulation results of the three algorithms in grid environments of different dimensions. The blue waveform represents GFA, the orange waveform AFA, and the red waveform FA. As before, the abscissa is the number of iterations and the ordinate the minimum path length.

When an algorithm finds the optimal path, its waveform stabilizes. To identify the best algorithm, the Friedman test [44] is conducted on the data in Tables 8 and 9; the results are listed in Table 10.

Note that in Tables 8 and 9, L (cm) is the shortest path length found by the algorithm, T(n) is the number of iterations required to reach it, and t (s) is the time the algorithm takes to compute the result.

Table 8 Computational results achieved by three algorithms in different 2D environments
Table 9 Computational results achieved by 3 algorithms in different 3D environments
Table 10 Friedman test results of path planning

Several conclusions can be drawn from the data in Table 10. Considering the metrics alone, both L and T achieve better average ranks than t, and this conclusion holds in both 2D and 3D environments. Specifically, the average ranks for L are 1.00, 2.92, 2.89, and 1.78, which means that L achieves the best rank.

Thus, for these nine grid environments, L is the most important parameter for evaluating path planning, with T ranked behind it.

Comparing the three algorithms, the average rank for L achieved by GFA is lower than those achieved by the other two algorithms in both dimensions.

In addition, the Friedman test [44] results of the three algorithms (overall) for the nine grid environments are also recorded in Table 10. In detail, their average ranks are 1.09, 2.92, and 1.95 in the 2D environments and 1.10, 2.94, and 1.97 in the 3D environments, so GFA achieves the best rank.

Thus, GFA performs best in path planning for these nine grid environments; compared with the other algorithms, its performance is markedly better, so it produces better paths in the different grid environments.

5 Conclusion

As discussed above, GFA solves more quickly the problem that FA easily falls into local optima, so that fireflies trapped in a local optimum converge to one point at a faster rate.

As verified by the theoretical and experimental results, GFA outperforms the other algorithms on most of the test functions, usually taking fewer iterations to find the optimal value. Moreover, compared with the other algorithms, GFA shows better overall performance and is more suitable for solving practical problems.

Applying GFA to mobile robot path planning improves the computing power and reaction speed of the robot, enabling it to find an obstacle-avoiding route more quickly in environments of different dimensions.

The final experiments also show that GFA yields better solutions when optimizing path planning. Compared with FA and AFA, GFA shows better competitiveness, which indicates good research value and significance.

There are still many directions for improvement in this work, such as applying GFA in other fields, applying it in dynamic grid environments, and improving its performance in other respects.