1 Introduction

Global optimization is important in many scientific problems and engineering applications. In the literature, traditional mathematical methods have been used to solve global optimization problems in various fields [1,2,3], but they face great challenges on complex optimization problems, such as multimodal, discontinuous and non-convex problems. Therefore, meta-heuristic algorithms, which imitate biological evolution and insect/bird behaviors, have been proposed and applied to solve complex optimization problems.

In recent decades, population-based meta-heuristic algorithms have received wide attention from scholars because they are simple and easy to apply in various fields. Examples of population-based algorithms include Particle Swarm Optimization (PSO) [4], Bees Algorithm (BA) [5], Artificial Bee Colony (ABC) [6], Differential Evolution (DE) [7], Krill Herd (KH) [8], Coyote Optimization Algorithm (COA) [9], Bat Algorithm (BA) [10], Bacterial Foraging Optimization (BFO) [11], Fruit Fly Optimization Algorithm (FOA) [12], Sine Cosine Algorithm (SCA) [13], Grey Wolf Optimization (GWO) [14], Flower Pollination Algorithm (FPA) [15], Spotted Hyena Optimizer (SHO) [16], Barnacles Mating Optimizer (BMO) [17], Poor and Rich Optimization (PRO) algorithm [18], Multi-Verse Optimizer (MVO) [19], Whale Optimization Algorithm (WOA) [20] and Moth-Flame Optimization (MFO) [21].

Among the above algorithms, the Moth-Flame Optimization algorithm, proposed by Seyedali Mirjalili based on the flight behavior of moths in nature [21], has received special attention because of its good search ability. The MFO algorithm works by updating the positions of moths and generating flames, and it has been widely applied because it is simple and easy to implement. For example, in [22], MFO was used to optimize the parameters of a least squares support vector machine (LSSVM), and the MFO-LSSVM forecasting model was established. In [23], MFO was utilized to determine the control parameters of blade pitch controllers. In [24], MFO was used to solve the clustering problem in vehicular ad hoc networks. In [25], MFO was used to design antenna arrays. In [26], MFO was utilized to realize the optimal link cost in wireless router networks. To improve the control ability of a multiarea hybrid interconnected power system, MFO was used to optimize its controller [27]. Puja Singh and Shashi Prakash used MFO for the placement of multiple optical network units [28]. In [29], MFO was utilized to address the optimal reactive power dispatch (ORPD) problem.

To solve specific problems, researchers have developed several improved MFO algorithms. For example, in [30], to improve the global search ability and convergence speed of MFO, differential evolution was used to generate flames and the mechanism by which flames guide moths was modified. To increase the diversity of the population, the Levy flight strategy was integrated into MFO [31]. In [32], an Enhanced Moth-Flame Optimization was proposed to improve the balance between exploitation and exploration in MFO. In [33], Opposition-based Moth Flame Optimization (OMFO) was proposed for global optimization problems. In [34], MFO was hybridized with a mutation strategy to solve complex optimization tasks. In [35], a chaotic local search and Gaussian mutation were introduced into MFO to optimize KELM. In [36], an improved Moth Flame Optimization (IMFO) algorithm was proposed to solve real-world optimization problems. In [37], two spiral search mechanisms were proposed to improve the search ability of MFO.

Although the improved MFO algorithms proposed in the above literature enhance the global convergence of the MFO algorithm to a certain extent, their reported results show that they still risk falling into local optima. To further alleviate this problem and improve MFO’s performance, an improved MFO algorithm (IMFO) is proposed in this paper. The main contributions of this paper are summarized as follows:

  1. (1)

    The OOBL strategy is used to update the best and worst flames in the iterative process, which generates effective flames to guide the moths and thus enhances the exploration ability of MFO.

  2. (2)

    A modified position updating mechanism of moths is proposed by introducing a linear search and a mutation operator. Linear search and spiral search are integrated to improve convergence efficiency. Meanwhile, the Euclidean distance is used to select between the search strategy and the mutation operator in the iterative process. This mechanism preserves the diversity of the population while accelerating the convergence speed.

  3. (3)

    The performance of the IMFO algorithm is evaluated on 23 benchmark functions and the IEEE CEC 2014 benchmark test set, and compared with MFO, LMFO, MFO3, CMFO, IMFO2020, COA, CMAES, SSA, SCA, GSA, ABC, PSO, GWO, WOA, PSOGSA, HCLPSO, MVO, HSCA, EWOA, IDA and SHO. Meanwhile, IMFO is also used to solve three engineering optimization problems, and the results are compared with other well-known algorithms (such as SCA, MMA and GCA).

The rest of the paper is organized as follows: Section 2 introduces the Moth-Flame Optimization algorithm, Opposition-Based Learning (OBL) and Orthogonal Experiment Design (OED). In Section 3, the improved Moth-Flame Optimization algorithm is discussed in detail. Section 4 uses the classical benchmark functions and the IEEE CEC 2014 test set to estimate the performance of IMFO and compares it with other algorithms. In Section 5, IMFO is used to solve three engineering optimization problems. Section 6 presents the conclusions of this study.

2 Related works

2.1 Moth-flame optimization algorithm

MFO is a population-based optimization algorithm proposed by Seyedali Mirjalili based on the flight behavior of moths in nature [21]. In the search mechanism of MFO, flames and moths exchange information quickly, balancing exploitation and exploration. A brief review of MFO follows.

The moth population is expressed as:

$$ M = \left[ \begin{array}{c} M_{1} \\ M_{2} \\ \vdots\\ M_{n} \end{array} \right]= \left[ \begin{array}{ccccc} m_{11} & m_{12} & \ldots& {\ldots} & m_{1d} \\ m_{21} & m_{22} & \ldots& {\ldots} & m_{2d} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ m_{n1} & m_{n2} & \ldots& {\ldots} & m_{nd} \end{array} \right] $$
(1)

where n is the number of moths/flames and d is the number of variables (dimension) of each moth individual.

At the same time, the fitness values of the moths are expressed as:

$$ OM = \left[ \begin{array}{c} OM_{1} \\ OM_{2} \\ {\vdots} \\ OM_{n} \end{array} \right] $$
(2)

where OMi denotes the fitness function value of the corresponding moth \(M_{i}=\left [m_{i1}, m_{i2}, \cdots , m_{id}\right ], i = 1,2, \cdots , n\); the fitness function is determined by the actual problem. MFO initializes the population as follows:

$$ \begin{array}{@{}rcl@{}} \ M_{ij}=lb_{j}+u_{j}\left( ub_{j}-lb_{j}\right) \end{array} $$
(3)

where j = 1,2,⋯ ,d; uj is a random number between 0 and 1; ubj and lbj indicate the upper and lower bounds of the j-th variable, as follows:

$$ \begin{array}{@{}rcl@{}} \ ub=\left[ub_{1},ub_{2},\dots,ub_{d}\right] \end{array} $$
(4)
$$ \begin{array}{@{}rcl@{}} \ lb=\left[lb_{1},lb_{2},\dots,lb_{d}\right] \end{array} $$
(5)

where the values of ub and lb are determined by the actual situation.
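As a minimal illustration, the initialization of (3) can be sketched in Python (the paper's own experiments use MATLAB; the function and variable names below are ours):

```python
import random

def init_moths(n, d, lb, ub, seed=None):
    """Eq. (3): uniform random initialization of n moths with d
    variables, each within its bounds [lb[j], ub[j]]."""
    rng = random.Random(seed)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(n)]
```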

Correspondingly, the flames and their fitness values are expressed as:

$$ F = \left[ \begin{array}{c} F_{1} \\ F_{2} \\ \vdots\\ F_{n} \end{array} \right]= \left[ \begin{array}{ccccc} f_{11} & f_{12} & \ldots& {\ldots} & f_{1d} \\ f_{21} & f_{22} & \ldots& {\ldots} & f_{2d} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ f_{n1} & f_{n2} & \ldots& {\ldots} & f_{nd} \end{array} \right] $$
(6)
$$ OF = \left[ \begin{array}{c} OF_{1} \\ OF_{2} \\ {\vdots} \\ OF_{n} \end{array} \right] $$
(7)

The logarithmic spiral is the main mechanism for updating moth positions in MFO; it is defined as follows:

$$ \begin{array}{@{}rcl@{}} \ M_{i}(l+1)=\left\{\begin{array}{ll}{D_{i} \cdot e^{b t} \cdot \cos (2 \pi t)+F_{i}(l),} & {i \leq f_{n o}} \\ {D_{i} \cdot e^{b t} \cdot \cos (2 \pi t)+F_{f_{no}}(l),} & {i>f_{no}} \end{array}\right. \end{array} $$
(8)

where l represents the current iteration number; Mi(l + 1) denotes the i-th moth at iteration l + 1; b is a constant, set to 1 here; t is a random number in [r, 1], where r = − 1 + l ∗ (− 1/L), so the moth can converge to any point around the flame by changing t; L is the maximum number of iterations; Di represents the distance between the i-th moth and its corresponding flame, calculated as follows:

$$ \begin{array}{@{}rcl@{}} \ D_{i}=\left\{\begin{array}{ll}{|F_{i}-M_{i}|,} & {i \leq f_{n o}} \\ {|F_{f_{no}}-M_{i}|,} & {i>f_{no}} \end{array}\right. \end{array} $$
(9)

where fno denotes the adaptive number of flames and is calculated as:

$$ \begin{array}{@{}rcl@{}} \ f_{no}=round(n-\frac{l}{L}(n-1)) \end{array} $$
(10)

where round denotes rounding to the nearest integer. This adaptive flame number mechanism, which reduces the number of flames during the iterations, balances the exploration and exploitation abilities of MFO. The pseudo code of MFO is given in Algorithm 1.
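The spiral update of (8)–(9) and the adaptive flame count of (10) can be sketched in Python as follows (an illustrative sketch, not the paper's MATLAB implementation; drawing a fresh t per dimension is our simplification):

```python
import math
import random

def flame_number(l, L, n):
    """Eq. (10): adaptive number of flames at iteration l of L."""
    return round(n - l / L * (n - 1))

def spiral_step(moth, flame, l, L, b=1.0):
    """Eq. (8): move a moth along a logarithmic spiral toward its flame,
    with t drawn from [r, 1] and r shrinking from -1 toward -2 (Eq. (9)
    gives the distance D per dimension)."""
    r = -1 + l * (-1.0 / L)
    new = []
    for m_j, f_j in zip(moth, flame):
        t = r + random.random() * (1 - r)   # t in [r, 1]
        D = abs(f_j - m_j)                  # Eq. (9)
        new.append(D * math.exp(b * t) * math.cos(2 * math.pi * t) + f_j)
    return new
```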


2.2 Opposition-based learning

OBL evaluates the current point and its opposite point at the same time and then keeps the better of the two. The basic definition of OBL is given in Definition 1.

Definition 1 (Opposite number)

[38] Let x ∈ [a, b] be a real number. Its opposite, \( \breve {x}\), is defined as follows:

$$ \begin{array}{@{}rcl@{}} \ \breve{x}=a+b-x \end{array} $$
(11)

Definition 1 extends to high-dimensional space as follows.

Definition 2 (Opposite point in the d space)

[38] Let \(x=\left (x_{1}, \dots , x_{d}\right )\) be a point in d-dimensional space and \(x_{i} \in \left [a_{i}, b_{i}\right ], i=1,2, \dots , d\). The opposite of x is defined by \(\breve {x}=\left (\breve {x}_{1}, \dots , \breve {x}_{d}\right )\) as follows:

$$ \begin{array}{@{}rcl@{}} \ \breve{x}_{i}=a_{i}+b_{i}-x_{i} \end{array} $$
(12)
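Definition 2 is straightforward to express in code; a minimal Python sketch (illustrative names, not from the paper):

```python
def opposite_point(x, lb, ub):
    """Eq. (12): component-wise opposite of x in the box [lb, ub]."""
    return [a + b - xi for xi, a, b in zip(x, lb, ub)]
```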

2.3 Orthogonal experiment design

OED selects a small number of representative points from the full factorial experiment for testing. Owing to its uniform dispersion and strong comparability, it is widely used to improve the performance of optimization algorithms, such as GA [39].

An orthogonal table LM(QK) arranges K factors at Q levels in M representative combinations [39]. Table 1 is an example of L8(27).

Table 1 An orthogonal table L8(27)
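One standard construction of L8(27) takes the 7 columns as all non-trivial XOR combinations of 3 basic binary columns; the sketch below (our illustration, with levels coded 0/1 rather than 1/2) builds such a table and checks the defining orthogonality property:

```python
from itertools import combinations

def build_L8():
    """Construct a standard two-level orthogonal table L8(2^7):
    each row corresponds to a 3-bit code (a, b, c), and the 7 columns
    are the non-trivial XOR combinations of a, b and c."""
    rows = []
    for r in range(8):
        a, b, c = (r >> 2) & 1, (r >> 1) & 1, r & 1
        rows.append([a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c])
    return rows

def is_orthogonal(table):
    """Every pair of columns contains each level pair (0,0), (0,1),
    (1,0), (1,1) equally often -- the defining property of an
    orthogonal array."""
    cols = list(zip(*table))
    for u, v in combinations(cols, 2):
        counts = {}
        for pair in zip(u, v):
            counts[pair] = counts.get(pair, 0) + 1
        if set(counts.values()) != {2}:
            return False
    return True
```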

3 Improved moth-flame optimization algorithm

In this section, the improved Moth-Flame Optimization (IMFO) algorithm is described. The detailed introduction and discussion of the IMFO are as follows:

3.1 Motivation of improving MFO algorithm

Although MFO is effective in solving problems with unknown constrained search spaces, it sometimes lacks diversity. In MFO, the flames of the next iteration are generated by selecting the best individuals from the current moths and flames. This method realizes a rapid exchange of information between flames and moths, but it may also reduce the diversity of flames. If the flames fall into a local optimum far from the global optimum, it is difficult for the moths to escape from it. At the same time, the adaptive flame-guided position updating mechanism reduces the number of flames and improves the local search ability to a certain extent, but the exploration and exploitation abilities are not well balanced in the early and late stages of the iteration process.

Therefore, to alleviate the above problems of the MFO algorithm, the IMFO algorithm is proposed. The strategies used in IMFO are described in the following two sections.

3.2 Flames generation by OOBL strategy

To overcome the dimension degradation of the opposite solution, OOBL is proposed, combining OBL with OED. Assuming that Table 1 is used for the orthogonal design, two levels are defined in d-dimensional space for a flame individual \(F_{i}=\left [f_{i1}, f_{i2}, \cdots , f_{id}\right ]\) and its opposite individual \(\breve F_{i}=\left [\breve f_{i1}, \breve f_{i2}, \cdots , \breve f_{id}\right ]\). Because the individual dimension d is generally larger than the number of factors K, the orthogonal table cannot be used directly. Following [39], the individual dimension is divided into K subvectors:

$$ \left\{\begin{array}{l}{F_{i1}=\left[f_{i1}, f_{i2}, \cdots, f_{ir_{1}}\right]} \\ {F_{i2}=\left[f_{ir_{1}+1}, f_{ir_{1}+2}, \cdots, f_{ir_{2}}\right]} \\ {\cdots} \\ {F_{iK}=\left[f_{ir_{K-1}+1}, f_{ir_{K-1}+2}, \cdots, f_{id}\right]} \end{array}\right. $$
(13)
$$ F_{i} = \left[ \begin{array}{ccccc} F_{i1} & F_{i2} & {\ldots} &F_{iK} \end{array} \right] $$
(14)

where 1 ≤ r1 < r2 < ⋯ < rK − 1 ≤ d.

To improve the diversity of the population and enhance the exploration ability of the MFO algorithm, the optimal flame and the worst flame are selected to generate the OOBL flames. The best flame is selected to improve the local search ability, and the worst flame is selected to improve the ability to jump out of local optima. The OOBL flames FM are generated as follows:

$$ \begin{array}{@{}rcl@{}} \ F_{bestnew}&=&lb_{best}\cdot ones(1,d)\\ &&+(ub_{best}\cdot ones(1,d)-F_{best}) \end{array} $$
(15)
$$ \begin{array}{@{}rcl@{}} \ F_{worstnew}&=&lb_{worst}\cdot ones(1,d)+rand\\ &&\cdot (ub_{worst}\cdot ones(1,d)-F_{worst}) \end{array} $$
(16)
$$ \begin{array}{@{}rcl@{}} \ FM=[F_{bestnew};F_{worstnew}] \end{array} $$
(17)

where Fbest and Fworst represent the optimal flame and the worst flame; Fbestnew and Fworstnew represent the OOBL solutions of the optimal flame and the worst flame; ones(1,d) represents a 1 × d vector whose elements are all 1; rand is a random number in [0,1], chosen to increase randomness and the possibility of jumping out of local optima; ubbest, lbbest, ubworst and lbworst represent the upper and lower bounds of the optimal flame and the worst flame, respectively.
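Equations (15)–(17) can be sketched in Python as below (an illustrative sketch: for simplicity it uses the global bounds in place of the per-flame bounds ubbest, lbbest, ubworst and lbworst, and all names are ours):

```python
import random

def oobl_flames(F_best, F_worst, lb, ub, rng=random):
    """Eqs. (15)-(17): opposite solutions of the best and worst flames.
    The worst flame's opposite is scaled by a random factor, as in
    Eq. (16), to add diversity."""
    F_bestnew = [a + b - f for f, a, b in zip(F_best, lb, ub)]          # Eq. (15)
    r = rng.random()
    F_worstnew = [a + r * (b - f) for f, a, b in zip(F_worst, lb, ub)]  # Eq. (16)
    return [F_bestnew, F_worstnew]                                      # Eq. (17)
```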

3.3 Modified position updating mechanism of moths

To further improve the convergence speed and global search ability of the MFO algorithm, a modified position updating mechanism of moths based on a hybrid search strategy and a mutation operator is proposed. The modified mechanism, introduced into IMFO, is divided into two parts according to the Euclidean distance. When a moth is near the best flame, the spiral and linear search mechanism is used to enhance the local search ability; when a moth is far from the best flame, the mutation operator is used to enhance the global search ability. The maximum Euclidean distance in the search space is:

$$ \begin{array}{@{}rcl@{}} \ {D_{E}}=\sqrt{{\sum}_{j=1}^{d}(ub_{j}-lb_{j})^{2}} \end{array} $$
(18)

The Euclidean distance between the i-th moth and the optimal flame in the current iteration is:

$$ \begin{array}{@{}rcl@{}} \ {d_{i} }=\sqrt{{\sum}_{j=1}^{d}(M_{ij}-F_{bestj})^{2}} \end{array} $$
(19)

The modified position updating mechanism of moths is elaborated as follows:

  1. (1)

    In the iterative search process, when di ≤ wDE, the linear search mechanism is introduced. The position of the moths is updated as follows:

    $$ \begin{array}{@{}rcl@{}} \ M_{i}(l+1)=\left\{\begin{array}{ll}{F_{i}-A \cdot D_{i}^{\prime},} & {i \leq f_{no}} \\ {D_{i} \cdot e^{b t} \cdot \cos (2 \pi t)+F_{f_{no}}(l),} & {i>f_{no}} \end{array}\right. \end{array} $$
    (20)
    $$ \begin{array}{@{}rcl@{}} \begin{array}{c}{D_{i}^{\prime}=\left|C \cdot F_{i}-M_{i}\right|} \\ {D_{i}=\left|F_{f_{no}}-M_{i}\right|} \end{array} \end{array} $$
    (21)

    where w is a weight coefficient, set to 0.1 in this paper; A = 2aR − a; C = 2R; a = − 1 + l ∗ (− 1/L); R is a random number in [0,1]. By adjusting the values of A and C, different places around the flame can be reached from the current position.

  2. (2)

    In the iterative search process, when di > wDE, the mutation operator is used to update the moth position and improve the diversity of the population. The mutation operator is defined as [40]:

    $$ \begin{array}{@{}rcl@{}} \ M_{i}(l+1)=M_{g_{1}}(l)+R_{m} \otimes\left( M_{g_{2}}(l)-M_{g_{3}}(l)\right) \end{array} $$
    (22)

    where \(M_{g_{1}}\), \(M_{g_{2}}\) and \(M_{g_{3}}\) are randomly selected from M; g1, g2 and g3 are random indices in [1, n], none of which equals i. \(R_{m}=\left [q_{1}, q_{2}, \cdots , q_{d}\right ]\) is a random vector, where qj (j = 1,2,⋯ ,d) is a uniformly distributed random number in [0,1]; ⊗ represents the Hadamard product.
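The distance-gated strategy selection of (18)–(22) can be sketched as follows (an illustrative Python sketch with 0-based indices; the paper's experiments use MATLAB, and all names here are ours):

```python
import math
import random

def update_moth(i, M, F_best, F, fno, l, L, lb, ub, w=0.1, b=1.0, rng=random):
    """Eqs. (18)-(22): choose the search strategy for moth i according
    to its Euclidean distance to the best flame."""
    d = len(lb)
    DE = math.sqrt(sum((u - v) ** 2 for u, v in zip(ub, lb)))        # Eq. (18)
    di = math.sqrt(sum((m - f) ** 2 for m, f in zip(M[i], F_best)))  # Eq. (19)
    a = -1 + l * (-1.0 / L)
    flame = F[i] if i < fno else F[fno - 1]
    if di <= w * DE:                          # near the best flame: Eq. (20)
        if i < fno:                           # linear search
            new = []
            for m_j, f_j in zip(M[i], flame):
                R = rng.random()
                A, C = 2 * a * R - a, 2 * R
                new.append(f_j - A * abs(C * f_j - m_j))             # Eq. (21)
            return new
        t = a + rng.random() * (1 - a)        # spiral search, t in [r, 1]
        return [abs(f_j - m_j) * math.exp(b * t) * math.cos(2 * math.pi * t) + f_j
                for m_j, f_j in zip(M[i], flame)]
    g1, g2, g3 = rng.sample([k for k in range(len(M)) if k != i], 3)  # Eq. (22)
    return [M[g1][j] + rng.random() * (M[g2][j] - M[g3][j]) for j in range(d)]
```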

Based on the above explanation, the pseudo code of IMFO algorithm is shown in Algorithm 2 and the steps of IMFO are illustrated as follows:

  • Step 1 Parameter initialization

Initialize the number of moths/flames n, the maximum number of iterations L, the number of variables (dimension) of each moth d, and the parameters a, t, A, C, w.

  • Step 2 Initialize the positions of moths using (3).

  • Step 3 Calculate the number of flames using (10) and the fitness function values of moths.

  • Step 4 Update flames

The moths of the current iteration, the flames of the previous iteration, and the flames generated by orthogonal opposition-based learning are sorted, and the best n individuals are selected to form the flame population of the current iteration.

  • Step 5 Update moths

Update the positions of moths using (20)–(22).

  • Step 6 Orthogonal opposition-based learning

In the search process, the best flame and the worst flame of each iteration are chosen as the base flame, and the orthogonal reverse flames are formed using (15)–(17).

  • Step 7 Stop or continue the iterative process.

Repeat Step 3 to Step 6 until the number of iterations l reaches L or the desired fitness function value is obtained.
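The steps above can be put together as a simplified, self-contained Python sketch (our illustration, not the paper's MATLAB implementation: the OOBL of Step 6 is reduced to the plain opposite of the best flame, Eq. (15), and all function and parameter names are ours):

```python
import numpy as np

def sphere(x):
    """Test objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def imfo_sketch(fobj, lb, ub, n=20, L=100, w=0.1, seed=0):
    rng = np.random.default_rng(seed)
    d = lb.size
    M = lb + rng.random((n, d)) * (ub - lb)          # Step 2: Eq. (3)
    F = M.copy()
    OF = np.array([fobj(f) for f in F])
    DE = np.sqrt(np.sum((ub - lb) ** 2))             # Eq. (18)
    for l in range(1, L + 1):
        fno = round(n - l / L * (n - 1))             # Step 3: Eq. (10)
        # Step 4: merge moths and flames, keep the best n as new flames
        pool = np.vstack([F, M])
        pf = np.array([fobj(p) for p in pool])
        idx = np.argsort(pf)[:n]
        F, OF = pool[idx], pf[idx]
        a = -1 + l * (-1.0 / L)
        for i in range(n):                           # Step 5: Eqs. (19)-(22)
            di = np.sqrt(np.sum((M[i] - F[0]) ** 2))
            if di <= w * DE:                         # near the best flame
                if i < fno:                          # linear search, Eq. (20)
                    R = rng.random(d)
                    A, C = 2 * a * R - a, 2 * R
                    M[i] = F[i] - A * np.abs(C * F[i] - M[i])
                else:                                # spiral search
                    t = a + rng.random(d) * (1 - a)
                    D = np.abs(F[fno - 1] - M[i])
                    M[i] = D * np.exp(t) * np.cos(2 * np.pi * t) + F[fno - 1]
            else:                                    # far away: mutation, Eq. (22)
                g = rng.choice([k for k in range(n) if k != i], 3, replace=False)
                M[i] = M[g[0]] + rng.random(d) * (M[g[1]] - M[g[2]])
        M = np.clip(M, lb, ub)
        # Step 6 (reduced OOBL): opposite of the best flame, Eq. (15)
        opp = lb + ub - F[0]
        if fobj(opp) < OF[0]:
            F[0], OF[0] = opp, fobj(opp)
    return F[0], OF[0]
```

Because Step 4 keeps only the best n individuals from the merged pool, the best flame value is non-increasing over the iterations.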

3.4 Computational complexity of IMFO

The computational complexity of an algorithm represents the resources it consumes. The computational complexity of the MFO algorithm depends on the population initialization, position updating, position evaluation and sorting mechanism. The complexity of initializing the population is O(n × D); the complexity of position updating is O(L × n × D); the complexity of position evaluation is O(L × n × D); and since the sorting mechanism adopts quicksort, its worst-case complexity is O(L × n2). The total computational complexity of MFO is therefore O(n × D) + O(L × (n2 + 2n × D)). In IMFO, the orthogonal opposition-based learning strategy additionally updates the optimal and worst flames, so the complexity of position updating is O(L × (n + 14) × D) and the complexity of position evaluation is O(L × (n + 14) × D). The final computational complexity of IMFO is O(n × D) + O(L × (n2 + 2(n + 14) × D)). Thus IMFO has the same order of complexity as MFO; with respect to the population size, both are O(n2).

4 Simulation study and discussion

Compared with the MFO algorithm, the IMFO algorithm proposed in this paper is more effective in exploring the global optimal solution and exploiting the promising regions of the search space. In this section, to verify the performance and advantages of the IMFO algorithm, the 23 benchmark functions and the CEC 2014 benchmark set are adopted. All simulation experiments are conducted on a Windows 10 Professional system with an Intel(R) Core(TM) i3-6100 CPU (3.70GHz) and 4.0 GB of RAM, and the programming environment is MATLAB R2014a.

4.1 Experimental comparisons on classical benchmark set

The 23 classical benchmark problems used to test the performance of IMFO are divided into three parts: unimodal problems (F1-F7), multi-modal problems (F8-F13) and fixed-dimension multi-modal problems (F14-F23). The details of the classical benchmark set are given in Table 2. In addition, the performance of the IMFO algorithm is compared with other algorithms: MFO [21], LMFO [31], MFO3 [37], CMFO [41], IMFO2020 [42], COA [9], CMAES [43], SSA [40], SCA [13], GSA [44], ABC [6], PSO [4], GWO [14], PSOGSA [45], HCLPSO [46], WOA [20], HSCA [47], EWOA [48], IDA [49], SHO [16]. The parameter settings of the 20 optimization algorithms are given in Table 3, and each algorithm runs 30 times independently on each test problem. The results on the 23 benchmark problems are given in Tables 4, 5, 6, 7, 8, 9, 10 and 11.

Table 2 Classical benchmark functions
Table 3 Parameter settings
Table 4 Comparison results of various mechanisms on the classic benchmark functions
Table 5 Test results of eight variant algorithms
Table 6 Comparisons of MFO and IMFO algorithms on unimodal and multi-modal problems with dimension 10
Table 7 Comparisons of MFO and IMFO algorithms on unimodal and multi-modal problems with dimension 30
Table 8 Comparisons of MFO and IMFO algorithms on unimodal and multi-modal problems with dimension 50
Table 9 Comparisons of MFO and IMFO algorithms on fixed dimension multi-modal problems
Table 10 Comparisons of IMFO and other algorithms on classical benchmark functions
Table 11 Test results of the 20 algorithms

4.1.1 The influence of improving mechanism on MFO

In this section, the different mechanisms of the proposed method are simulated and analyzed. Here, MLMFO denotes MFO improved by the mutation operator and linear update mechanism; OOBLMFO denotes MFO improved by orthogonal opposition-based learning; LMFO denotes MFO improved by the linear update mechanism; MMFO denotes MFO improved by the mutation operator; OOBLLMFO denotes MFO improved by orthogonal opposition-based learning and the linear update mechanism; OOBLMMFO denotes MFO improved by orthogonal opposition-based learning and the mutation operator. In the simulations of this section, the population size, dimension and number of iterations are 30, 30 and 500, respectively, and each algorithm runs 30 times independently on each test function. The simulation results are given in Table 4.

As can be seen from Table 4, the MFO variants are superior to the MFO algorithm on most test functions in the Mean and Std indexes. To intuitively analyze the convergence performance, Fig. 1 shows the convergence graphs of the eight algorithms on six test functions. It can be seen that the IMFO algorithm has advantages in both convergence accuracy and convergence speed. In conclusion, the IMFO configuration selected in this paper is the best of the above variants.

Fig. 1

Convergence curves on 6 functions

To further analyze the performance of IMFO, the Wilcoxon signed rank test [50] is used to compare IMFO with the other variants. The test results are given in Table 5 and are indicated by '1/0/-1'. As shown in Table 5, over the 23 test functions, IMFO is superior to MFO on 19 functions and performs the same as MFO on 4 functions. Accordingly, IMFO compares favorably with the other variants. At the same time, the Friedman test [34] is used to rank the above eight algorithms. As shown in Table 5, the average ranking (Ave) of IMFO is the best.

To sum up, IMFO shows the best performance among all variants.

4.1.2 Analysis on classical benchmark set

To verify the performance of IMFO, the IMFO and MFO algorithms are simulated and analyzed on the 23 classical test problems in this subsection. Tables 6, 7, 8 and 9 give the Best, Worst, Mean and Std of each algorithm over 30 independent runs. On F1-F13, three dimensions (10, 30 and 50) are selected to evaluate the performance of IMFO and MFO. The other simulation parameters are given in Table 3.

In the unimodal problems (F1-F7), there is only one global optimal solution and no local optima. Consequently, F1-F7 are used to estimate the convergence rate and exploitation ability of IMFO. The simulation results of the IMFO and MFO algorithms are given in Tables 6, 7 and 8. It can be observed that IMFO outperforms MFO in the Best, Worst, Mean and Std indexes. Therefore, the results on F1 to F7 show that the exploitation ability and convergence rate of IMFO in the search space are effective.

Multi-modal problems (F8-F13) have many local optima and are used to estimate the exploration ability of IMFO. The comparison results for F8 to F13 are given in Tables 6, 7 and 8. For F9 and F11, IMFO finds the best solution with good stability. For F8, F10, F12 and F13, IMFO is better than MFO in the Mean and Std indexes. Therefore, the results show that the exploration ability of IMFO is better than that of MFO.

Fixed-dimension multi-modal problems (F14-F23) have fewer dimensions and fewer local optima. They are used to assess the balance between the exploration and exploitation abilities of IMFO. In Table 9, for F14, F15, F21, F22 and F23, IMFO is superior to MFO. For F20, the Mean of IMFO is not as good as that of MFO, but the Std of IMFO is smaller. For F16, F17, F18 and F19, IMFO and MFO have similar accuracy.

Therefore, from the comparison results in Tables 6, 7, 8 and 9, it can be concluded that the IMFO algorithm is effective in searching for the global optimal solution.

To better illustrate the convergence performance of the IMFO algorithm, the convergence curves in dimension 30 are drawn in Fig. 2, covering unimodal, multi-modal and fixed-dimension multi-modal problems, respectively. The ordinate represents the mean objective function value over 30 independent runs, and the abscissa represents the number of iterations. From Fig. 2, it can be seen that the convergence speed and accuracy of IMFO are better than those of MFO on most test functions.

Fig. 2

Convergence curves on 6 functions

4.1.3 Comparison with other algorithms

To further evaluate the performance of IMFO, 20 optimization algorithms are selected for comparison. Their parameter settings are shown in Table 3. Table 10 gives the comparative results of the IMFO algorithm against the MFO, LMFO, MFO3, CMFO, IMFO2020, COA, CMAES, SSA, SCA, GSA, ABC, PSO, GWO, PSOGSA, HCLPSO, WOA, HSCA, EWOA and IDA algorithms. Each algorithm again runs 30 times independently on each classical test problem.

For F1, F2, F4, F9 and F11, IMFO obtains the best solution. For F3, IMFO gets better results than the other algorithms. On most of the other test problems, IMFO also yields better results. At the same time, Fig. 3 shows the convergence curves of the 20 algorithms on 6 test problems. It can be seen that IMFO has better convergence accuracy and speed on these six problems than the other algorithms.

Fig. 3

Convergence curves on 6 functions

Meanwhile, the Wilcoxon signed rank test and the Friedman test are applied to the proposed algorithm; their results are given in Table 11. The results of IMFO are better than those of the other 19 algorithms on most test problems, and the average ranking (Ave) of IMFO is the best.

Therefore, compared with other algorithms, the IMFO algorithm is competitive and performs better in convergence speed, search accuracy, robustness and jumping out of local optima.

4.2 Experimental comparisons on CEC 2014

To further test the performance of the IMFO algorithm, the CEC 2014 test set [51] is adopted as the second set of test problems. The CEC 2014 problems are more difficult than the classical test problems. The set consists of four groups: (1) unimodal (f1-f3); (2) multimodal (f4-f16); (3) hybrid (f17-f22); and (4) composite (f23-f30) problems. In this experiment, the individual dimension is 30, the population size is 30 and the maximum number of iterations is 10000. The results on these test problems are given in Tables 12, 13 and 14.

Table 12 Comparisons of MFO and IMFO algorithms on CEC 2014 test problems
Table 13 Comparisons of IMFO and other algorithms on CEC 2014 problems

4.2.1 Analysis on CEC 2014

f1-f3 are the unimodal problems. Table 12 shows that IMFO can effectively exploit the search space: its performance is better than that of MFO on all four indicators. Therefore, the modified moth position updating strategy and the orthogonal opposite flame generation are shown to be effective for exploitation in the search space.

f4 to f16 are multimodal problems. As can be seen from Table 12, IMFO is superior to MFO on all indicators except the Std of f16. For f16, although the Std of IMFO is larger than that of MFO, the Mean, Best and Worst indices of IMFO are better. This shows that IMFO has better exploration ability than MFO. The results indicate that modifying the moth position update mechanism with the mutation operator and the Euclidean distance helps to improve the exploration ability of MFO.

f17-f22 and f23-f30 are the hybrid and composite test problems of the CEC 2014 test set, respectively. Table 12 shows that IMFO performs better than MFO on test problems f17 to f30 and has better global search ability.

4.2.2 Comparison with other algorithms

In this subsection, the IMFO algorithm is compared with the MFO, LMFO, MFO3, CMFO, IMFO2020, SSA, SCA, PSO, WOA, HSCA, EWOA, IDA, MVO and SHO algorithms on the CEC 2014 test problems. Table 13 shows the results of the 16 algorithms on the test functions, with 30 dimensions and 10000 iterations. It can be observed from Table 13 that IMFO is strongly competitive with the other algorithms in the Mean and Std indexes. The results show that IMFO obtains competitive solutions by introducing OOBL and the modified moth position update strategy.

To verify the effectiveness of the IMFO improvements, the Wilcoxon signed rank test and the Friedman test are again applied, with the parameter settings consistent with those used for the classical benchmark set. The statistical results are shown in Table 14, with the same symbols as in Section 4.1.1. In Table 14, the results of IMFO are better than those of the other algorithms on most test problems, and the average ranking (Ave) of IMFO is the best.

Table 14 Test results of the 16 algorithms

5 Engineering optimization problems

In the previous sections, the performance of IMFO was evaluated on the classical test set and the standard CEC 2014 test set. In this section, to verify the ability of IMFO to solve practical problems, it is applied to three practical engineering optimization problems.

5.1 Pressure vessel design problem

This problem is shown in Fig. 4 [21]. Its purpose is to obtain a cylindrical pressure vessel with the lowest possible cost. Four parameters are to be optimized in Fig. 4: the thickness of the head Th, the thickness of the shell Ts, the length of the cylindrical shell L and the inner radius R. The mathematical formulation of this problem is as follows [21, 52, 53]:

$$ \begin{array}{@{}rcl@{}} \textbf{Min} \ F_{1}(x)&=&0.6224 x_{1} x_{3} x_{4}+1.7781 x_{2} {x_{3}^{2}}+19.84 {x_{1}^{2}} x_{3}\\ &&+3.1661 {x_{1}^{2}} x_{4} \end{array} $$
(23)
$$ x=\left( x_{1}, x_{2}, x_{3}, x_{4}\right)=\left( T_{s}, T_{h}, R, L\right) $$
(24)
$$ \begin{array}{l}{ \textbf {s.t.} \ g_{1}(x)=0.0193 x_{3}-x_{1} \leq 0} \\ \qquad {g_{2}(x)=0.00954 x_{3}-x_{2} \leq 0} \\ \qquad {g_{3}(x)=1296000-\frac{4}{3} \pi {x_{3}^{3}}-\pi {x_{3}^{2}} x_{4} \leq 0} \\ \qquad {g_{4}(x)=x_{4}-240 \leq 0} \\ \qquad {1 \times 0.0625 \leq x_{1}, x_{2} \leq 99 \times 0.0625} \\ \qquad {10 \leq x_{3}, x_{4} \leq 200} \end{array} $$
(25)
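As a concrete illustration, the objective (23) and constraints (25) can be evaluated with a static penalty function, a common way of handling constraints in meta-heuristics. The penalty weight and the candidate values below are illustrative assumptions, not results from this paper.

```python
# Minimal sketch: pressure vessel cost F1 with a static penalty for
# constraint violations. The penalty weight 1e6 is an assumption.
import math

def pressure_vessel_cost(x, penalty=1e6):
    x1, x2, x3, x4 = x  # Ts, Th, R, L
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 19.84 * x1**2 * x3 + 3.1661 * x1**2 * x4)
    # Constraints g1..g4; each must satisfy g(x) <= 0 to be feasible.
    g = [
        0.0193 * x3 - x1,
        0.00954 * x3 - x2,
        1296000 - (4.0 / 3.0) * math.pi * x3**3 - math.pi * x3**2 * x4,
        x4 - 240,
    ]
    # Add a penalty proportional to the total constraint violation.
    return cost + penalty * sum(max(0.0, gi) for gi in g)

# Example with a feasible (but not optimal) candidate; values are
# illustrative only.
print(pressure_vessel_cost([1.0, 0.5, 45.0, 180.0]))
```

Any meta-heuristic, including IMFO, can then minimize this penalized function directly without special constraint-handling machinery.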
Fig. 4 Pressure vessel design problem

Table 15 shows the comparison results of the IMFO algorithm and other heuristic optimization algorithms on this problem, including the values of each variable and the corresponding objective value. From Table 15, it can be seen that IMFO achieves a lower cost than the other optimization algorithms.

Table 15 Comparison of results on pressure vessel design problem

5.2 Cantilever beam design problem

This problem is shown in Fig. 5 [21], and its purpose is to find a set of values that minimizes the weight of the cantilever. There are five parameters to be optimized: the side lengths xi (\(i=1, 2, \dots , 5\)) of the square cross sections of the five beam segments. The mathematical expressions of this problem are as follows [21, 47, 52]:

$$ \textbf{Min} \ F_{2}(x)=0.0624 \times \sum\limits_{i=1}^{5} x_{i} $$
(26)
$$ \textbf{s.t.} \ g(x)=\frac{61}{{x_{1}^{3}}}+\frac{37}{{x_{2}^{3}}}+\frac{19}{{x_{3}^{3}}}+\frac{7}{{x_{4}^{3}}}+\frac{1}{{x_{5}^{3}}}-1 \leq 0 $$
(27)
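The objective (26) and single constraint (27) translate into the following penalty-based sketch; as before, the penalty weight and the sample candidate are assumptions for illustration.

```python
# Minimal sketch: cantilever beam weight F2 with a static penalty for
# the single constraint g(x) <= 0. The penalty weight is an assumption.
def cantilever_weight(x, penalty=1e6):
    # x holds the five square cross-section side lengths x1..x5.
    weight = 0.0624 * sum(x)
    g = (61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3
         + 7 / x[3]**3 + 1 / x[4]**3 - 1)
    return weight + penalty * max(0.0, g)

# Example with illustrative (feasible but not optimal) side lengths.
print(cantilever_weight([6.1, 5.4, 4.5, 3.5, 2.2]))
```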
Fig. 5 Cantilever beam design problem

Table 16 shows the comparison results of the IMFO algorithm and other heuristic optimization algorithms on this problem, including the values of each variable and the corresponding objective value. From Table 16, it can be seen that IMFO achieves a lower weight than the other optimization algorithms.

Table 16 Comparison of results on cantilever beam design problem

5.3 Tension/compression spring design problem

This optimization problem is given in Fig. 6 [14]. Its purpose is to minimize the weight of the spring. As shown in Fig. 6, there are three parameters to be optimized: the wire diameter d, the mean coil diameter D and the number of active coils N. The mathematical expressions of this problem are as follows [21, 55, 59, 60]:

$$ \textbf{Min} \ F_{3}(x)=\left( x_{3}+2\right) x_{2} {x_{1}^{2}} $$
(28)
$$ x=\left( x_{1}, x_{2}, x_{3}\right)=(d, D, N) $$
(29)
$$ \begin{aligned} \textbf{s.t.} \ & g_{1}(x)=1-\frac{{x_{2}^{3}} x_{3}}{71785 {x_{1}^{4}}} \leqslant 0 \\ & g_{2}(x)=\frac{4 {x_{2}^{2}}-x_{1} x_{2}}{12566\left( x_{2} {x_{1}^{3}}-{x_{1}^{4}}\right)}+\frac{1}{5108 {x_{1}^{2}}}-1 \leqslant 0 \\ & g_{3}(x)=1-\frac{140.45 x_{1}}{{x_{2}^{2}} x_{3}} \leqslant 0 \\ & g_{4}(x)=\frac{x_{1}+x_{2}}{1.5}-1 \leqslant 0 \end{aligned} $$
(30)

where \(0.05 \leqslant x_{1} \leqslant 2.00\), \(0.25 \leqslant x_{2} \leqslant 1.30\), \(2.00 \leqslant x_{3} \leqslant 15.0\).
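The objective (28) and the four constraints (30) can likewise be combined into a single penalized function; the penalty weight and the sample point below are illustrative assumptions.

```python
# Minimal sketch: spring weight F3 with a static penalty over the four
# constraints g1..g4. The penalty weight 1e6 is an assumption.
def spring_weight(x, penalty=1e6):
    d, D, N = x  # wire diameter, mean coil diameter, active coils
    weight = (N + 2) * D * d**2
    g = [
        1 - (D**3 * N) / (71785 * d**4),
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
        + 1 / (5108 * d**2) - 1,
        1 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1,
    ]
    return weight + penalty * sum(max(0.0, gi) for gi in g)

# Example with an illustrative feasible (but not optimal) candidate
# inside the stated bounds.
print(spring_weight([0.06, 0.5, 10]))
```

In practice the search is restricted to the bounds above, e.g. by clamping each candidate to \([0.05, 2.00] \times [0.25, 1.30] \times [2.00, 15.0]\) before evaluation.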

Fig. 6 Tension/compression spring design

Table 17 shows the comparison results of the IMFO algorithm and other heuristic optimization algorithms on this problem, including the values of each variable and the corresponding objective value. From Table 17, it can be seen that IMFO achieves a lower weight than the other optimization algorithms.

Table 17 Comparison of results on tension/compression spring design

6 Conclusions

In this paper, an improved Moth-Flame Optimization (IMFO) algorithm is proposed to solve global optimization problems. The IMFO is mainly realized through a flame generation strategy and a modified position update mechanism for moths. The flame generation strategy uses OOBL to generate effective flames to guide the moths, which improves the ability of the MFO algorithm to jump out of local optima and enhances its exploration ability. The modified position update mechanism of moths is based on spiral search and a mutation operator, which helps to increase the convergence speed of IMFO. To verify its effectiveness, IMFO has been compared with MFO, LMFO, MFO3, CMFO, IMFO2020, SSA, SCA, GSA, ABC, PSO, GWO, WOA, PSOGSA, HCLPSO, MVO, HSCA, EWOA, IDA and SHO on 23 benchmark functions and the CEC 2014 benchmark test functions. The comparative results show that the IMFO algorithm is effective, accurate and stable. In addition, IMFO is also compared with other well-known algorithms (such as SCA, MMA and GCA) on three practical engineering optimization problems, and it obtains competitive solutions on all of them. Therefore, the comparative analysis in this paper shows that the IMFO algorithm overcomes some shortcomings of the MFO algorithm and can be applied to both classical test problems and practical engineering optimization.