1 Introduction

Storn and Price [1] introduced a population-based optimization technique known as Differential Evolution (DE). It has been widely applied to various optimization problems, including continuous [2], discrete [3], constrained [4], and unconstrained [5] problems, and has also achieved success in practical applications [6,7,8,9]. However, like other evolutionary algorithms, DE is prone to becoming trapped in local optima [10, 11], and its performance still needs improvement.

Mutation, crossover [12, 13], and selection operators are prominent features of the DE algorithm. These three operations greatly influence DE’s efficiency. Among them, the mutation operation has garnered significant attention from researchers and undergone extensive investigation. Typically, the mutation operation guides population evolution by using difference information generated by different individuals.

Many researchers have recently introduced various mutation strategies to further enhance the algorithm’s performance, such as DE/rand/1, DE/best/1 [14], DE/current-to-rand/1, and DE/current-to-pbest/1 [15]. These strategies have greatly improved the performance of DE. For example, DE/rand/1 can thoroughly explore the entire search space, while DE/best/1 can converge quickly toward superior solutions. The DE/current-to-pbest/1 strategy proposed by Zhang and Sanderson strikes a good balance between exploration and convergence speed, and it has found widespread use in subsequent research [16, 17]. Based on the advantages of different mutation operations in solving various problems, some researchers have combined multiple mutation strategies to achieve certain effects [18,19,20,21]. In summary, the choice of mutation strategy directly determines the exploratory and exploitative performance of individuals. However, the impact of mutation strategies on the determination of individual stagnation remains an unexplored area.

Parameter settings play a crucial role in algorithm performance. In the DE algorithm, the main parameters include the scaling factor (F), the crossover rate (CR), and the population size (NP). The scaling factor F and the crossover rate CR primarily control the magnitude of disturbances during the evolutionary process, which significantly affects the algorithm’s exploratory and exploitative capabilities. Traditionally, F and CR are set to fixed values. However, to better adapt to diverse optimization problems, researchers have begun to explore adaptive or dynamic adjustment strategies based on the characteristics of the problem. For example, dynamically adjusting F based on feedback information during the evolutionary process can achieve a better performance balance during the execution of the algorithm [22,23,24]. Additionally, researchers have explored fixed or adaptive population size strategies to accommodate the population’s needs at different stages of iteration, aiming to balance the diversity and convergence of the population [25,26,27,28,29]. In summary, by studying parameter settings in depth, researchers aim to enhance the population’s search performance. However, to the best of our knowledge, there are currently no studies on using parameter settings to improve the algorithm’s ability to detect local optima.

In DE algorithms, the selection strategy has a significant influence on algorithm performance. Most existing DE algorithms adopt a greedy selection strategy based on fitness values. However, some researchers attempt to enhance algorithm performance by modifying the selection strategy in various ways [30, 31]. For instance, Das et al. [32] introduced a novel selection strategy, which calculates the likelihood of an individual accepting an inferior trial vector from the fitness ratio between the target vector and the trial vector. This approach can prevent population stagnation but overlooks the impact of the iteration phase, which may affect population evolution, that is, the process in which the individuals in the population gradually approach the optimal solution. Moreover, Yu et al. [33] introduced a survivor selection method inspired by the simulated annealing algorithm. This approach calculates the survival probability of the trial vector based on factors such as the temperature and the fitness difference between the trial vector and the parent vector. Abbas et al. [34] introduced a DE algorithm based on tournament selection. The tournament selection strategy is more conducive to enhancing population diversity during the selection process. However, the evaluation criterion only considers the fitness value of the individual, which may lead to inefficient searching. Ghosh et al. [35] determine the survival probability of failed trial vectors based on the fitness difference between the trial vector and the target vector, as well as the Manhattan distance between them. Despite its ability to dynamically fine-tune the exploration and exploitation capabilities of the population, this method does not consider the impact of individual update status, which may also affect the evolution of the population. Finally, Zeng et al. [36], based on the aforementioned considerations, proposed a new selection strategy based on evolutionary status. Unlike previous methods, this approach does not use fitness as the criterion for accepting worse individuals but adjusts the probability of accepting discarded trial vectors using individual update states and iteration states. This method bolsters the population’s capability to escape local optima, boosting variety in the initial phases and promoting convergence in subsequent stages. In summary, although some researchers have studied selection strategies, the current research in this area remains insufficient.

Based on the foregoing discussion, we have identified that selection strategies that accept discarded individuals with inferior fitness under certain conditions can enhance the performance of the DE algorithm. Existing research on selection strategies primarily falls into three categories:

1. Acceptance probability based on fitness or Euclidean distance differences between trial and parent vectors: This method enhances population diversity and prevents evolutionary stagnation while avoiding the acceptance of overly inferior trial individuals. However, it entirely neglects the individual’s inherent potential for exploration and exploitation, thus failing to implement targeted selection strategies for individuals.

2. Optimal individual selection from subsets of generated trial individuals and the original population: This approach transforms the comparison between trial individuals and target individuals into a comparison among all individuals within subsets, allowing the population to retain the individuals with the best fitness from these subsets. Nevertheless, this strategy relies solely on fitness values for selection, which could lead to a rapid decline in population diversity. Moreover, it overlooks the individuals’ potential for exploration and exploitation, thereby affecting further optimization of individuals.

3. Selection strategy based on iteration phases and individual update status: Not only does this method consider the update status of individuals, enhancing the algorithm’s capability to escape local optima, but it also adjusts the focus on exploration and exploitation based on the iteration phase. However, it overlooks the individuals’ exploration and exploitation potential, hindering the enhancement of these capabilities in the population. Furthermore, the acceptance of inferior individuals is based solely on the quality of the trial individuals generated in the current iteration, and such randomness impedes the individuals’ rapid escape from local optima.

In this paper, a fitness-distance-based selection (FDS) strategy is proposed to address the above issues. In particular, individuals are categorized so that different selection strategies can be applied to them. Individuals with exploration potential utilize a selection strategy that considers the Euclidean distance between the target vector and trial vectors, while individuals with higher exploitation potential employ a fitness-based selection strategy. This approach ensures that each individual chooses the most appropriate selection strategy based on its own characteristics. Moreover, this paper for the first time introduces a novel approach, namely improving the scaling factor (F), to enhance the selection strategy’s ability to accept discarded trial individuals and escape local optima. Finally, this paper presents a new DE variant, called fitness-distance-based DE (FDDE). Extensive experiments were conducted on the CEC 2017 [37], CEC 2022 [38], and CEC 2011 [39] benchmark sets, and FDDE was compared with six state-of-the-art DE algorithms and four recent and competitive evolutionary algorithms. The experimental findings indicate that FDDE surpasses the other algorithms in terms of performance. The main contributions of this paper are as follows:

1. A new selection strategy, FDS, is proposed, which implements different selection methods for individuals with exploration potential and individuals with exploitation potential. This strategy leverages the concept that discarded trial vectors, although not immediately successful, still contain valuable information that can be harnessed to enhance the evolutionary process. The theoretical basis of the FDS strategy is rooted in the premise that these discarded vectors are not merely failures but a source of untapped potential that, if properly integrated, can provide critical insights into the solution space.

2. A novel scaling factor control method is introduced within the mutation strategy to assist the FDS strategy in detecting stagnation during local searches, consequently enhancing the algorithm’s performance. Moreover, the theoretical basis of this proposed scaling factor control method lies in its ability to adjust the search range of the mutation strategy. This adjustment optimizes the balance between exploration during population evolution and the detection of local optima, thereby significantly improving the algorithm’s effectiveness.

3. The FDS strategy and the new scaling factor control formula are applied to the DISH algorithm [40], and a novel DE variant, FDDE, is introduced, whose effectiveness is demonstrated through a series of benchmark tests.

The rest of this paper is structured as follows: Sect. 2 provides an introduction to the DE algorithm and its various variants. Section 3 introduces the motivation and specific working principles of the FDS strategy, as well as the new scaling factor F control formula, and proposes the FDDE algorithm based on this. Section 4 conducts comprehensive experiments using the proposed algorithm and conducts a detailed analysis of the experimental results, comparing them with six state-of-the-art DE algorithm variants and four competitive evolutionary algorithms. Lastly, Sect. 5 summarizes the paper and discusses future research directions.

2 Related work

In this section, we will review the relevant literature on the DE algorithm and provide detailed introductions to several DE variants that are related to this paper.

2.1 Preliminary knowledge

The DE algorithm is an efficient and versatile population-based optimization algorithm, renowned for its simplicity and robustness. It operates through mutation, crossover, and selection to iteratively improve a population of candidate solutions. In DE, new candidate solutions are generated by adding the weighted difference between several population individuals to a base member. The crossover strategy then combines elements from the mutant and parent vectors to create trial vectors, introducing diversity into the population, and the selection strategy compares trial vectors with their corresponding target vectors, retaining only those that provide a fitness improvement. The DE/current-to-pbest/1 mutation strategy proposed by Zhang and Sanderson [15] has been proved to be effective. Moreover, most DE algorithms still employ binomial crossover and greedy selection strategies, as illustrated in Eqs. (1), (2), and (3).

$$V_{i}^{g} = X_{i}^{g} + F \times \left( X_{pbest}^{g} - X_{i}^{g} \right) + F \times \left( X_{r1}^{g} - \overline{X_{ra}^{g}} \right)$$
(1)
$$U_{i,j}^{g} = \begin{cases} V_{i,j}^{g} & \text{if } \text{rand}(0,1) \le CR \text{ or } j = j_{\text{rand}} \\ X_{i,j}^{g} & \text{otherwise} \end{cases}$$
(2)
$$X_{i}^{g+1} = \begin{cases} X_{i}^{g} & \text{if } f(X_{i}^{g}) < f(U_{i}^{g}) \\ U_{i}^{g} & \text{otherwise} \end{cases}$$
(3)

where \(V_i^g\) is the mutation vector of the target vector \(X_i^g\) in the gth generation and \(U_i^g\) is the trial vector. \(X_{pbest}^g\) denotes an individual randomly chosen from the top \(p\%\) individuals of the gth generation. ra is a randomly generated integer from the set \(\{1,2, \dots , NP+|A|\}\), where A represents the archive of parent vectors that have failed in the selection strategy and \(\overline{X}\) is the union of the archive A and the population. The index \(r_1\) is a randomly generated integer from the set \(\{1,2, \dots , NP\}\), and the indices \(i\), \(r_1\), and ra are distinct from each other. rand(0,1) is a uniformly distributed random number within the interval [0,1], D represents the dimension of the problem, \(j_{rand}\) is a randomly generated integer within [1, D], and CR is a crossover rate that spans from 0 to 1. f(x) represents the value of the objective function for solution x.
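For readers who prefer code to notation, the following minimal NumPy sketch illustrates one generation of Eqs. (1)-(3) under a minimization assumption; the function name, the default value of p, and the handling of the archive are our own illustrative assumptions rather than the implementation used in this paper.

```python
import numpy as np

def de_generation(X, fit, archive, f, F, CR, p=0.11, rng=None):
    """One generation of DE/current-to-pbest/1 with binomial crossover and
    greedy selection (Eqs. (1)-(3)); a simplified sketch, not the paper's code."""
    rng = rng or np.random.default_rng()
    NP, D = X.shape
    top = np.argsort(fit)[:max(1, round(p * NP))]                   # top p% individuals
    union = np.vstack([X] + ([archive] if len(archive) else []))    # population ∪ archive A
    for i in range(NP):
        pbest = X[rng.choice(top)]
        r1 = rng.choice([k for k in range(NP) if k != i])
        ra = rng.choice([k for k in range(len(union)) if k not in (i, r1)])
        V = X[i] + F * (pbest - X[i]) + F * (X[r1] - union[ra])     # Eq. (1)
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        U = np.where(mask, V, X[i])                                  # Eq. (2)
        fu = f(U)
        if fu <= fit[i]:                                             # Eq. (3)
            X[i], fit[i] = U, fu                                     # the replaced parent would normally be added to A
    return X, fit
```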

Current improvements in the DE algorithm focus primarily on three areas: mutation strategy, parameter adjustments, and selection strategy.

In terms of mutation strategy, research predominantly follows two approaches: employing a single mutation strategy and combining multiple mutation strategies. For example, DE/current-to-pbest/1, proposed by Zhang and Sanderson, achieves a good balance between exploration and exploitation within the population. Other studies aim to merge mutation strategies with strong exploratory capabilities with those that excel in exploitation to enhance the overall evolutionary capacity of the population [41].

Regarding parameter improvements, these include adjustments to the scaling factor, crossover rate, and population size. Improvements to the scaling factor and crossover rate mainly follow two strategies: population-based adaptive parameter control and individual-based adaptive parameter control. For instance, Zhu [42] proposed a multipopulation DE algorithm with dynamic parameter settings for each population, while Brest and Greiner [43] introduced the jDE algorithm, which employs adaptive parameter settings to adjust the population parameters F and CR. Qin [44] introduced an adaptive parameter control DE algorithm, SaDE, which records the parameters of superior individuals in each generation during the iteration process and uses these parameters to generate offspring parameters. Meng [45] proposed a method using wavelet basis functions to generate the scaling factor F for each individual. Research on population size has shown that dynamically adjusting the size can significantly enhance the performance of the DE algorithm. For example, Tanabe and Fukunaga proposed the linear population size reduction (LPSR) strategy [46] based on the SHADE algorithm [47], which controls the evolutionary process by linearly reducing the population size; under this strategy, the individuals with the worst fitness values in the population are discarded. Based on this strategy, they introduced a new DE variant called LSHADE. To meet the needs of different evolutionary stages for population size, Zhang [48] proposed a DE algorithm with nonlinear population size reduction, AGDE-MPP. Zeng [49] proposed a method of periodically adding and removing individuals from the population, which improves the population’s ability to escape local optima and enhances convergence.

In terms of selection strategy improvements, recent studies have shown that selection strategies can significantly enhance the performance of the DE algorithm. Current research is mainly divided into three categories: (1) strategies based on the differences between trial individuals and original individuals, which increase population diversity; (2) strategies that merge all trial individuals and target individuals into a subset and select superior individuals within the subset to proceed to the next generation, thereby accelerating population convergence; and (3) selection strategies based on the state of individuals, accepting suboptimal solutions to help stagnant individuals escape local optima. These strategies attempt to fully utilize the information of discarded trial individuals to enhance the performance of the DE algorithm.

2.2 Success-history-based DE variants

2.2.1 Success-history-based DE

SHADE [47] is a further improvement based on JADE algorithm [15]. SHADE introduces a new historical memory, including \(M_{CR}\) and \(M_F\). This archive retains successful CR and F values from past evolutionary stages, and it influences the derivation of subsequent crossover rates and scaling factors by referencing this stored data. Tanabe and Fukunaga proposed a linear population size reduction (LPSR) method, which adjusts the population size based on the number of fitness evaluations as a criterion. The specific adjustment formula is as follows:

$$\begin{aligned} N_{G} = N^{init} + \left( \frac{N^{min} - N^{init}}{maxnfes} \right) \times fes \end{aligned}$$
(4)

where \(N_{G}\) represents the population size of the Gth generation, \(N^{init}\) represents the initial population size, \(N^{min}\) represents the minimum population size, fes represents the number of fitness evaluations, and maxnfes represents the maximum number of fitness evaluations. When \(N_{G+1} < N_G\), the \(N_G - N_{G+1}\) individuals with the worst fitness values are removed from the population. Based on the LPSR strategy and the SHADE algorithm, the LSHADE algorithm was proposed.
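A brief sketch of Eq. (4) and the removal of the worst individuals follows; the helper name and the assumption of a minimization problem are ours.

```python
import numpy as np

def lpsr_resize(X, fit, n_init, n_min, fes, maxnfes):
    """Linear population size reduction (Eq. (4)): shrink the population toward
    n_min and drop the individuals with the worst (largest) fitness values."""
    n_g = max(n_min, round(n_init + (n_min - n_init) / maxnfes * fes))
    if n_g < len(X):
        keep = np.argsort(fit)[:n_g]   # minimization: keep the n_g best individuals
        X, fit = X[keep], fit[keep]
    return X, fit
```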

Based on LSHADE, the techniques for regulating the scaling factor and crossover rate were further enhanced by Brest et al. [50], leading to the proposal of iLSHADE. At the same time, iLSHADE also calculates the p in Eq. (7), which is used to generate \(X_\text {pbest}^g\). The upper bound of p, \(p_{max}\), is set to 0.25, and the lower bound of p, \(p_{min}\), is set to half of \(p_{max}\). Moreover, based on iLSHADE, Brest [51] improved Eq. (1), introduced a new mutation strategy, DE/current-to-pbest-w/1, and proposed a new algorithm called jSO. The specific mutation strategy is shown in Eq. (5), and the calculation of \(F_w\) is shown in Eq. (6).

$$V_{i}^{g} = X_{i}^{g} + F_{w} \times \left( X_{pbest}^{g} - X_{i}^{g} \right) + F \times \left( X_{r1}^{g} - \overline{X_{ra}^{g}} \right)$$
(5)
$$F_{w} = \begin{cases} 0.7 \times F & \text{if } fes < 0.2 \times maxnfes \\ 0.8 \times F & \text{else if } fes < 0.4 \times maxnfes \\ 1.2 \times F & \text{otherwise} \end{cases}$$
(6)
$$p = p_{min} + \frac{fes}{maxnfes} \times (p_{max} - p_{min})$$
(7)
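The stage-dependent weight of Eq. (6) and the pbest fraction of Eq. (7) translate directly into code; a small sketch (the function names are illustrative):

```python
def jso_fw(F, fes, maxnfes):
    """Stage-dependent weight F_w for DE/current-to-pbest-w/1 (Eq. (6))."""
    if fes < 0.2 * maxnfes:
        return 0.7 * F
    if fes < 0.4 * maxnfes:
        return 0.8 * F
    return 1.2 * F

def jso_p(fes, maxnfes, p_max=0.25):
    """Linearly increasing pbest fraction (Eq. (7)); p_min is half of p_max."""
    p_min = p_max / 2
    return p_min + fes / maxnfes * (p_max - p_min)
```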

In order to enhance the exploration ability of the population in high-dimensional spaces, Viktorin [40] introduced distance-based parameter adaptation for SHADE and subsequently presented a DE variant known as DISH. The DISH algorithm improves the original weight updating formulas of the scaling factor and crossover rate. Instead of computing weights \(w_{i}\) from the fitness difference between the trial vector and the parent vector, the DISH algorithm calculates weights based on their Euclidean distance. The specific formula is depicted in Eq. (8). The generated weight is then applied to Eq. (9) and Eq. (10) to generate new scaling factors and crossover rates.

$$w_{i} = \frac{\sqrt{\sum_{j=1}^{D} (X_{i,j}^{g} - u_{i,j}^{g})^{2}}}{\sum_{k=1}^{|S_{CR}|} \sqrt{\sum_{j=1}^{D} (X_{k,j}^{g} - u_{k,j}^{g})^{2}}}$$
(8)
$$\begin{cases} M_{CR} = \text{mean}_{WL}(S_{CR}) \\ \text{mean}_{WL}(S_{CR}) = \dfrac{\sum_{k=1}^{|S_{CR}|} w_{k} \times S_{CR,k}^{2}}{\sum_{k=1}^{|S_{CR}|} w_{k} \times S_{CR,k}} \end{cases}$$
(9)
$$\begin{cases} M_{F} = \text{mean}_{WL}(S_{F}) \\ \text{mean}_{WL}(S_{F}) = \dfrac{\sum_{k=1}^{|S_{F}|} w_{k} \times S_{F,k}^{2}}{\sum_{k=1}^{|S_{F}|} w_{k} \times S_{F,k}} \end{cases}$$
(10)

In this context, \(mean_{WL}\) denotes the weighted Lehmer mean, and \(w_i\) represents the weight corresponding to individual i. Additionally, \(S_{CR}\) and \(S_F\) denote the sets containing the successful crossover rates and scaling factors, respectively.
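As a concrete illustration of Eqs. (8)-(10), the sketch below updates the two memories from the successful parent/trial pairs of one generation; it assumes a non-empty success set with nonzero distances, and the function name is ours rather than the authors' implementation.

```python
import numpy as np

def dish_memory_update(parents, trials, S_CR, S_F):
    """Distance-based weights (Eq. (8)) and weighted Lehmer means (Eqs. (9)-(10))
    used to refresh M_CR and M_F; parents/trials hold the successful pairs."""
    d = np.linalg.norm(np.asarray(parents) - np.asarray(trials), axis=1)
    w = d / d.sum()                                   # Eq. (8): Euclidean-distance weights
    S_CR, S_F = np.asarray(S_CR), np.asarray(S_F)
    m_cr = (w * S_CR**2).sum() / (w * S_CR).sum()     # Eq. (9)
    m_f = (w * S_F**2).sum() / (w * S_F).sum()        # Eq. (10)
    return m_cr, m_f
```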

2.2.2 ESDE

The ESDE algorithm introduces a novel selection strategy (ESS) aimed at assisting individuals in breaking free from local optima. By adopting specific strategies, the algorithm can accept discarded trial vectors in certain situations. However, blindly accepting suboptimal solutions in every situation is infeasible, as it may lead to individual stagnation. Therefore, the ESS strategy considers two key factors when accepting inferior solutions: 1) the update status of the individual [52,53,54] and 2) the evolutionary stage of the individual. The update status of the individual reflects the probability of the individual being trapped in a local optimum: the more generations an individual goes without updating, the higher the chance that it is stuck in a local optimum. The iteration stage is used to control when to accept inferior individuals. In the early stages of iteration, the algorithm considers accepting inferior trial vectors, while in the later stages, in order to accelerate convergence, the algorithm does not accept trial vectors worse than the target vector.

As discussed above, ESDE allows individuals to accept inferior solutions with a certain probability, which enhances their ability to escape local optima. As shown in Eq. (11), \(\alpha\) signifies the upper limit of accepting inferior individuals, \(\beta\) regulates the influence of an individual’s non-update count on \(Ac\_rate\), \(stop\_gen\) denotes the number of consecutive generations without target vector updates, \(\gamma\) governs the effect of the function evaluation count on \(Ac\_rate\), and maxnfes represents the predetermined maximum number of function evaluations.

$$Ac\_rate = \alpha \times \frac{1}{1+\exp(\beta - stop\_gen)} \times \frac{1}{1+\exp(fes - \gamma \times maxnfes)}$$
(11)
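Eq. (11) is the product of two logistic terms; the sketch below evaluates it in a numerically safe way (rewriting each term as a sigmoid is our own, mathematically equivalent choice), with default parameter values taken from the 10D/30D settings reported in Sect. 4.5.1.

```python
import math

def _sigmoid(x):
    """Numerically safe logistic function 1 / (1 + exp(-x))."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def acceptance_rate(stop_gen, fes, maxnfes, alpha=0.5, beta=48, gamma=0.4):
    """Probability of accepting a discarded trial vector (Eq. (11)).
    Note 1/(1+exp(beta - stop_gen)) == sigmoid(stop_gen - beta), and likewise
    for the evaluation-stage term, which avoids overflow for large fes."""
    return alpha * _sigmoid(stop_gen - beta) * _sigmoid(gamma * maxnfes - fes)
```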

3 Fitness-distance-based DE algorithm

3.1 Motivation

Greedy selection operators have been extensively studied and implemented in single-objective DE algorithms. Despite their widespread use, these strategies often lead individuals to become trapped in local optima. Studies on the ESDE algorithm reveal that it probabilistically accepts discarded trial vectors based on both the individual’s update status and the current stage of iteration, assisting individuals in escaping local optima. However, ESDE does not thoroughly assess how the acceptance of these discarded vectors influences the individuals’ ability to escape local optima. This oversight could lead to algorithmic instability or a reduction in exploration within promising regions, thereby diminishing the algorithm’s overall performance. Furthermore, since individuals exhibit varying levels of exploration and exploitation capabilities within the search space, tailored selection strategies should be applied when accepting inferior individuals.

In traditional DE algorithms and their variants, the setting of the scaling factor F is primarily aimed at facilitating a balance between exploration and exploitation for individuals. However, research on the impact of the scaling factor on individual local optimum detection remains very limited, and this lack of study could lead to inaccuracies in the determination of local optima. Given this research gap, this study aims to deeply analyze the nature of individual local optimum detection and, based on this, specifically set a reasonable scaling factor to effectively balance exploration and local optimum detection during the evolutionary process. Detailed analysis and methodology will be introduced in Sect. 3.2.

To address the above issues, a fitness-distance-based selection strategy and a new scaling factor control method are proposed. With this approach, we expect to select trial vectors based on their characteristics, so as to enhance the algorithm’s ability to escape local optima and reduce the impact of misjudgment on population evolution. Based on the above strategies, a new DE variant, FDDE, is proposed. We will provide a comprehensive overview of the FDDE algorithm in the subsequent sections.

3.2 Fitness-distance-based selection strategy

In our proposed FDDE, the algorithm sorts the population in ascending order of fitness. The top \(100 \times elite\_rate\%\) of individuals are considered exploitation layer individuals with higher exploitation potential, while the remaining individuals are considered exploration layer individuals with higher exploration potential. The algorithm first employs Eq. (11) to compute the probability that an individual accepts a discarded trial vector. When an individual is determined to accept a discarded vector, the algorithm employs the fitness-distance-based selection (FDS) strategy.

In the exploration layer, individuals with poorer fitness are more inclined to explore new areas. Once these individuals encounter local optima, it indicates that their surrounding area lacks potential for further exploration, making it crucial for them to quickly move away from these regions to seek new possibilities. As illustrated in Fig. 1, in such situations, traditional greedy selection strategies often fail to propel individuals forward in their evolutionary journey. Consider an individual that has generated trial vector points a and b in the last two generations; if we only consider the most recent trial vector, we might be inclined to choose vector b, which is more similar to the original individual. However, this similarity could lead to stagnation. In contrast, selecting the trial vector a, which bears less resemblance to the original individual, can more effectively help the individual escape from local optima and explore new domains. This approach could evolve vector a into vector c, achieving breakthroughs that vector b could not. Not only does this strategy aid individuals in escaping local optima, but it also fosters exploration of unknown areas, thereby enhancing the diversity of the population. Since the distance between individuals often reflects the similarity of their fitness landscapes, we propose that when individuals in the exploration layer need to accept inferior trial individuals, they should select the trial individual from the past K generations that is farthest from the original individual in terms of Euclidean distance.

In the exploitation layer, since its individuals already have better fitness, accepting overly inferior individuals will hardly help them find better solutions. Therefore, while ensuring that the algorithm does not accept overly inferior trial vectors, an individual accepts slightly worse individuals with a certain probability. However, it is difficult to define what constitutes an “overly inferior” individual. As shown in Fig. 2, after an individual falls into a local optimum, the individual in the stagnant stage may generate two trial vectors, X1 and X2. Due to the inferior fitness value of vector X1, the individual will choose to accept vector X2, which is more likely to be perturbed into individual X3, thus helping the individual escape the local optimum and find a better solution. It is worth noting that when misjudgment occurs, this method of selecting trial vectors based on fitness also ensures that individuals stay in better areas; as a result, it preserves the information of promising regions, thereby reducing the impact of misjudgment to some extent. By employing this approach, not only is the individual’s ability to escape local optima enhanced, but the exploitation capability of elite individuals within the population is also improved. In this study, K = 2 is adopted based on a series of experiments.

Fig. 1 Mechanism of FDS strategy in helping exploration layer individuals escape local optima

Fig. 2 Mechanism of FDS strategy in helping exploitation layer individuals escape local optima

In this paper, a fitness-distance-based DE algorithm is proposed. This algorithm divides the population into an exploration layer and an exploitation layer to enhance the ability of different individuals to break out of local optima. A larger exploitation layer is advantageous for the population to exploit promising regions, while a larger exploration layer is beneficial for the population to explore more areas. As the iterations progress, the population gradually transitions from exploration to exploitation. Therefore, we need to dynamically adjust the size of the layers. As mentioned previously, the value of \(elite\_rate\) influences the size of the layers. To better meet the population’s needs for exploration and exploitation at different stages, the formula for calculating \(elite\_rate\) is as follows:

$$\begin{aligned} elite\_rate = \frac{nfes}{\gamma \times maxnfes} \end{aligned}$$
(12)

In the above formula, nfes represents the current number of function evaluations, while maxnfes represents the maximum number of function evaluations. The setting of \(\gamma\) is consistent with the setting in Eq. (11).
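The layer partition induced by Eq. (12) can be written as follows; clamping the rate at 1.0 once nfes exceeds \(\gamma \times maxnfes\) is our assumption, since at that point the whole population is treated as the exploitation layer.

```python
import numpy as np

def split_layers(fit, nfes, maxnfes, gamma=0.4):
    """Partition the fitness-sorted population into exploitation and exploration
    layers using the elite_rate of Eq. (12)."""
    rate = min(1.0, nfes / (gamma * maxnfes))      # clamped at 1.0 (our assumption)
    order = np.argsort(fit)                         # ascending fitness sort
    n_elite = int(round(rate * len(fit)))
    return order[:n_elite], order[n_elite:]         # exploitation indices, exploration indices
```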

3.3 Improved parameter settings

Existing studies often overlook the crucial role of parameters in optimizing the performance of selection strategies. At the heart of the selection strategy is the accurate determination of individual stagnation, which relies on tracking changes in an individual’s fitness over a series of iterations. If an individual does not show an improvement in fitness within a certain number of iterations, it is considered to have stagnated. However, this method is highly dependent on the search area within the mutation strategy. If the individual’s mutation strategy fails to explore around the evolving individual or if the search step is too large, it is fundamentally impossible to effectively determine whether the individual is in a local optimum area. The DE/current-to-pbest/1 mutation strategy used in this paper can effectively explore the area around an individual, yet inappropriate search step sizes may also lead to inaccuracies in the detection of individual stagnation.

To minimize the chances of incorrectly identifying individuals ensnared in local optima, we introduce a new approach to controlling the scaling factor (F) that imposes restrictions on excessively large scaling factors, ensuring both the search capability of individuals and the stability of the mutation strategy around them. This restriction allows for an appropriate mutation step size and prevents excessive fluctuations or jumps. As a result, the accuracy of determining whether an individual is stagnant based on the number of stagnation instances is improved. Therefore, restricting excessively large scaling factors helps ensure the accuracy of stagnation detection, thereby enhancing the performance of the DE algorithm. The formula is shown in Eq. (13).

$$F = \begin{cases} \min(F, 0.6) & \text{if } nfes \le 0.6 \times maxnfes \\ F & \text{otherwise} \end{cases}$$
(13)
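Eq. (13) amounts to a one-line cap on the sampled scaling factor during the first 60% of the evaluation budget; a minimal sketch:

```python
def cap_scaling_factor(F, nfes, maxnfes):
    """Restrict overly large scaling factors early in the run (Eq. (13)) so that
    stagnation counts remain a reliable signal of local optima."""
    return min(F, 0.6) if nfes <= 0.6 * maxnfes else F
```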

3.4 Framework of FDDE

A new DE variant FDDE is proposed, which is based on FDS and the new scaling factor. The pseudocode of FDDE is shown in Algorithm 1.

From Algorithm 1, there are four differences between DISH and FDDE: (1) K and count values are initialized in line 3; (2) a new scaling factor formula is used in lines 22–24; (3) sorting of the population and calculation of \(elite\_rate\) are done in lines 28–29; and (4) the fitness-distance-based selection strategy is introduced in lines 35–41.

Algorithm 1 The pseudocode of the FDDE algorithm

4 Experiments and discussions

4.1 Experiment environment

In this section, to assess the FDDE algorithm’s performance, this study delves into the proposed FDS strategy and the new scaling factor setting, and further tests the effectiveness of the algorithm on CEC 2017, CEC 2022, and CEC 2011.

CEC 2017 consists of 30 benchmark functions that cover various types of optimization problems, including unimodal functions (F1-F3), simple multimodal functions (F4-F10), hybrid functions (F11-F20), and composition functions (F21-F30). Each test function includes four instances corresponding to 10D, 30D, 50D, and 100D, totaling 120 instances. Detailed information about CEC 2017 can be found in reference [37]. The optimal values for all test functions are predetermined, and the evaluation criterion is based on the disparity between the values achieved by each algorithm and the established optima. The mean and standard deviation of the function error values are denoted as Mean and Std, respectively. In the CEC 2017 benchmark set, we set the maximum number of function evaluations to 10,000 \(\times\) D, and each algorithm runs 51 times on each instance, thereby avoiding inaccuracies due to algorithmic randomness. For each algorithm, if the error value falls below \(10^{-8}\), it is adjusted to 0.

The CEC 2022 test suite includes 12 test functions, covering four types of functions: unimodal functions (F1), basic functions (F2-F5), hybrid functions (F6-F8), and composition functions (F9-F12). In this experiment, we set the dimension of the CEC 2022 benchmark test suite to 20, tested each algorithm on each function 30 times, and took the average. Detailed information about the CEC 2022 test suite can be found in reference [38].

From the CEC 2011 benchmark suite, seven problems are selected for testing to demonstrate the superiority of the FDDE algorithm in handling real-world problems. The maximum number of evaluations was set to 150,000, and for each instance, 25 runs were conducted for every algorithm. The specific settings are described in reference [39]. To ensure the validity of the performance comparisons, this study uses the Wilcoxon rank-sum test with a significance level of 0.05.

4.2 Analysis of algorithmic complexity

The complexity of the algorithm in this paper is determined according to the method in reference [37]. The specific steps are as follows: firstly, the test code in reference [37] is run to obtain the time \(T_{0}\); secondly, function 18 of the CEC 2017 test set is evaluated 200,000 times to obtain the time \(T_{1}\); and lastly, the test algorithm is run on function 18 with 200,000 evaluations, repeated five times, and the average is taken as time \(\bar{T}_{2}\). The time complexity is then evaluated by the formula \((\bar{T}_{2} - T_{1}) / T_{0}\).
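The timing protocol can be sketched as below; run_algorithm, f18, and x are hypothetical callables and inputs standing in for the tested algorithm and CEC 2017 function 18, and t0 is assumed to have been measured beforehand with the reference loop from [37].

```python
import time

def algorithm_complexity(run_algorithm, f18, x, t0, evals=200_000, repeats=5):
    """Complexity measure (T2_bar - T1) / T0 following the CEC 2017 protocol."""
    start = time.perf_counter()
    for _ in range(evals):                 # T1: 200,000 plain evaluations of F18
        f18(x)
    t1 = time.perf_counter() - start

    total = 0.0
    for _ in range(repeats):               # T2_bar: average wall time over five full runs
        start = time.perf_counter()
        run_algorithm(max_evals=evals)
        total += time.perf_counter() - start
    t2_bar = total / repeats

    return (t2_bar - t1) / t0
```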

Table 1 indicates that the FDDE algorithm’s complexity closely resembles that of ESDE and DISH. Notably, the complexity of the FDDE algorithm does not significantly increase with the dimension. All time complexity tests are executed on the following computer configuration: the algorithms are implemented and run in MATLAB 2021; the CPU is an AMD Ryzen 7 5800H with Radeon Graphics at 3.20 GHz; and the RAM is 16 GB.

Table 1 Complexity of FDDE

4.3 Ablation experiment

To verify the effectiveness of the FDS strategy and the new scaling factor control formula, we compared the following three methods: (1) the FDDE algorithm; (2) the FDDE algorithm without the new scaling factor control formula (FDDE-1); and (3) the FDDE algorithm without the FDS strategy (FDDE-2). Figures 3 and 4 show the performance differences between FDDE, FDDE-1, and FDDE-2. Figure 4 reveals that FDDE outperforms FDDE-2 across all scales (10D, 30D, 50D, and 100D), demonstrating that the FDS strategy can significantly enhance algorithm performance, especially in 30D, 50D, and 100D. Figure 3 further illustrates the improvement in algorithm performance brought by the new scaling factor. It can be observed that the new scaling factor brings significant performance improvement in all dimensions (10D, 30D, 50D, and 100D).

Fig. 3 Comparison between FDDE and FDDE-1

Fig. 4 Comparison between FDDE and FDDE-2

4.4 Parameter analysis

4.4.1 Sensitivity of K values

In this paper, we propose a selection strategy based on fitness distance, where the parameter K plays a significant role in enabling individuals to specifically accept inferior trial individuals to escape local optima. To better investigate the setting of K, we employ the Friedman test to rank the performance of the FDDE algorithm under the settings of K=1, K=2, and K=3. The FDDE variants corresponding to these settings are FDDE-K1, FDDE-K2, and FDDE-K3, as detailed in Table 2.

Table 2 Average rankings among FDDE algorithm variants according to the Friedman test

As shown in Table 2, when comparing the three settings of K values, the setting of K=2 demonstrated the best performance across all four dimensions, while K=1 showed the least desirable effect. This indicates that relying solely on trial individuals generated in the current generation of an individual might limit the capability to overcome local optima. In the comparison between K=3 and K=2, selecting individuals based on the furthest Euclidean distance or the optimal fitness distance did not fully leverage the potential of the FDS strategy. Nevertheless, the performance under K=3 still surpasses that of the K=1 setting.

4.4.2 Sensitivity of population size

Population size has a significant impact on the performance of the DE algorithm. Recognizing the popularity of various population settings, this study compares the FDDE algorithm described in this paper with two FDDE variants that employ widely used initial population sizes. These two variants are: FDDE-P2, which sets the population size to \(18 \times D\), and FDDE-P3, which determines the population size based on the formula \(25 \times \log (D) \times \sqrt{D}\). We use the Friedman test to rank these algorithms in order to assess the impact of different population size settings on performance.

As shown in Table 3, the parameters set in this paper achieved the best ranking in both 10D and 100D dimensions. Additionally, the FDDE algorithm demonstrated superior performance across these four dimensions, reflecting the advantages of this algorithm in terms of population size settings. Furthermore, FDDE-P3 achieved the best ranking in the 30D and 50D dimensions. Notably, on the 30D, 50D, and 100D dimensions, the FDDE algorithm and its variants were among the top three, showing that even with three different population size setting strategies, the FDDE algorithm consistently outperformed the DISH and ESDE algorithms in all scenarios. These results not only highlight the excellent performance of the FDDE algorithm but also demonstrate its flexibility and robustness in adjusting population size parameters. The performance of the FDDE algorithm further confirms its low sensitivity to different population sizes, allowing it to maintain stable high performance across a wide range of applications.

Table 3 Average rankings among FDDE algorithm variants according to the Friedman test

4.5 Comparison results

4.5.1 Comparison results with six recent DE variants on CEC 2017

In this section, we compare the performance of the FDDE algorithm with six state-of-the-art DE algorithms, including JADE [15], LSHADE [46], jSO [51], DISH [40], ESDE [36], and HIP-DE [55]. To faithfully represent the performance of the original algorithms, the parameter settings for each algorithm are kept the same as those in the corresponding literature. As shown in Table 4, for the JADE algorithm, the population size NP is set to 100 for 10D, 120 for 30D, 150 for 50D, and 200 for 100D. The \(\alpha\) in ESDE and FDDE is set to 0.5, \(\beta\) is set to 48 for 10D and 30D, and 32 for 50D and 100D, and \(\gamma\) is set to 0.4 for 10D and 30D, and 0.5 for 50D and 100D. The parameter settings in CEC 2011 are the same as those in the 30D setting.

Table 5 presents the performance of the FDDE algorithm in comparison with the other algorithms on the CEC 2017 test set. Statistical analysis was conducted on the results of each algorithm independently run 51 times on each function, based on the Wilcoxon rank-sum test at a significance level of 0.05. The detailed data, including the mean error values and standard deviations obtained by each algorithm, are provided in Tables 16, 17, 18, and 19. The symbols "+" (FDDE performs significantly better), "-" (FDDE performs significantly worse), and "=" (FDDE and the comparison algorithm show no significant difference) indicate whether the performance of the FDDE algorithm differs significantly from that of the other algorithms according to the Wilcoxon rank-sum test on the error values. The integer corresponding to each of "+", "-", and "=" represents the number of functions on which the FDDE algorithm is significantly better, significantly worse, or not significantly different from the compared algorithm on the test set.
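The "+"/"-"/"=" bookkeeping reduces to a rank-sum test per function; a short sketch using SciPy (the comparison on mean errors and the helper name are our own assumptions):

```python
import numpy as np
from scipy.stats import ranksums

def compare_errors(errors_fdde, errors_other, alpha=0.05):
    """Wilcoxon rank-sum test on the per-run error values of two algorithms;
    returns '+', '-', or '=' from FDDE's point of view (minimization)."""
    _, p_value = ranksums(errors_fdde, errors_other)
    if p_value >= alpha:
        return "="
    return "+" if np.mean(errors_fdde) < np.mean(errors_other) else "-"
```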

Table 4 Parameter settings
Table 5 Comparison of FDDE and six other DE variants
Table 6 Comparison of FDDE and six other DE variants under different function types for 10D
Table 7 Comparison of FDDE and six other DE variants under different function types for 30D
Table 8 Comparison of FDDE and six other DE variants under different function types for 50D
Table 9 Comparison of FDDE and six other DE variants under different function types for 100D
Table 10 Average rankings between FDDE and six other DE variants according to the Friedman test

Table 5 shows that FDDE is significantly better than the other advanced DE algorithms. The performance difference between FDDE and JADE is the largest: FDDE improved in 85 instances and significantly underperformed in only 9 instances when compared with JADE. When compared with LSHADE, 77 instances showed significant improvement, while only 11 instances showed significant underperformance. In comparison with jSO, FDDE achieved significant improvement in 68 instances, with only 7 instances of significant underperformance. We paid particular attention to the performance of FDDE against ESDE and DISH. When compared to DISH, FDDE showed significant improvement in 63 instances and significantly underperformed in 11 instances; the improvement rate is 52.5\(\%\) and the degradation rate is 9.2\(\%\). FDDE performed significantly better than ESDE in 50 instances, with only 5 instances of significant underperformance; the improvement rate is 41.7\(\%\) and the degradation rate is only 4.2\(\%\). It is evident that, compared to these two algorithms, FDDE has achieved substantial improvement with fewer instances of degradation. Of note, FDDE shows significant improvements over both ESDE and DISH, with very few functions on which it significantly underperformed. The HIP-DE algorithm is the closest competitor to FDDE: FDDE performed significantly better than HIP-DE in 68 instances, with 23 instances of significant underperformance.

The results in Tables 6, 7, 8, and 9 are derived from statistical analysis using the Wilcoxon rank-sum test at a significance level of 0.05. As shown in Tables 6, 7, 8, and 9, in comparison with the other algorithms across different dimensions and function types, we observe that our method has varying degrees of advantage in different types of functions. For 10D, specifically, our algorithm shows significant enhancements in multimodal functions with 22 instances of improvement and no instances showing a decline. Moreover, in unimodal functions, all methods in our experiments converge to the global optimum. For the other function types, our algorithm demonstrates performance similar to that of the other algorithms. For 30D, our algorithm achieves substantial improvements in multimodal functions, hybrid functions, and composition functions. In multimodal functions, 23 instances show significant improvements without any instances showing a decline. In hybrid functions, 49 instances exhibit significant enhancements, while 2 instances show significant deterioration. For composition functions, 33 instances demonstrate significant improvements, while 7 instances show significant deterioration. For 50D, our algorithm achieves significant improvements in multimodal functions and hybrid functions with 31 instances and 53 instances of improvement, respectively, without any instances showing significant deterioration. Moreover, in composition functions, our algorithm shows significant improvements in 42 instances and significant deteriorations in 12 instances. For 100D, our algorithm achieves significant improvements in multimodal functions and composition functions with 29 instances and 45 instances of significant enhancement, respectively, without any instances showing a significant decline. In hybrid functions, there are 49 instances of significant improvement and 2 instances of significant deterioration. In unimodal functions, 8 instances demonstrate significant improvements, while 4 instances show significant deteriorations. It is noted that our algorithm shows varying degrees of improvement over the ESDE and DISH algorithms in all dimensions and function types.

To delve deeper into the analysis of the FDDE algorithm’s performance, we analyzed the average error values of each algorithm using the Friedman test. The obtained average rankings are presented in Table 10, where smaller average ranks indicate better algorithm performance and the best ranking is highlighted in bold. Our algorithm achieved the best rankings in dimensions 30, 50, and 100. For 30D, ESDE ranked second and DISH ranked fourth. In particular, for 50D, the FDDE algorithm exhibited a significant lead over the second-ranked ESDE algorithm, with DISH ranking third. Furthermore, for 100D, our algorithm attained the lowest average rank among the four dimensions, with ESDE second and DISH third. For 10D, our algorithm ranked third and HIP-DE ranked first. In comparison with the DISH and ESDE algorithms, our algorithm demonstrated improved performance in all dimensions.

The convergence curves reflect the convergence characteristics of the algorithms. In this study, an analysis was conducted on selected functions in 30D, 50D, and 100D of CEC 2017. F10 was selected as the test function from the simple multimodal functions, F20 was chosen for the hybrid functions, and F26 was selected for the composition functions. The convergence curves are shown in Figs. 5, 6, and 7. It can be observed that the proposed algorithm significantly outperforms all the advanced DE algorithms. In comparison with DISH and ESDE, we observed that FDDE achieved a similar convergence rate in the early stages and was able to converge to a better value in the later stages. This is because when individuals with exploitation potential get trapped in local optima, the FDDE algorithm continues to explore the regions with exploitation potential by accepting filtered discarded trial individuals on the basis of their fitness values, which is advantageous for finding better solutions in the early stage. In the later stage, because the thorough exploration in the early stage has identified regions with exploitation potential, the FDDE algorithm still maintains a good convergence speed. Therefore, it can be observed that FDDE achieves a good balance between exploration and exploitation. In summary, the research found that on the 10D functions, the proposed algorithm does not differ significantly from the other algorithms, but on high-dimensional test functions such as 30D, 50D, and 100D, it performs excellently, outperforming the other six comparison algorithms. Upon further study, it was found that in multimodal functions, the FDDE algorithm achieved significant improvement in 10D, 30D, 50D, and 100D, with no instances of degradation. In hybrid functions and composition functions, the numbers of instances in which the FDDE algorithm improved and underperformed in 10D were fairly close, but in 30D, 50D, and 100D, significant improvement was achieved, significantly outperforming the six comparison algorithms.

Fig. 5 Convergence curves of the mean fitness on certain test functions in 30D

Fig. 6 Convergence curves of the mean fitness on certain test functions in 50D

Fig. 7 Convergence curves of the mean fitness on certain test functions in 100D

To demonstrate the algorithm’s robustness, applicability, and scalability, we utilized four distinct types of functions from the CEC 2017 benchmark for our analysis. The FDDE algorithm’s performance is showcased across a variety of problems using box plots. Specifically, we selected function F3 for unimodal challenges, F10 for simple multimodal problems, functions F13 and F17 for hybrid scenarios, and functions F22 and F26 for composition problems. These functions were chosen because they present significant optimization challenges, providing a rigorous test of the algorithm’s ability to manage uncertainty and sustain performance.

Box plots are statistical charts that illustrate data distribution effectively. They provide a five-number summary: the minimum, first quartile (Q1), median (Q2), third quartile (Q3), and maximum. The central box represents the interquartile range, with the median marked by a central line. The boundaries of the box, Q3 and Q1, delineate the upper and lower quartiles, respectively. In this study, outliers are indicated with a red "+" to emphasize deviations, ensuring clarity in data interpretation and analysis.
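For reference, a figure of this kind can be reproduced with a few lines of Matplotlib; the function name and labels below are illustrative, not the plotting script used for the paper.

```python
import matplotlib.pyplot as plt

def error_boxplot(errors_by_algorithm, labels, title):
    """Box plot of per-run error values for several algorithms on one function;
    outliers are flagged with a red '+' as in Figs. 8-11."""
    fig, ax = plt.subplots()
    ax.boxplot(errors_by_algorithm, labels=labels, sym="r+")  # 'r+' draws outliers as red plus signs
    ax.set_title(title)
    ax.set_ylabel("function error value")
    plt.show()
```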

As shown in Figs. 8, 9, 10, and 11, for unimodal functions, most algorithms stably converged to a satisfactory value across 10D, 30D, 50D, and 100D, except for JADE, which displayed numerous outliers and poorer convergence in 30D, 50D, and 100D. For simple multimodal problems, which feature many local optima, FDDE significantly outperformed other algorithms in terms of median values and demonstrated higher stability (smaller interquartile range). In hybrid functions, FDDE showed superior stability and convergence performance compared to others, notably in 100D where FDDE had more outliers but still outperformed other algorithms. The presence of numerous outliers suggests that while FDDE can navigate local optima in high-dimensional settings, its improvement still depends on the exploratory scope of the population, and escaping local optima does not guarantee finding better solutions. Composite functions combine multiple function types, and here, FDDE not only maintained high stability and convergence but also showed significantly better performance than other algorithms, which struggled with convergence. This highlights FDDE’s strong global search capability and effective convergence in scenarios with multiple local optima and extensive suboptimal regions. In summary, FDDE exhibits robust performance across various problem types, maintaining stability through multiple runs. This further attests to FDDE’s excellent applicability and scalability across a diverse range of function types and dimensions.

Fig. 8 Boxplot results for different function types in 10D

Fig. 9 Boxplot results for different function types in 30D

Fig. 10 Boxplot results for different function types in 50D

Fig. 11 Boxplot results for different function types in 100D

4.5.2 Comparison results with four famous evolutionary algorithms on CEC 2017

In this section, we compare the proposed FDDE algorithm with four of the latest advanced evolutionary algorithms: the PSO-sono [56], LEA [57], LPO [58], and WOA algorithms [59]. The parameter settings for these four algorithms are the same as those in the original texts, as detailed in Table 11. PSO-sono represents the latest variant of the particle swarm optimization algorithm, while LEA, LPO, and WOA are innovative meta-heuristic algorithms. For each algorithm, the mean and standard deviation of fitness error values are recorded after 51 independent runs on each function. The recorded data are then subjected to statistical analysis and comparison using the Wilcoxon rank-sum test with a significance level set at 0.05.

Table 11 Parameter settings of four advanced evolutionary algorithms

In this paper, the experimental results presented in Table 12 demonstrate a comprehensive performance enhancement of our proposed FDDE algorithm when compared with four advanced evolutionary algorithms: PSO-sono, LEA, WOA, and LPO. Specifically, in the 10D dimension, FDDE showed significant improvements in 22, 26, 30, and 21 test functions respectively against PSO-sono, LEA, WOA, and LPO, but significantly underperformed in 4, 4, and 1 test problems respectively. In higher dimensions of 30D, 50D, and 100D, FDDE’s performance was exceptionally strong, significantly outperforming all test functions compared to PSO-sono, LEA, and WOA. Compared to LPO, FDDE was significantly better in 27, 28, and 29 functions in these dimensions, with only 1, 0, and 0 functions where it underperformed respectively.

The analysis of the data indicates that the FDDE algorithm has made significant progress in all tested dimensions. Although the improvements at 10D were relatively modest, enhancements in the dimensions of 30D, 50D, and 100D were markedly greater. This phenomenon reflects the increased complexity of problem-solving as the number of dimensions increases, leading to more local optima. Thanks to the FDS strategy in the FDDE algorithm, which is particularly suited to selectively accepting discarded inferior solutions, the algorithm exhibits robustness against multiple local optima, making FDDE more effective at handling high-dimensional problems.

Table 12 Comparison of FDDE and other evolutionary algorithms

4.5.3 Comparison results on CEC 2022

To further explore the performance and robustness of the FDDE algorithm across different function types, we continued to test the algorithm on the CEC 2022 test suite. The functions in the CEC 2022 test suite are characterized by nonlinearity, multimodality, non-convexity, non-differentiability, and high complexity, which impose high demands on the algorithm’s performance. The parameter settings of the comparison algorithms are the same as those used for the CEC 2017 30D test set. As shown in Table 13, "+/-/=" represents significant improvement, significant underperformance, or no significant difference based on the Wilcoxon rank-sum test compared with the other algorithms. "Rank" indicates the average ranking obtained from the Friedman test.

Unimodal functions are used to assess the convergence capability of algorithms. In comparison with advanced algorithms such as ESDE, DISH, jSO, LSHADE, and HIP-DE, the FDDE algorithm proposed in this paper demonstrated outstanding performance, ranking first, while the JADE algorithm ranked last. This result clearly demonstrates FDDE’s capability for rapid and effective convergence.

For basic functions, which are characterized by their multimodality, there is a high demand for the algorithm’s exploration ability. The FDDE algorithm ranked second in this category. According to the Wilcoxon rank-sum test results, FDDE performed comparably to the ESDE algorithm. Compared to the DISH and jSO algorithms, FDDE was significantly better in 1 instance and showed no significant difference in 3 instances. Against the JADE and HIP-DE algorithms, FDDE was significantly better in two instances and showed no significant difference in two instances. These findings indicate that the FDDE algorithm can effectively explore extensive search spaces and accurately locate potential optimization areas.

Hybrid functions are created by combining various basic test functions to mimic the diversity and complexity of real-world problems and demand more from the adaptability and robustness of optimization algorithms. Based on Friedman test results, the FDDE algorithm ranked first, significantly outperforming other algorithms. According to the Wilcoxon rank-sum test, compared to the ESDE algorithm, FDDE performed significantly better in 2 instances but underperformed in one. When compared with DISH, jSO, and LSHADE, FDDE was significantly better in one instance and showed no significant difference in three. In comparison with JADE and HIP-DE, FDDE was significantly better in 3 instances. Overall, the FDDE algorithm maintained robust performance when dealing with hybrid functions, showing a stable improvement over other DE variants, with only a few instances of poor performance. This indicates that the fitness-distance selection strategy can reliably assist the algorithm in exploring the entire search space.

Composition functions combine multiple different test functions into a single optimization problem. Each basic function in the composite maintains its independence and impacts different regions of the entire search space. These functions often have multiple local optima. According to the results of the Friedman test, the FDDE and DISH algorithms ranked joint first. Based on the Wilcoxon rank-sum test results, compared to the ESDE, DISH, jSO, LSHADE, and HIP-DE algorithms, FDDE performed significantly better on one instance and showed no significant differences on 3 instances. Compared to the JADE algorithm, FDDE performed significantly better on two instances but was significantly worse on one, demonstrating that FDDE’s superior selection strategy and improved scaling factor parameters enhance the algorithm’s performance in handling local optima challenges.

In comprehensive comparisons across all function types, the FDDE algorithm ranked first and maintained significant performance improvements compared to six other excellent DE variants. Overall, the FDDE algorithm demonstrated outstanding exploration and exploitation capabilities, maintained stability in handling complex problems with multiple local optima, and showed high robustness across tests of various function types.

Table 13 Comparison of algorithm performance on different function types according to Wilcoxon rank-sum test and Friedman test rankings

4.5.4 Comparison results on CEC 2011

To validate the effectiveness of FDDE on practical problems, CEC 2011 was selected as the testing platform to evaluate FDDE and the six comparative DE variants. Seven bounded function problems were chosen. The specific problem descriptions and dimension settings are shown in Table 14. The parameter configurations for each algorithm are consistent with their settings for the 30-dimensional problems of CEC 2017.

Table 15 shows that FDDE exhibits significant performance differences compared to the JADE algorithm. In six of the seven problems, FDDE outperforms the JADE algorithm, with only one problem showing slightly inferior performance. When compared to the DISH and ESDE algorithms, FDDE shows similar performance, surpassing the DISH algorithm in two problems while being slightly inferior in one, and outperforming the ESDE algorithm in one problem. These results indicate the potential of the FDDE algorithm for solving real-world problems, with its overall performance being comparable to or better than that of advanced DE algorithms.

Table 14 Description of the real-world problems
Table 15 Comparison results for the real-world problems

5 Conclusion

In this paper, we proposed a novel fitness-distance-based selection (FDS) strategy and introduced a new scaling factor formula. Specifically, FDS selects and accepts appropriate discarded trial vectors based on individual exploration and exploitation potential under certain conditions. Furthermore, the adjusted scaling factor enhances the identification of individual stagnation in the FDS strategy, leading to improved algorithm performance. The experimental results demonstrate that FDS significantly enhances algorithm performance, and the new scaling factor further improves the algorithm’s effectiveness. When compared with six advanced DE algorithms, the proposed fitness-distance-based DE (FDDE) algorithm shows remarkable superiority on CEC 2017, CEC 2022, and CEC 2011, and FDDE shows great potential in real-world optimization problems. Furthermore, when FDDE is compared against four other advanced evolutionary algorithms on the CEC 2017 benchmark, the results consistently reveal its significant advantages over the competing algorithms.

While this study has contributed valuable insights into the application of differential evolution strategies, it also uncovers several limitations that point towards avenues for future research:

1. The study primarily uses fitness values to assess individual potential for exploration and exploitation. This straightforward approach does not consistently ensure precise identification, especially in complex scenarios where fitness landscapes are deceptive.

2. The feasibility of the proposed selection strategy in other evolutionary algorithms remains to be fully explored. The results of this study are promising, but examining the applicability of the strategy across different evolutionary frameworks could further validate its effectiveness.

3. The current method for detecting local optima requires further refinement to improve its accuracy. As the detection of local optima is crucial for the timely adjustment of search strategies in evolutionary algorithms, enhancing the precision of this detection mechanism could significantly impact the overall performance of the algorithm.