Abstract
A swarm intelligence-based optimization algorithm, named the Duck Swarm Algorithm (DSA), is proposed in this study. It is inspired by the food-source searching and foraging behaviors of the duck swarm. Two rules are modeled from the food finding and foraging of ducks, which correspond to the exploration and exploitation phases of the proposed DSA, respectively. The performance of the DSA is verified on multiple CEC benchmark functions, where its statistical results (best, mean, standard deviation, and average running time) are compared with those of seven well-known algorithms: Particle swarm optimization (PSO), Firefly algorithm (FA), Chicken swarm optimization (CSO), Grey wolf optimizer (GWO), Sine cosine algorithm (SCA), Marine predators algorithm (MPA), and Archimedes optimization algorithm (AOA). Moreover, the Wilcoxon rank-sum test, the Friedman test, and the convergence curves of the comparison results are used to demonstrate the superiority of the DSA over the other algorithms. The results show that DSA is a high-performance optimization method in terms of convergence speed and exploration–exploitation balance for solving numerical optimization problems. DSA is also applied to the optimal design of six engineering constrained optimization problems and to the node optimization deployment task of the Wireless Sensor Network (WSN). Overall, the comparison results reveal that the DSA is a promising and very competitive algorithm for solving different optimization problems.
1 Introduction
Optimization algorithms play a significant role in solving real-world optimization problems. These algorithms can be sorted into different categories under different names, such as evolutionary algorithms (EA) [1], nature-inspired algorithms (NIA) [2], meta-heuristic algorithms (MA) [3], and swarm intelligence (SI) algorithms [4]; however, some algorithms fall under more than one of these labels. A central challenge for any such algorithm is to locate the optimum in the search space with high convergence speed. Three typical and widely noted heuristic algorithms, the Genetic Algorithm (GA) [5], the Simulated Annealing (SA) algorithm [6], and the Particle Swarm Optimization (PSO) algorithm [7], have made great contributions and provided abundant reference for the algorithms proposed later.
The Genetic Algorithm, proposed in the 1970s [5, 8], combines evolution and natural selection, which are applied to its population over generations. The next population in the optimization process is constituted by the best chromosomes of the previous generation or by offspring generated through crossover and mutation. Crossover inherits part of the values of two parent chromosomes to produce one offspring, which drives exploitation. Mutation randomly changes some values in a chromosome and is responsible for exploration. Overall, these highly random operations help GA avoid falling into local optima, but slow convergence is its corresponding disadvantage.
Simulated Annealing [6], proposed in 1983, is one of the most well-known physics-based methods and is inspired by annealing in metallurgy. It starts searching for the global optimal solution at a high "temperature" and becomes more selective as the temperature decreases; that is, the probability of accepting a worse solution decreases. Thus, the initial temperature and the annealing (cooling) speed are the key factors that determine whether it can reach the optimum.
The Particle Swarm Optimization (PSO) algorithm, proposed in 1995 [7], is one of the most popular SI methods and is inspired by bird flocking behavior. The movement of each particle is affected by its position and velocity in the previous generation and by the surrounding particles. PSO searches with a clearer direction than GA and SA; its ease of implementation and small number of parameters are its outstanding advantages. However, it tends to converge prematurely to a local optimum on multi-modal functions, because each particle is guided by a static, finite set of predecessors and moves linearly. Variants of these three methods have been proposed, such as Quantum PSO [9], Adaptive PSO [10], and hybrid GA with SA [11].
During the last two decades, many meta-heuristic algorithms have been proposed after GA, SA, and PSO and applied to optimization problems. Some of the most well-known optimization techniques are Differential evolution (DE) [12], Harmony search (HS) [13], Ant colony optimization (ACO) [14], Firefly algorithm (FA) [15], Cuckoo search (CS) [16], Gravitational search algorithm (GSA) [17], and Grey wolf optimizer (GWO) [18]. To some extent, all of the algorithms mentioned above are inspired by something, such as the social behavior of animal groups (foraging, migration, courtship), natural evolution, or human social behavior; thus, we refer to all of them as inspiration algorithms in this paper. These optimization algorithms have succeeded in solving the optimization problems of the literature. However, according to the No Free Lunch (NFL) theorem [19], no inspiration algorithm is best for solving all optimization problems; that is, an inspiration algorithm may produce satisfying solutions on one set of problems but unsatisfying solutions on another. This motivates us to develop a novel swarm intelligence algorithm inspired by the duck swarm.
This study proposes a novel swarm intelligence algorithm, named the Duck Swarm Algorithm (DSA), for solving numerical optimization functions from the CEC benchmark functions, real-world engineering constrained optimization problems, and the WSN node optimization deployment task. The inspirations behind the proposed algorithm are the searching and foraging behaviors of the duck swarm. The main contributions are as follows:
- Inspired by the social behaviors of the duck swarm, summarized from observations in daily life, a novel group-based swarm intelligence algorithm, named the Duck Swarm Algorithm (DSA), is proposed.
- Two rules are modeled from the food finding and foraging of ducks, which correspond to the exploration and exploitation phases of the proposed DSA, respectively, with duck cooperation and competition considered in detail.
- The proposed DSA performs well on multiple CEC benchmark functions and outperforms the comparison algorithms on the engineering constrained optimization problems and the WSN node optimization deployment task.
The rest of this paper is organized as follows: Sect. 2 reviews the literature on nature-inspired metaheuristic algorithms. Section 3 introduces the proposed DSA in detail. Section 4 presents the design of the comparison experiments. Experiments and simulations of the DSA's performance are described, and the results are illustrated in separate graphical diagrams, in Sect. 5. Finally, the conclusion and future work are discussed in Sect. 6.
2 Literature review
According to their inspiration principle, meta-heuristic optimization algorithms can be simply sorted into four categories (see Fig. 1). Based on the inspiration source, the four main classes are: (i) evolution-based algorithms, (ii) swarm intelligence algorithms, (iii) physics-based algorithms, and (iv) human behavior-based algorithms. Despite their differences, all meta-heuristic optimization methods share the advantages of this family of techniques.
The first main division of meta-heuristics is evolution-based methods. Such evolutionary algorithms normally mimic evolutionary rules in nature; some of the most well-known techniques are Genetic algorithm (GA) [8], Genetic programming (GP) [20], Differential evolution (DE) [12], Evolutionary programming (EP) [21], Biogeography-based optimizer (BBO) [22], Gradient evolution algorithm (GEA) [23], and Tree-seed algorithm (TSA) [24].
The second main division of meta-heuristics is swarm-based approaches. These SI algorithms mimic the collective behaviors of animal swarms. Some of the most popular algorithms are Particle swarm optimization (PSO) [7], Ant colony optimization (ACO) [14], Firefly algorithm (FA) [15], Cuckoo search (CS) [25], Grey wolf optimizer (GWO) [18], Salp swarm algorithm (SSA) [26], and Marine predators algorithm (MPA) [27]. Besides, some well-known SI algorithms are not listed in Fig. 1, such as the Whale optimization algorithm (WOA) [28, 75], inspired by the foraging and hunting of whales in the ocean; Moth-flame optimization (MFO) [29], inspired by the navigation approach of moths; and the Butterfly optimization algorithm (BOA) [30], inspired by the foraging and mating behaviors of butterflies.
The third main division of meta-heuristics is physics-based methods. These optimization algorithms usually mimic physical principles. Some of the well-known methods are Simulated annealing (SA) [6], Gravitational search algorithm (GSA) [17], Water cycle algorithm (WCA) [31], Sine cosine algorithm (SCA) [32], the Henry gas solubility optimization (HGSO) algorithm [33], and the Archimedes optimization algorithm (AOA) [34]. It is worth mentioning that AOA was proposed in 2021 by Fatma et al. and is inspired by the phenomenon explained by Archimedes' principle. Also inspired by physical rules, the Equilibrium optimizer (EO) [35] and the Gradient-based optimizer (GBO) [36] were proposed for solving numerical optimization problems.
The fourth main division of meta-heuristics is human social behavior-based tools. Such optimization algorithms typically mimic rules of human social behavior. Some popular algorithms are Harmony search (HS) [13], the Imperialist competitive algorithm (ICA) [37], Teaching learning-based optimization (TLBO) [38], Socio evolution and learning optimization (SELO) [39], and the Political optimizer (PO) [40]. We classify the HS algorithm under social behavior because harmony is based on singing, which only humans do, and its principle includes a description of the propagation of musical sound. For a more detailed review of the different categories, readers can refer to the literature [18, 41,42,43].
Overall, various SI methods have been proposed recently. Most of these approaches are inspired by the foraging, mating, hunting, and searching behaviors of animals in nature. To the best of our knowledge, there is no SI method in the literature inspired by the social behaviors of the duck swarm. This is the main motivation for proposing a new SI method by modeling the social behavior of the duck swarm. Additionally, its abilities for solving numerical problems, real-world engineering optimization problems, and the WSN node optimization deployment task are investigated in the following.
3 Duck Swarm Algorithm
In this section, a novel SI optimization algorithm, named the Duck Swarm Algorithm (DSA), is proposed that imitates the food-source searching and foraging behaviors of the duck swarm. To understand this new algorithm, some biological facts and how they are modeled in the DSA are discussed in detail below.
3.1 Inspiration
In nature, formation characteristics are common among group animals, especially during migration and foraging (hunting). Among group mammals, there are also obvious hierarchical characteristics, for example in lions, wolves, and monkeys. The GWO algorithm was proposed based on the hierarchical system of grey wolves; it divides the hunting hierarchy of grey wolves into four levels. Moreover, many intelligent algorithms have been proposed for group animals belonging to birds, including the classic PSO algorithm, the CS algorithm, the Crow search algorithm (CSA) [44], the Chicken swarm optimization (CSO) algorithm [45], the Sparrow search algorithm (SSA) [46], and the Honey Badger Algorithm (HBA) [58].
Ducks are aquatic and terrestrial amphibious birds, which can be simply divided into three common types [47]: water ducks, diving ducks, and roosting ducks. The common duck (poultry) in our life belongs to the water ducks and the order Anseriformes, and it is generally considered a bird. Nature-inspired heuristic algorithms are derived from the observation of phenomena such as animals, plants, or other characteristics of nature. Their behaviors are then abstracted into mathematical models and designed as optimization methods for solving numerical optimization problems, engineering constrained optimization problems, and other real-world optimization problems.
It can be seen from observation that the queuing, food-source searching, and foraging behaviors of the duck swarm follow certain laws. Some pictures of duck swarm behaviors are provided in Fig. 2.
3.2 Mathematical model of DSA
This section presents the mathematical model of the proposed approach in detail. Three main processes of the DSA are discussed: (i) positions of the duck swarm after queuing (population initialization), (ii) searching for food sources (exploration phase), and (iii) foraging in groups (exploitation phase). Note that two rules are followed in the ducks' food-searching process. Rule one: when looking for food, ducks with strong search ability are located closer to the center of the food source and attract other individuals to move closer to them; the updated location is also affected by nearby individuals. Rule two: when foraging, all individuals approach the food; the next position is affected by neighboring individuals and by the food position or the leader duck.
3.2.1 Population initialization
Suppose the randomly generated initial positions in the D-dimensional search space are expressed as follows:
$$X_i = Lb + o \cdot (Ub - Lb)$$ (1)
where Xi denotes the spatial position of the i-th duck (i = 1, 2, 3, …, N) in the duck group, N is the population size, Lb and Ub represent the lower and upper bounds of the search space, respectively, and o is a random number matrix in (0, 1).
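This initialization can be sketched directly in NumPy; the function name and the `rng` argument below are illustrative, not from the paper:

```python
import numpy as np

def initialize_population(n_agents, dim, lb, ub, rng=None):
    """Uniform random initialization: X = Lb + o * (Ub - Lb), with o ~ U(0, 1)."""
    rng = np.random.default_rng(rng)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    o = rng.random((n_agents, dim))   # random matrix in (0, 1)
    return lb + o * (ub - lb)         # N x D position matrix
```

Scalar or per-dimension bounds both work thanks to the broadcast.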
3.2.2 Exploration phase
After the queuing behavior of the duck swarm, that is, once the ducks have arrived at a place with more food, each individual gradually disperses and starts searching for food. This process is defined as follows:
where sign(r − 0.5) influences the process of searching for food and can be set to either −1 or 1; \(\mu\) denotes the control parameter of the global search; P is the searching conversion probability of the exploration phase; and CF1 and CF2 denote the cooperation and competition coefficients between ducks in the search stage, respectively. \(X_{leader}^t\) represents the best duck position found so far at the t-th iteration, and \(X_j^t\) denotes the agents around \(X_i^t\) during the duck group's search for food at the t-th iteration. Moreover, the parameter \(\mu\) can be calculated as follows:
where K is calculated by:
In the exploration phase, Fig. 3a depicts how an agent updates its position relative to Xi, Xj, and Xleader in a 2-D search space. The value curves of the parameters K and \(\mu\) over 200 iterations are shown in Fig. 3b.
As shown in Fig. 3, the search range of the duck swarm is wider when \(\mu > 1\) in the exploration phase; this non-linear strategy is used to enhance the global search ability of the proposed DSA. Besides, this phase can also be integrated, with two parameters \({C}_{1}\) and \({C}_{2}\) added to the updating formula to balance the positions of the individuals in the exploration phase. The integrated formula is as follows:
where \(\mu\) denotes the control parameter of the global search. \({C}_{1}\) and \({C}_{2}\) are the position balance adjustment parameters: \({C}_{1}\) is a random number in (0, 1), while \({C_2}\) can be set either as a random number or as a constant such as 0 or 1. CF1 and CF2 denote the cooperation and competition coefficients between ducks in the search stage, respectively. \({X_{leader}^t}\) indicates the best duck position found so far at the t-th iteration, and \({{X_j}^t}\) represents the individuals around \(X_i^t\) during the duck group's search for food at the t-th iteration.
3.2.3 Exploitation phase
After searching, the duck swarm has found enough food to satisfy its foraging needs. This process is closely related to the fitness of each duck's position and is defined as follows:
where \(\mu\) denotes the control parameter of the global search in the exploitation phase; KF1 and KF2 denote the cooperation and competition coefficients between ducks in the exploitation phase, respectively; \(X_{leader}^t\) represents the best duck position found so far at the t-th iteration; and \(X_k^t\) and \(X_j^t\) denote the agents around \(X_i^t\) during the foraging of the duck group at the t-th iteration, where k ≠ j.
Note that the values of the parameters CF1, CF2, KF1, and KF2 all lie in (0, 2), and their calculation can be summarized as follows:
where FP is a constant set to 0.618, and rand is a random number in (0, 1).
In the exploitation phase, Fig. 4 depicts how a duck updates its position relative to Xi, Xj, Xk, and Xleader in a 2-D search space. Path 1 denotes the choice of ducks that cooperate. Path 2 represents the competition between Xi, Xk, and Xj at the t-th iteration. Path 3 indicates the choice of a duck that has failed to compete in the food search process.
3.3 Pseudo-code and complexity analysis of the DSA
3.3.1 Pseudo-code of the DSA
A detailed description of the algorithm's optimization process needs to be demonstrated; the theory behind the designed DSA is introduced in Sect. 3.2, and its pseudo-code is listed below.
Besides, the flowchart of the designed DSA is presented in Fig. 5. The Exploration (Stage 2) and Exploitation (Stage 3) phases are shown in detail and correspond to the pseudo-code of the proposed DSA.
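As a reading aid, the three-stage structure of the flowchart (initialization, exploration, exploitation, leader update) can be sketched in Python. The position updates below are deliberately simplified placeholders, not the DSA equations of Sect. 3.2; the \(\mu\) schedule, the coefficients, and the neighbour selection are all assumptions, so only the stage structure mirrors the algorithm:

```python
import numpy as np

def dsa_sketch(fitness, n_agents, dim, lb, ub, max_iter, p_search=0.5, seed=0):
    """Schematic DSA loop; the update rules are simplified stand-ins."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_agents, dim)) * (ub - lb)  # Stage 1: initialization
    fit = np.apply_along_axis(fitness, 1, X)
    leader = X[fit.argmin()].copy()                   # best duck found so far
    best = float(fit.min())
    for t in range(max_iter):
        mu = 2.0 * (1.0 - t / max_iter)               # placeholder control parameter
        for i in range(n_agents):
            j = rng.integers(n_agents)                # a neighbouring duck
            if t < max_iter // 2 and rng.random() < p_search:
                # Stage 2 (exploration): move relative to leader and a neighbour
                cand = (X[i] + mu * rng.random(dim) * (leader - X[i])
                        + rng.random(dim) * (X[j] - X[i]))
            else:
                # Stage 3 (exploitation): contract toward the leader
                cand = leader + mu * rng.standard_normal(dim) * np.abs(X[i] - leader)
            cand = np.clip(cand, lb, ub)
            f = fitness(cand)
            if f < fit[i]:                            # greedy selection
                X[i], fit[i] = cand, f
        if fit.min() < best:                          # update the leader duck
            best, leader = float(fit.min()), X[fit.argmin()].copy()
    return leader, best
```

A DSA implementation would replace the two `cand` lines with the update equations of Sect. 3.2 and the coefficient formulas of the CF/KF parameters.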
3.3.2 Complexity analysis
In this subsection, time and space complexity of the DSA are presented.
3.3.2.1 Time complexity
Assume that the population size is n, the search-space dimension of the problem is d, and the maximum number of iterations is T. The complexity of the DSA comprises: population initialization, O(nd); fitness evaluation, O(nd); the exploration- and exploitation-phase updates, O(T(n + n log n + n + n log n)) = O(Tn log n); and the parameter updates, O(T). Summing these parts, the total time complexity of the proposed DSA is
$$O(\mathrm{DSA}) = O(nd) + O(nd) + O(Tn\log n) + O(T) = O(nd + Tn\log n)$$
3.3.2.2 Space complexity
The space complexity of an algorithm is defined as the storage space it consumes; it is closely related to the population size (n) of the algorithm and the dimension (d) of the problem. The total space complexity of the proposed DSA is O(n·d). Thus, the space efficiency of the proposed method is effective and stable.
4 Comparative experiment and statistical test design
To verify the proposed algorithm's efficiency, DSA is compared with seven optimization algorithms: Particle swarm optimization (PSO, 1998) [48], Firefly algorithm (FA, 2008) [15], Chicken swarm optimization (CSO, 2014) [45], Grey wolf optimizer (GWO, 2014) [18], Sine cosine algorithm (SCA, 2016) [32], Marine predators algorithm (MPA, 2020) [27], and Archimedes optimization algorithm (AOA, 2021) [34]. The initial parameter values of the seven competing methods are listed in Table 1. These seven comparison algorithms, drawn from three of the categories above, are used to assess the efficiency of the DSA.
4.1 Comparative experiment
All the experiments are carried out on a Windows 10 system with an Intel(R) Core(TM) i5-10210U CPU @ 2.11 GHz and 8 GB RAM, using MATLAB 2018a. For statistical results such as the Mean and Standard deviation (Std), each comparison algorithm is run 30 independent times on each test function. The population size N is set to 30, and the maximum number of iterations of the comparison algorithms is set to 200. Additionally, the dimension of the unimodal and multimodal functions, which are selected from the CEC benchmark functions, is set to 30.
Eighteen functions [1, 17, 42, 49] from the IEEE CEC benchmark functions are used to assess the performance of the DSA in this study. The three groups of test functions are unimodal, multimodal, and fixed-dimension numerical optimization problems. These functions are listed in Table 2, including the search range, the dimension (Dim) of each function, and fmin, the theoretical optimum of the function. Additionally, the 2-D versions of each benchmark function are illustrated in Fig. 6, where F1–F7, F8–F14, and F15–F18 are the 2-D versions of the unimodal, multimodal, and fixed-dimension problems, respectively.
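For reference, four of these classical benchmarks (used again in the sensitivity study of Sect. 5.5) have standard textbook definitions; the Python names below are ours:

```python
import numpy as np

def sphere(x):            # Sphere (F1): unimodal, f_min = 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def schwefel_2_22(x):     # Schwefel 2.22 (F2): unimodal, f_min = 0 at x = 0
    x = np.abs(np.asarray(x, dtype=float))
    return float(np.sum(x) + np.prod(x))

def rastrigin(x):         # Rastrigin (F9): multimodal, f_min = 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):            # Ackley (F10): multimodal, f_min = 0 at x = 0
    x = np.asarray(x, dtype=float)
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```

All four reach their theoretical optimum fmin = 0 at the origin, which makes them convenient sanity checks for any optimizer.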
4.2 Statistical test
The DSA is assessed on eighteen benchmark functions and compared with seven optimization algorithms, with different test functions evaluating different aspects of its performance. The exploitation ability of the competing methods is assessed by the unimodal functions (F1–F7), because they have only one global optimum. The exploration ability of the competing algorithms is evaluated by the multimodal functions (F8–F14), because they have many local optima. The ability to balance exploration and exploitation while avoiding local optima is assessed by the fixed-dimension functions (F15–F18), as they also have many local optima. The statistical results include the Best, Mean, and Standard deviation (Std) of the optimal results over the 30 runs, and the average running time (Time/s) is also considered. They are calculated as follows:
(1) The best value (Best)

$${F_{best}} = \min ({F_1},{F_2}, \ldots ,{F_m})$$ (9)

where m indicates the number of optimization tests, and \({F_{best}}\) represents the optimum over the 30 independent runs.

(2) The mean value (Mean)

$${F_{mean}} = \frac{1}{m}\sum\limits_{i = 1}^m {F_i}$$ (10)

where \({F_i}\) indicates the optimum in each independent run, and \({F_{mean}}\) denotes the mean value over the 30 independent runs.

(3) The standard deviation (Std)

$${F_{std}} = \sqrt {\frac{1}{m}\sum\limits_{i = 1}^m {{{({F_i} - {F_{mean}})}^2}} }$$ (11)

(4) The average running time (Time/s)

$${T_{mean}} = \frac{1}{m}\sum\limits_{i = 1}^m {T_i}$$ (12)

where \({T_i}\) indicates the running time of each single optimization run.
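Equations (9)–(12) are simple aggregations over the m = 30 recorded runs; a small helper (the naming is ours) makes this concrete:

```python
import numpy as np

def summarize_runs(fitnesses, times):
    """Best, Mean, Std (Eqs. 9-11) over m runs, plus average time (Eq. 12)."""
    F = np.asarray(fitnesses, dtype=float)
    return {
        "Best": float(F.min()),
        "Mean": float(F.mean()),
        "Std": float(F.std()),      # population std with 1/m, as in Eq. (11)
        "Time/s": float(np.mean(times)),
    }
```

Note that Eq. (11) divides by m (not m − 1), which matches NumPy's default `ddof=0`.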
The statistical results (including Best, Mean, Standard deviation (Std), and average running time (Time/s)) are given in Tables 3, 4, and 5, with the best results highlighted in bold. Beyond these statistics, statistical tests are required to sufficiently assess the performance of the DSA according to Ref. [50]. Such tests (Wilcoxon's rank-sum [51] and the Friedman rank test [52]) are needed to establish that a new swarm intelligence algorithm offers a remarkable improvement over other well-known SI algorithms on a particular optimization problem. Wilcoxon's rank-sum (WRS) is a classical non-parametric statistical test and is performed here at the 5% significance level; generally, a p-value < 0.05 is considered strong evidence against the null hypothesis. Besides, the Friedman rank test is used to evaluate the superiority of the proposed DSA for solving optimization problems.
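Both tests are available in SciPy (`ranksums` and `friedmanchisquare`); the sketch below runs them on synthetic per-run fitness values, which stand in for the real experimental data:

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(1)
# Synthetic final-fitness values over 30 independent runs per algorithm
dsa_runs = rng.normal(0.10, 0.02, 30)
algo_b_runs = rng.normal(0.50, 0.05, 30)
algo_c_runs = rng.normal(0.30, 0.05, 30)

# Wilcoxon rank-sum test at the 5% significance level
stat, p = ranksums(dsa_runs, algo_b_runs)
h = int(p < 0.05)   # h = 1: significant difference between the two algorithms

# Friedman test across the three algorithms, paired by run index
chi2, p_friedman = friedmanchisquare(dsa_runs, algo_b_runs, algo_c_runs)
```

With clearly separated samples like these, the rank-sum p-value is far below 0.05, so the null hypothesis of equal distributions is rejected.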
4.3 Population diversity test
According to Refs. [53, 54], to distinguish the diversity of agents between the exploration and exploitation processes, the population diversity of a new SI algorithm should be analyzed visually. To analyze the population diversity of the proposed DSA, the diversity [55] is defined as follows:
where \(Div(t)\) indicates the population diversity at iteration t, t is the current iteration of the optimization process, N represents the population size, and D is the dimension of the problem. \(Di{v_j}\) is calculated [54] as follows:
where \(\mathop {X_j}\limits^-\) represents the mean of the current solutions on dimension j, \(Di{v_j}\) indicates the mean population diversity on dimension j, and \({X_{ij}}\) represents the current solutions. Thus, the exploration and exploitation percentage measurements of the search process are defined as follows:
where \(Di{v_t}\) indicates the population diversity at the t-th iteration, and \(Di{v_{\max }}\) denotes the maximum diversity observed over the whole optimization process.
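The display equations for this measure did not survive typesetting; the sketch below reconstructs them from the dimension-wise diversity measure of Ref. [55] (mean absolute deviation from the swarm mean, averaged over dimensions, with the percentages taken relative to the maximum diversity), so it is an assumption rather than a verbatim transcription:

```python
import numpy as np

def dimension_diversity(X):
    """Div_j: mean absolute deviation from the swarm mean on each dimension j."""
    return np.mean(np.abs(X - X.mean(axis=0)), axis=0)   # shape (D,)

def population_diversity(X):
    """Div(t): average of Div_j over all D dimensions, for an N x D population X."""
    return float(dimension_diversity(X).mean())

def exploration_exploitation(div_history):
    """XPL% and XPT% of each iteration, relative to the maximum diversity."""
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()
    xpl = 100.0 * div / div_max                 # exploration percentage
    xpt = 100.0 * np.abs(div - div_max) / div_max  # exploitation percentage
    return xpl, xpt
```

Recording `population_diversity(X)` once per iteration and feeding the history to `exploration_exploitation` reproduces percentage curves of the kind shown in Fig. 8.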
5 Results analysis and discussions
In this section, the experimental results of the comparison algorithms are presented in Tables 3, 4, and 5. Figure 7 shows the convergence curves of the competing algorithms for the different types of functions. Figures 8 and 9 present, respectively, the exploration and exploitation percentage curves of the population diversity over the 200-iteration optimization process and the boxplots of the comparison algorithms on the benchmark functions. Eventually, DSA produces effective results that verify its performance, as illustrated in this section.
5.1 Statistical results analysis
The statistical results are reported in Tables 3, 4, and 5, respectively. Table 3 shows that DSA displays an extremely good exploitation ability among the comparison algorithms, except on F5. According to Table 4, the DSA yields a strong exploration ability on the multimodal problems, excluding F12 and F13. For functions F9 and F10, AOA and DSA both obtain the best fitness value, but the Mean and Std of AOA are much worse than those of DSA. For F14, CSO, MPA, AOA, and DSA all obtain the best optimum. According to dimension, benchmark functions can be divided into three types: low-dimension (fewer than 10 dimensions), high-dimension (between 10 and 300 dimensions), and large-scale (more than 300 dimensions). In general, DSA demonstrates outstanding performance on test functions F1, F2, F3, F4, F6, F7, F8, F9, F10, F11, and F14, especially on F9 and F11, where the best fitness is obtained by the proposed DSA.
In addition, from the Best column of Table 5, DSA obtains the best fitness on functions F15, F16, F17, and F18, which illustrates the ability of the DSA to strike a balance between the exploration and exploitation phases on fixed-dimension problems. However, for the Mean and Std on the fixed-dimension functions, MPA has better stability on F15 and F16 than DSA, and the PSO algorithm has the best stability on F18 among the comparison algorithms. Thus, the stability of the DSA on fixed-dimension functions should be improved in further study.
Moreover, DSA is compared with the other seven algorithms in terms of running time on the eighteen benchmark functions. Each method is run independently 30 times on each test function, and the results are noted in Tables 3, 4, and 5, respectively. DSA takes less time than FA and MPA on the unimodal, multimodal, and fixed-dimension test functions. Compared with CSO, GWO, and AOA, the running time of the DSA is of the same order of magnitude, and the gap is small. Although the running time of the PSO algorithm is the shortest, with SCA following, their optimization accuracy is poor among the comparison algorithms. Generally, the proposed DSA still possesses effective superiority over the comparison methods in running time.
Two frequently used tests are applied to statistically evaluate the performance of the DSA in this paper. Tables 6 and 7 report the Friedman rank and Wilcoxon rank-sum test results. According to the Friedman test results listed in Table 6, the ranking of the eight comparison algorithms is DSA > MPA > AOA > GWO > CSO > PSO > FA > SCA. This shows that DSA produces satisfactory results and is also statistically superior to the comparison algorithms; as a robust algorithm, DSA can play a constructive role in future applications.
The WRS results of the pair-wise comparisons between DSA and the other methods are presented in Table 7 at the 0.05 significance level, where H = 1 indicates a significant difference between the two algorithms, H = 0 indicates no significant difference, and NaN means that the optimization values of the two algorithms are similar. According to the p-values in Table 7, on function F8, FA, GWO, and AOA are better than DSA. On functions F11 and F12, AOA is better than DSA according to the WRS test. On function F14, the optimum values of CSO, MPA, and AOA are similar to those of DSA. Additionally, PSO is better than DSA on functions F15 and F16 based on the WRS test results, and the WRS test results of FA, CSO, and AOA are also better than those of DSA with 200 iterations on F15 and F16.
Overall, the statistical results verify that there is a significant difference between the results obtained by the DSA and those of the comparison approaches in almost all cases. In particular, on benchmark functions F1–F4, F6, F7, F9, F10, F11, F14, F17, and F18, DSA has a significant advantage over almost all comparison methods. However, on functions F15 and F16, the performance of the DSA among the comparison algorithms is insufficient, which indicates that the DSA needs further improvement for solving fixed-dimension problems. This conclusion of the statistical tests is in line with the statistical results analyzed above.
5.2 Convergence curves analysis
Notably, to fully report the performance of the competing algorithms, the convergence curves on the eighteen test functions are shown in Fig. 7, where the y-axis and x-axis represent the fitness value and the iteration, respectively. The convergence curves show that the poor performance of SCA, PSO, and FA stems from the imbalance between their exploitation and exploration phases, and they clearly illustrate the advantage of the DSA in integrating the two phases into the search process. Except for DSA, MPA, and AOA, almost all comparison algorithms converge to a local optimum on the test functions. This also indicates that the DSA is faster than and superior to the other comparison algorithms on almost all the numerical optimization problems, except F5, F12, F13, and F17.
5.3 Diversity analysis
In this study, the impact of population diversity on the exploration and exploitation capabilities of the DSA is analyzed. The plots are discussed to evaluate the ability of the DSA to balance exploration and exploitation. Figure 8 shows the exploration and exploitation percentage curves of the population diversity in the search space while solving the test functions.
As shown in Fig. 8, DSA preserves a balance between the exploration and exploitation rates during the search process for all agents when solving the benchmark optimization functions. Notably, balancing the capabilities of the exploration and exploitation phases pushes duck i toward the global optimal solution by moving it to the best position found at a given time.
According to Fig. 8 and Table 5, although DSA can obtain the best fitness value, the Mean and Std of MPA on F15 and F16 are better than those of DSA. In other words, the exploration and exploitation ability of the DSA should be improved for fixed-dimension or low-dimensional optimization problems.
5.4 Boxplot analysis
The boxplots of the comparison algorithms on the test functions (F1–F16) are illustrated in Fig. 9. According to Tables 6 and 7 and the charts in Fig. 9, it can be ascertained that the DSA achieves the best results and the best convergence among the comparison algorithms on most of the functions. However, FA performs better than DSA on F12, and the PSO algorithm performs better than DSA on F13. Consequently, it can be inferred that the DSA outperforms the other comparison methods on the classical benchmark functions.
5.5 Sensitivity and scalability analysis
In this subsection, parameter sensitivity tests are used to assess the influence of the population size and the number of iterations on the proposed method. Four test functions (two unimodal and two multimodal) are selected: the Sphere, Schwefel 2.22, Rastrigin, and Ackley functions. The population size is set to 30, 50, 80, and 100; the number of iterations is set to 200, 500, 1000, and 2000; and the dimension of the test functions is fixed at 30.
The results of the sensitivity analysis (the mean fitness, Std, and convergence curves) for the above four functions are reported for two settings: (i) the number of ducks (N) and (ii) the maximum iterations (T). First, DSA is simulated for different numbers of ducks with the number of iterations fixed at 500. Table 8 shows the Mean and Std values of the DSA when applied to Sphere (F1), Schwefel 2.22 (F2), Rastrigin (F9), and Ackley (F10) with different numbers of ducks.
Figure 10 shows the convergence curves of DSA on the four functions for different numbers of ducks. Second, DSA is simulated for various numbers of iterations. Table 9 displays the Mean and Std values of DSA on the four test functions for different numbers of iterations, and Fig. 11 plots the corresponding convergence curves for Sphere (F1), Schwefel 2.22 (F2), Rastrigin (F9), and Ackley (F10).
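The test protocol above can be sketched as follows; a simple random search stands in for DSA (whose implementation is not reproduced here), so the numbers illustrate the procedure rather than the paper's results:

```python
import numpy as np

def sphere(x):
    """F1: Sphere function."""
    return float(np.sum(x ** 2))

def random_search(fun, dim, n_agents, iters, rng):
    """Stand-in optimizer; any black-box method (e.g. DSA) fits here."""
    best = np.inf
    for _ in range(iters):
        pop = rng.uniform(-100, 100, size=(n_agents, dim))
        best = min(best, min(fun(x) for x in pop))
    return best

def sensitivity_table(fun, dim=30, runs=10, seed=0):
    """Mean and Std of the final best fitness for each population size,
    with the iteration count fixed (500 in the paper; 200 here for speed)."""
    rng = np.random.default_rng(seed)
    table = {}
    for n in (30, 50, 80, 100):   # population sizes from the protocol
        finals = [random_search(fun, dim, n, 200, rng) for _ in range(runs)]
        table[n] = (np.mean(finals), np.std(finals))
    return table
```

The same loop, with the population fixed and the iteration budget swept over 200, 500, 1000, and 2000, produces the second half of the study.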
Table 8 shows that the best performance is obtained with different numbers of search agents. As can be seen from Fig. 10 and Table 8, as the population size increases, the Mean and Std become slightly worse, except on functions F9 and F10. The reason is that the search ability of DSA hardly changes once the population reaches 30; as Fig. 10 shows, DSA is only slightly sensitive to the number of search agents.
The results in Table 9 and Fig. 11 show that DSA converges closer to the optimum as the number of iterations increases, which confirms the influence of the iteration count on the robustness and convergence behavior of DSA. As the number of iterations increases, the Mean and Std improve, except on functions F9 and F10: more iterations mean more searches and higher accuracy, but the mean fitness and Std on F9 and F10 do not improve further as the iteration count grows.
According to the above analysis, once the approximate global optimum has been found, further growth in the population size and the number of iterations does not improve the result proportionally. In short, researchers can set a suitable population size and iteration count according to the specific task.
6 Applications of the proposed DSA
To confirm the performance of DSA on real-world engineering constrained optimization problems, two problems are presented first: the Three-bar truss problem (TBTP) [56] and the Sawmill operation problem (SOP) [57]. Four further engineering constrained optimization problems are presented in Subsect. 6.3, and Subsect. 6.4 gives the comparison results of DSA on the node optimization deployment of the WSN. All the considered problems have several inequality constraints that must be handled.
6.1 Three-bar truss problem
The schematic of the Three-bar truss structure is shown in Fig. 12. The volume of a statically loaded three-bar truss is to be minimized subject to stress (\(\sigma\)) constraints on each truss member. The objective is to find the optimal cross-sectional areas (A1, A2). The problem can be expressed as below (volume of a member = cross-sectional area × length):
where \(A_1, A_2 \in [0,1]\), \(l = 100\ \mathrm{cm}\), \(P = 2\ \mathrm{kN/cm^2}\), and \(\sigma = 2\ \mathrm{kN/cm^2}\).
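Using the common literature formulation of this problem (the paper's exact constraint scaling may differ slightly), the objective and a static-penalty fitness, of the kind typically handed to a metaheuristic such as DSA, can be sketched as:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0   # cm, kN/cm^2, kN/cm^2

def volume(a1, a2):
    """Truss volume: two diagonal members of length sqrt(2)*L plus one of length L."""
    return (2 * math.sqrt(2) * a1 + a2) * L

def constraints(a1, a2):
    """Stress constraints g_i <= 0 (common literature formulation)."""
    d = math.sqrt(2) * a1 ** 2 + 2 * a1 * a2
    g1 = (math.sqrt(2) * a1 + a2) / d * P - SIGMA
    g2 = a2 / d * P - SIGMA
    g3 = 1.0 / (a1 + math.sqrt(2) * a2) * P - SIGMA
    return g1, g2, g3

def penalized(a1, a2, rho=1e6):
    """Static-penalty fitness: feasible points keep their raw volume."""
    return volume(a1, a2) + rho * sum(max(0.0, g) ** 2 for g in constraints(a1, a2))
```

At the reported solution (x1 ≈ 0.7887, x2 ≈ 0.4082), the volume evaluates to about 263.8958 and the first stress constraint is active, which matches the best value in Table 11.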
The results of DSA on the TBTP are shown in Tables 10 and 11: Table 10 gives the statistical results of the comparison algorithms, and Table 11 presents the best solutions obtained by DSA and the other optimization algorithms. As shown, the optimal value of DSA on the TBTP is 263.8958434, obtained when x1, x2, g1, g2, and g3 are 0.788675136, 0.408248285, −0.232790818, −1.231270871, and −1.001519946, respectively. The convergence curves for this problem are shown in Fig. 13.
6.2 Sawmill operation problem
Assume a company owns two sawmills and two forests. For the duration of one project, each forest can produce up to 200 logs per day, the cost of transporting logs is estimated at $10/km/log, and at least 300 logs are required per day. Table 12 shows the capacity of each mill (logs/day) and the distances between the forests and the mills (km). The goal is to minimize the total daily cost of transporting logs while meeting the demand and mill-capacity constraints. The design problem is to determine how many logs to ship from Forest 1 or 2 to Mill A or B (x1, x2, x3, x4), as shown in Fig. 14.
The transportation cost can be defined as follows:
where \({x_1},{x_2},{x_3},{x_4} \in [0,200]\).
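Since the SOP is a linear transportation problem, its reference optimum can be obtained with an LP solver and used to check the metaheuristic results. The sketch below uses SciPy's `linprog`; the distances and mill capacities are hypothetical placeholders, because Table 12 is not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical distances (km) -- Table 12 is not reproduced here.
dist = np.array([[24.0, 20.5],    # Forest 1 -> Mill A, Mill B
                 [17.2, 18.0]])   # Forest 2 -> Mill A, Mill B
cost = 10.0 * dist.ravel()        # $10/km/log; x = [x1, x2, x3, x4]

A_ub = np.array([
    [-1, -1, -1, -1],   # demand: x1 + x2 + x3 + x4 >= 300 logs/day
    [ 1,  1,  0,  0],   # Forest 1 supply <= 200 logs/day
    [ 0,  0,  1,  1],   # Forest 2 supply <= 200 logs/day
    [ 1,  0,  1,  0],   # Mill A capacity (assumed 240 logs/day)
    [ 0,  1,  0,  1],   # Mill B capacity (assumed 300 logs/day)
])
b_ub = np.array([-300.0, 200.0, 200.0, 240.0, 300.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 200.0)] * 4)
```

With the placeholder data the solver ships the cheapest routes first, exactly 300 logs in total; with the paper's actual Table 12 data the same setup reproduces the 37200.0053 reference cost that DSA approaches.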
The results of DSA on the SOP are shown in Tables 13 and 14: Table 13 lists the statistical results of the comparison algorithms, and Table 14 presents the best solutions obtained by the comparison methods. According to Table 14, the optimal value of DSA on the SOP is 37200.0053, obtained when x1, x2, x3, x4, g1, g2, g3, g4, and g5 are 1.20E−11, 1.63E−06, 100.0001, 199.9999, −214.9374, −281.9187, −171.8508, −196.9062, and 256.8561, respectively; this yields the minimum total cost. It can be concluded from Table 13 that the results obtained by DSA are better than those of PSO, FA, CSO, GWO, SCA, and AOA, but not MPA. The convergence behavior of DSA and the other algorithms is illustrated in Fig. 15.
6.3 Other engineering constrained problems
To further analyze the generalization performance of DSA, four additional engineering constrained optimization problems are compared: Tension spring design (TSD) [59, 66], Welded beam design (WBD) [18], Pressure vessel design (PVD) [59, 66], and Speed reducer design (SRD) [60]. The parameters, dimensions, and other information of these problems are given in Table 15. The best results of DSA are compared with existing advanced intelligent algorithms; the detailed comparison results are given in Tables 16, 17, 18, and 19.
The results of the proposed DSA on the TSD are listed in Table 16, which presents the best solutions obtained by DSA and the other optimization algorithms. The optimal value of DSA on the TSD is 0.012668, obtained when x1, x2, and x3 are 0.052068, 0.365900, and 10.770262, respectively. It can be concluded from Table 16 that the results obtained by DSA are better than those of the comparison algorithms, except HGSO [33] and MPA [27].
The results of the proposed DSA on the WBD are listed in Table 17, which shows the best solutions obtained by the comparison optimization algorithms. The optimal value of DSA on the WBD is 1.725555, obtained when x1, x2, x3, and x4 are 0.205731, 3.475599, 9.036601, and 0.205731, respectively. It can be concluded from Table 17 that the results obtained by DSA are better than those of the comparison algorithms, except AVOA [64].
The results of the proposed DSA on the PVD are shown in Table 18, which presents the best solutions obtained by DSA and the other optimization algorithms. The optimal value of DSA on the PVD is 5885.374386, obtained when x1, x2, x3, and x4 are 0.778189, 0.384659, 40.320642, and 199.985755, respectively. It can be concluded from Table 18 that the results obtained by DSA are better than those of all the comparison algorithms.
The results of the proposed DSA on the SRD are listed in Table 19, which presents the best solutions obtained by DSA and the other optimization algorithms. The optimal value of DSA on the SRD is 2996.403492, obtained when x1–x7 are 3.500006, 0.700000, 17.000000, 7.300490, 7.800000, 3.350216, and 5.286759, respectively. The comparison shows that DSA is better than the optimization algorithms listed in Table 19.
6.4 Nodes optimization deployment of the WSN
Node optimization deployment [67,68,69] is a key point of the WSN for the development of the intelligent Internet of Things (IoT) and smart cities, with applications in smart agriculture, fisheries, animal husbandry, the military, etc. Node deployment also plays an important role in environmental monitoring, where it is used to collect real-time, accurate monitoring data. The node optimization deployment (NOD) problem [70] can be regarded as a constrained task. Suppose the length and width of the deployment area, projected onto a two-dimensional plane, are L and W meters (m). The problem is defined as:
where \(Cov\) denotes the node coverage percentage, and \(p({V_i},{U_j})\) is the probability that the j-th monitoring point \({U_j}\) is covered by the i-th sensor node \({V_i}\). \({u_1},{u_2}\) denote the x-axis and y-axis coordinates of the monitoring point, respectively. \(d({V_i},{U_j})\) denotes the Euclidean distance from the j-th monitoring point \({U_j}\) to the i-th sensor node \({V_i}\). \({R_s}\) is the node sensing radius, which satisfies \(2{R_s} \leqslant {R_c}\), where \({R_c}\) denotes the node communication radius. S is the number of sensor nodes in the monitoring area, and M is the theoretical number of deployment nodes. Notably, the binary perception model is used to calculate the coverage probability \(p({V_i},{U_j})\) between the j-th monitoring point \({U_j}\) and the i-th sensor node \({V_i}\), which is defined as
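The binary perception model and the grid-based coverage ratio above can be sketched as follows (the grid resolution and default area size are illustrative choices, not specified by the paper):

```python
import numpy as np

def coverage_percentage(nodes, Rs, L=100, W=100, grid=1.0):
    """Binary perception model: a monitoring point U_j is covered (p = 1)
    if its Euclidean distance to some node V_i is <= Rs, else p = 0.

    nodes: (S, 2) array of sensor coordinates in the L x W area.
    """
    xs = np.arange(0, L + grid, grid)
    ys = np.arange(0, W + grid, grid)
    u = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)  # monitoring points
    d = np.linalg.norm(u[:, None, :] - nodes[None, :, :], axis=-1)  # (M, S) distances
    covered = (d <= Rs).any(axis=1)   # covered by at least one node
    return 100.0 * covered.mean()     # Cov as a percentage
```

This function is the fitness that each algorithm maximizes in the NOD experiments: the decision variables are the 2S node coordinates, and the objective is the covered fraction of the monitoring grid.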
In addition to the visual results obtained by optimizing node coverage, the communication between nodes must also be considered. The Delaunay triangulation [71] is applied to analyze node connectivity, because communication between nodes (other than the sink nodes) is generally based on a shortest-distance strategy. Figure 16 shows the optimized node coverage of the proposed DSA for different sensing radii. PSO, GWO, SCA, MPA, and HBA are also used to verify the performance of DSA on the NOD problem. The parameter settings of the comparison algorithms are shown in Table 1; the β and C of HBA are set to 6 and 2, respectively. The simulation parameters of the node deployment area are listed in Table 20, and the comparison results of the different algorithms, including node coverage percentage (%), optimization time (s), and mean values, are presented in Table 21.
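A sketch of the connectivity check, assuming SciPy's Delaunay triangulation is used (the paper does not give implementation details): the triangulation's edges are the shortest-distance neighbour links, and each edge is then tested against the communication radius.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_links(nodes, Rc):
    """Edges of the Delaunay triangulation over the node positions,
    and the subset whose length is within the communication radius Rc."""
    tri = Delaunay(nodes)
    edges = set()
    for simplex in tri.simplices:            # each simplex is a triangle (i, j, k)
        for a in range(3):
            i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
            edges.add((i, j))                # undirected, deduplicated
    within_rc = [(i, j) for i, j in edges
                 if np.linalg.norm(nodes[i] - nodes[j]) <= Rc]
    return edges, within_rc
```

A uniform triangulation with all (or nearly all) edges within \({R_c}\), as in the DSA-optimized layouts of Fig. 16, indicates good connectivity and short hops, hence lower communication power consumption.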
In the 100 m × 100 m deployment area with 20 nodes, different node sensing radii have a significant impact on the optimized coverage: the larger the sensing radius, the larger the sensed area and the more data collected. The node coverage percentages optimized by DSA are 88.68%, 92.82%, and 97.17%, and the times required to optimize the node coordinates are 5.15 s, 5.34 s, and 4.98 s, respectively. As Fig. 16 shows, under random initial deployment (Fig. 16a, c, and e) the nodes cluster together and the triangulation between them is uneven, indicating poor regional coverage and connectivity. The deployment and Delaunay triangulation after DSA optimization are shown in Fig. 16b, d, and f: the optimized nodes have higher coverage and a more uniform triangulation, indicating smoother communication between nodes and thus lower power consumption. In summary, for a fixed number of nodes within the deployment area, homogeneous sensors with a larger sensing radius not only achieve better coverage but also improve connectivity between network nodes, reduce communication power consumption, and extend the network lifespan.
With 20 nodes and a node sensing radius of 15 m, Fig. 17 visualizes the node deployment optimized by the different algorithms; the population is set to 30 and the number of iterations to 100. As shown in Table 21, the coverage rates of PSO, GWO, SCA, MPA, HBA, and DSA are 93.51%, 91.30%, 89.74%, 93.66%, 94.82%, and 97.17%, and the corresponding optimization times are 2.32 s, 5.35 s, 1.69 s, 3.89 s, 2.14 s, and 4.98 s, respectively. From Fig. 17, the layouts optimized by PSO and GWO show a small amount of node overlap, while the layout optimized by SCA shows large overlaps and a large coverage-hole area, resulting in node redundancy. The node coverage optimized by DSA is more uniform than that of MPA and HBA.
From Table 21, for the different node sensing radii, the coverage optimized by DSA is superior to that of the other comparison algorithms, although its node-position optimization takes a relatively long time. Compared with PSO, the DSA node coverage for sensing radii of 13 m, 14 m, and 15 m increased by 7.38%, 3.71%, and 3.66%; compared with GWO, by 6.82%, 6.46%, and 5.87%; compared with SCA, by 8.31%, 5.01%, and 7.43%; compared with MPA, by 4.49%, 3.38%, and 3.51%; and compared with HBA, by 3.27%, 2.21%, and 2.35%, respectively. In terms of mean coverage rate, PSO, GWO, SCA, MPA, HBA, and DSA achieve 87.97%, 86.51%, 85.97%, 89.10%, 90.28%, and 92.89%, respectively. Although SCA takes the least optimization time, its optimized node coverage is the lowest. The optimization time of DSA is only 0.12 s less than that of GWO, yet DSA gives the best optimization performance among the comparison algorithms. Therefore, DSA has good application prospects for WSN node coverage optimization, although there is still room to improve its performance.
Figure 18 shows the node coverage optimization curves of PSO, GWO, SCA, MPA, HBA, and DSA with 20 deployed nodes, 100 iterations, and sensing radii of 13 m, 14 m, and 15 m, respectively. For each sensing radius, DSA reaches the optimal coverage percentage within 60 iterations and is already clearly ahead of the other algorithms at around 40 iterations. MPA shows a clear upward trend at approximately 70 iterations. When \({R}_{s}\) = 13 m, the curves of HBA, PSO, and SCA are similar before 85 iterations, after which SCA falls into a local optimum. When \({R}_{s}\) = 14 m, the coverage optimized by SCA stops improving at around 40 iterations, indicating that SCA has fallen into a local optimum. When \({R}_{s}\) = 15 m, the coverage of HBA improves significantly after 85 iterations and surpasses that of MPA. In summary, although DSA effectively improves network coverage, reduces coverage blind spots, and lowers coverage cost, its optimization time is longer than that of SCA, indicating that the overall performance of DSA on the WSN node coverage problem can still be improved.
7 Conclusion and future works
A novel SI optimization algorithm, named DSA and inspired by the duck swarm, is proposed in this study. Two rules are modeled from the food finding and foraging behavior of ducks, corresponding to the exploration and exploitation phases of the designed DSA, in which duck cooperation and competition are considered in detail. Eighteen test functions from the CEC benchmarks are used to evaluate the performance of the proposed algorithm in terms of exploration, exploitation, local optima avoidance, convergence, and diversity. The results show that DSA provides highly competitive results compared with well-known optimization methods such as PSO, FA, CSO, GWO, and SCA, and with recent algorithms such as MPA and AOA. The superior exploitation, exploration, and local-optima-avoidance abilities of DSA are confirmed by the results on the different types of test functions. Moreover, the convergence analysis and population diversity analysis confirm the convergence of the algorithm, and sensitivity analysis is used to assess its performance. Furthermore, DSA performs well on real-world engineering constrained optimization problems and on the node optimization deployment task of the WSN.
For future work, we will study binary and multi-objective [72,73,74] versions of the DSA for the node computing and energy consumption optimization tasks of the IoT. The designed DSA can also be used to optimize the hyper-parameters of deep learning models for image classification [76, 77]. Furthermore, the proposed algorithm, combined with quantum theory [78], can be applied to feature selection and to the node localization problem of the WSN.
Data availability
Enquiries about data availability should be directed to the authors.
References
Zhou, X., Cai, X., Zhang, H., Zhang, Z., Jin, T., Chen, H., Deng, W.: Multi-strategy competitive-cooperative co-evolutionary algorithm and its application. Inf. Sci. 635, 328–344 (2023)
Hu, G., Guo, Y., Wei, G., Abualigah, L.: Genghis Khan shark optimizer: a novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 58, 102210 (2023)
Hu, Y., Huang, T., Yu, Y., An, Y., Cheng, M., Zhou, W., Xian, W.: An energy-aware service placement strategy using hybrid meta-heuristic algorithm in IoT environments. Clust. Comput. 26(5), 2913–2919 (2023)
Alrefai, N., Ibrahim, O.: Optimized feature selection method using particle swarm intelligence with ensemble learning for cancer classification based on microarray datasets. Neural Comput. Appl. 34(16), 13513–13528 (2022)
Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–72 (1992)
Kirkpatrick, S., Gelatt, J.C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: the IEEE International Conference on Neural Networks, Perth pp. 1942–1948 (1995)
Sastry, K., Goldberg, D., Kendall, G.: Genetic algorithms. In: Search Methodologies. Springer, Boston (2005)
Xi, M., Sun, J., Xu, W.: An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Appl. Math. Comput. 205(2), 751–759 (2008)
Zhan, Z.H., Zhang, J., Li, Y., Chung, H.S.H.: Adaptive particle swarm optimization. IEEE Trans. Syst. Man Cybern. B 39(6), 1362–1381 (2009)
Chen, P., Shahandashti, S.M.: Hybrid of genetic algorithm and simulated annealing for multiple project scheduling with multiple resource constraints. Autom. Constr. 18(4), 434–443 (2009)
Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11(4), 341–359 (1997)
Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. SIMULATION 76(2), 60–68 (2001)
Dorigo, M., Di Car, G.: Ant colony optimization: a new meta-heuristic. In: IEEE Congress on Evolutionary Computation, IEEE, pp. 1470–1477 (2002)
Yang, X.S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press, Bristol (2008)
Yang, X. S, Deb, S.: Cuckoo search via Lévy flights. In: World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 210–214, (2009)
Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
Koza, J.R.: Genetic Programming, on the Programming of Computers by Means of Natural Selection and Genetics. MIT Press, Cambridge (1992)
Yao, X., Liu, Y., Lin, G.: Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3(2), 82–102 (1999)
Simon, D.: Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008)
Kuo, R.J., Zulvia, F.E.: The gradient evolution algorithm: a new metaheuristic. Inf. Sci. 316, 246–265 (2015)
Kiran, M.S.: TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 42(19), 6686–6698 (2015)
Gandomi, A.H., Yang, X.S., Alavi, A.H.: Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. Comput. 29(1), 17–35 (2013)
Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mirjalili, S.M.: Salp Swarm Algorithm: a bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017)
Faramarzi, A., Heidarinejad, M., Mirjalili, S., Gandomi, A.H.: Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020)
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
Mirjalili, S.: Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 89, 228–249 (2015)
Arora, S., Singh, S.: Butterfly optimization algorithm: a novel approach for global optimization. Soft. Comput. 23(3), 715–734 (2019)
Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110, 151–166 (2012)
Mirjalili, S.: SCA: a sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133 (2016)
Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., Mirjalili, S.: Henry gas solubility optimization: a novel physics-based algorithm. Futur. Gener. Comput. Syst. 101, 646–667 (2019)
Hashim, F.A., Hussain, K., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W.: Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl. Intell. 51(3), 1531–1551 (2021)
Faramarzi, A., Heidarinejad, M., Stephens, B., Mirjalili, S.: Equilibrium optimizer: a novel optimization algorithm. Knowl.-Based Syst. 191, 105190 (2020)
Ahmadianfar, I., Bozorg-Haddad, O., Chu, X.: Gradient-based optimizer: a new metaheuristic optimization algorithm. Inf. Sci. 540, 131–159 (2020)
Atashpaz-Gargari, E., Lucas, C.: Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation, IEEE, pp. 4661–4667, (2007)
Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)
Kumar, M., Kulkarni, A.J., Satapathy, S.C.: Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology. Futur. Gener. Comput. Syst. 81, 252–272 (2018)
Askari, Q., Younas, I., Saeed, M.: Political optimizer: a novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 195, 105709 (2020)
Abdollahzadeh, B., Soleimanian, G. F, Mirjalili, S.: Artificial gorilla troops optimizer: a new nature‐inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems pp. 1–72 (2021)
Abdel-Basset, M., El-Shahat, D., Jameel, M., Abouhawwash, M.: Young’s double-slit experiment optimizer: a novel metaheuristic optimization algorithm for global and constraint optimization problems. Comput. Methods Appl. Mech. Eng. 403, 115652 (2023)
Gharehchopogh, F.S.: An improved Harris Hawks optimization algorithm with multi-strategy for community detection in social network. J. Bionic Eng. 20(3), 1175–1197 (2023)
Askarzadeh, A.: A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput. Struct. 169, 1–12 (2016)
Meng, X., Liu, Y., Gao, X., Zhang, H.: A new bio-inspired algorithm: chicken swarm optimization: advances in swarm intelligence. ICSI 2014. Lecture Notes in Computer Science: Springer, pp. 86–94, (2014)
Xue, J., Shen, B.: A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control Eng. 8(1), 22–34 (2020)
Bohling, M.: Severe michigan winter could leave some diving ducks stranded on land. Michigan State University Extension. (2014)
Shi, Y., Eberhart, R.: A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), IEEE, pp 69–73, (1998)
Zhang, M., Long, D., Qin, T., Yang, J.: A chaotic hybrid butterfly optimization algorithm with particle swarm optimization for high-dimensional optimization problems. Symmetry 12(11), 18 (2020)
Derrac, J., García, S., Molina, D., Herrera, F.: A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 1(1), 3–18 (2011)
Wilcoxon, F.: Individual comparisons by ranking methods. Biom. Bull. 1(6), 80–83 (1945)
Meddis, R.: Unified analysis of variance by ranks. Br. J. Math. Stat. Psychol. 33(1), 84–98 (1980)
Shi, Y.: An optimization algorithm based on brainstorming process. Int. J. Swarm Intell. Res. 2(4), 35–62 (2011)
Cheng, S., Shi, Y., Qin, Q., Zhang, Q., Bai, R.: Population diversity maintenance in brain storm optimization algorithm. J. Artif. Intell. Soft Comput. Res. 4(2), 83–97 (2014)
Hussain, K., Salleh, M.N.M., Cheng, S., Shi, Y.: On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 31(11), 7665–7683 (2019)
Ray, T., Saini, P.: Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng. Optim. 3(33), 735–748 (2001)
Arora, J.S.: Introduction to Optimum Design. Elsevier Press, Amsterdam (2017)
Hashim, F.A., Houssein, E.H., Hussain, K., Mabrouk, M.S., Al-Atabany, W.: Honey badger algorithm: new metaheuristic algorithm for solving optimization problems. Math. Comput. Simul 192, 84–110 (2022)
Deb, K.: Optimal design of a welded beam via genetic algorithms. AIAA J. 29, 2013–2015 (1991)
Kumar, A., Wu, G., Ali, M.Z., Mallipeddi, R., Suganthan, P.N., Swagatam, D.: A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 56, 100693 (2020)
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H.: Harris hawks optimization: algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019)
He, Q., Wang, L.: An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20, 89–99 (2007)
Mezura-Montes, E., Coello, C.A., Velázquez-Reyes, J., Muñoz-Dávila, L.: Multiple trial vectors in differential evolution for engineering design. Eng. Optim. 39(5), 567–589 (2007)
Abdollahzadeh, B., Gharehchopogh, F.S., Mirjalili, S.: African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 158, 107408 (2021)
Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M., Gandomi, A.H.: The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021)
Chen, L., Feng, C., Ma, Y.: Improved Harris Hawks optimization for global optimization and engineering design. Clust. Comput. 24, 1–25 (2023)
Zhang, M., Wang, D., Yang, M., Tan, W., Yang, J.: HPSBA: a modified hybrid framework with convergence analysis for solving wireless sensor network coverage optimization problem. Axioms 11(12), 675 (2022)
Dao, T.K., Nguyen, T.D., Nguyen, V.T.: An improved honey badger algorithm for coverage optimization in wireless sensor network. J. Internet Technol. 24(2), 363–377 (2023)
Jin, Z., Jiang, J., Kong, Z., Pan, C., Ruan, X.: A novel coverage optimization scheme based on enhanced marine predator algorithm for urban sensing systems. IEEE Sens. J. Early Access (2023). https://doi.org/10.1109/JSEN.2023.3287582
Xia, F., Yang, M., Zhang, M., Zhang, J.: Joint light-sensitive balanced butterfly optimizer for solving the NLO and NCO problems of WSN for environmental monitoring. Biomimetics 8(5), 393 (2023)
Fortune, S.: Voronoi diagrams and Delaunay triangulations. In: Handbook of Discrete and Computational Geometry, pp. 705–721 (2017)
Zhang, J., Piao, M.J., Zhang, D.G., Zhang, T., Dong, W.M.: An approach of multi-objective computing task offloading scheduling based NSGS for IOV in 5G. Clust. Comput. 25(6), 4203–4219 (2022)
Bacanin, N., Antonijevic, M., Bezdan, T., Zivkovic, M., Venkatachalam, K., Malebary, S.: Energy efficient offloading mechanism using particle swarm optimization in 5G enabled edge nodes. Clust. Comput. 26(1), 587–598 (2023)
Gharehchopogh, F.S., Abdollahzadeh, B., Barshandeh, S., Arasteh, B.: A multi-objective mutation-based dynamic Harris Hawks optimization for botnet detection in IoT. Internet Things 24, 100952 (2023)
Shen, Y., Zhang, C., Gharehchopogh, F.S., Mirjalili, S.: An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 215, 119269 (2023)
Zhang, M., Wen, G., Zhong, J., Chen, D., Wang, C., Huang, X., Zhang, S.: MLP-like model with convolution complex transformation for auxiliary diagnosis through medical images. IEEE J. Biomed. Health Inform. 27(9), 4385–4396 (2023)
Özbay, E., Özbay, F.A., Gharehchopogh, F.S.: Peripheral blood smear images classification for acute lymphoblastic leukemia diagnosis with an improved convolutional neural network. J. Bionic Eng. 4, 1–17 (2023)
Jain, R., Sharma, N.: A quantum inspired hybrid SSA–GWO algorithm for SLA based task scheduling to improve QoS parameter in cloud computing. Clust. Comput. 26, 3587–3610 (2023)
Funding
The authors have not disclosed any funding.
Author information
Contributions
MZ: Conceptualization, Methodology, Writing—original draft, Writing—review and editing. GW: Supervision, Writing—review and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhang, M., Wen, G. Duck swarm algorithm: theory, numerical optimization, and applications. Cluster Comput 27, 6441–6469 (2024). https://doi.org/10.1007/s10586-024-04293-x