
1 Introduction

Optimization methods can now solve many real-world problems in fields such as civil engineering, construction, electromechanical systems, control, finance, and health management, with significant results [1,2,3,4,5]. Whether applied to image recognition, feature extraction, machine learning, or deep-learning model training, optimization algorithms can be used for tuning [6,7,8]. They spare researchers much of the time traditionally spent building expert systems for tuning and adjustment, greatly reducing the effort required for exploration. As this exploration advances, the complexity of the problems addressed gradually increases.

In the past three decades, meta-heuristic algorithms that simulate the behavior of nature have received considerable attention, for example, Particle Swarm Optimization (PSO) [9], Differential Evolution (DE) [10], the Crow Search Algorithm (CSA) [11], Grey Wolf Optimization (GWO) [12], the Coyote Optimization Algorithm (COA) [13], the Whale Optimization Algorithm (WOA) [14], the Honey Badger Algorithm (HBA) [15], and the Red Fox Optimization algorithm (RFO) [16]. These nature-inspired meta-heuristics are highly efficient on optimization problems, and their performance and ease of application have steadily improved, leading to their adoption in a wide range of applications.

GWO simulates the hierarchy in the predation process of wolves in nature, dividing grey wolves into four levels. Through domination and leadership between the levels, the grey wolves are driven toward the best solution. This hierarchy helps GWO avoid local optima, and a convergence factor lets the algorithm take longer steps for a global search in the early stage and gradually shift toward a local search as time passes. Arora et al. [17] hybridized GWO and CSA, using the flight control parameter of CSA and a modified linear control parameter to balance exploration and exploitation, and applied the hybrid to function optimization problems.

The idea of COA comes from the coyotes living in North America. Unlike most meta-heuristic optimizers, which focus on the relationship between predators and prey, COA focuses on the social structure and experience exchange of coyotes, giving it a distinctive algorithmic structure. Compared with GWO, although the alpha (best solution) still guides the search, COA does not impose the ruling roles of the beta (second-best) and delta (third-best) wolves, and it balances global and local search during optimization. Li et al. [18] modified COA's differential mutation strategy, designing a differential dynamic mutation disturbance strategy and an adaptive differential scaling factor to counter COA's tendency toward premature convergence to local optima; applied to fuzzy multi-level image thresholding, the method achieved better image segmentation quality.

RFO is a recently proposed meta-heuristic algorithm that imitates the life and hunting methods of red foxes: foxes travel through the forest to find prey, and then approach and strike. These two behaviors correspond to global search and local search, respectively. The hunting relationship between foxes and hunters helps RFO maintain steady convergence during the search.

In this paper, a convergence factor is combined with COA to achieve better exploration, and two natural mechanisms are added, the risk of coyotes being killed by hunters and the birth of young coyotes, to strengthen local exploitation.

The remainder of this paper is organized as follows: Sect. 2 briefly reviews COA, Sect. 3 describes the proposed method, Sect. 4 presents the experimental results of the proposed algorithm and other algorithms on the test functions, and Sect. 5 draws the conclusions.

2 Standard Coyote Optimization Algorithm

The Coyote Optimization Algorithm (COA) was proposed by Pierezan et al. [13] in 2018 and has been widely used in many fields due to its unique algorithmic structure [19,20,21]. In COA, the coyote population is divided into \({N}_{p}\) packs, each containing \({N}_{i}\) coyotes, so the total number of coyotes is \({N}_{p} \times {N}_{i}\); at the start of COA, every pack has the same size. Each coyote represents a solution of the optimization problem and is updated or eliminated during the iterations.

COA models the social condition of a coyote as the decision variable \(\overrightarrow{x}\) of the global optimization problem. The social condition soc (set of decision variables) of the ith coyote of the pth pack at iteration t is therefore written as:

$${soc}_{i}^{p,t} = \overrightarrow{x} = \left({x}_{1,}{x}_{2,}\dots ,{x}_{d}\right)$$
(1)

where d is the search-space dimension of the optimization problem. The coyote packs are initialized first: each coyote is generated at random within the search space, and the jth dimension of the ith coyote in pack p is expressed as:

$$soc_{(i,j)}^{(p,t)} = LB_{j} + r_{j} \times (UB_{j} - LB_{j} ),\quad j = 1,\,2, \ldots d$$
(2)

where \({LB}_{j}\) and \({UB}_{j}\) represent the lower and upper bounds of the jth dimension of the search space, and \({r}_{j}\) is a random number in the range [0, 1]. The coyote's adaptation to its current social condition is evaluated as shown in (3):

$${fit}_{i}^{p,t} =f\left({soc}_{i}^{p,t}\right)$$
(3)
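As an illustration, the initialization and evaluation of Eqs. (2)-(3) can be sketched in NumPy as follows (the function `init_packs` and its argument names are ours, not part of the original COA code):

```python
import numpy as np

def init_packs(n_p, n_i, d, lb, ub, f, rng=None):
    """Randomly place n_p * n_i coyotes in [LB, UB] and evaluate them (Eqs. 2-3)."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    # soc[p, i, j] = LB_j + r_j * (UB_j - LB_j)   (Eq. 2)
    soc = lb + rng.random((n_p, n_i, d)) * (ub - lb)
    # fit[p, i] = f(soc[p, i])                    (Eq. 3)
    fit = np.apply_along_axis(f, 2, soc)
    return soc, fit

# example: 6 packs of 5 coyotes on a 3-dimensional sphere function
soc, fit = init_packs(6, 5, 3, [-10] * 3, [10] * 3,
                      lambda x: float(np.sum(x ** 2)), rng=0)
```

With the COA settings used later in the paper (\({N}_{p}=6\), \({N}_{i}=5\)), this yields a \(6 \times 5 \times d\) population array and a \(6 \times 5\) fitness array.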

In nature, the size of a coyote pack does not remain constant; an individual coyote sometimes leaves or is expelled from its pack, becoming solitary or joining another pack. COA defines the probability \({P}_{e}\) that an individual coyote leaves its pack as:

$${P}_{e } =0.005 \times {N}_{i}^{2}$$
(4)

When a random number is less than \({P}_{e}\), a coyote leaves one pack and enters another. COA limits the number of coyotes per pack to 14. In addition, COA adopts a best-individual (alpha) guidance mechanism:
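This pack-exchange rule can be sketched as follows, under our own representation of packs as lists of coyote indices (the helper name `maybe_swap` is hypothetical):

```python
import numpy as np

def maybe_swap(packs, n_i, rng=None):
    """With probability P_e = 0.005 * N_i**2 (Eq. 4), move one random
    coyote from one random pack to another; packs are capped at 14."""
    rng = np.random.default_rng(rng)
    p_e = 0.005 * n_i ** 2
    if len(packs) > 1 and rng.random() < p_e:
        src, dst = rng.choice(len(packs), size=2, replace=False)
        # keep the source non-empty and respect the cap of 14 per pack
        if len(packs[src]) > 1 and len(packs[dst]) < 14:
            idx = int(rng.integers(len(packs[src])))
            packs[dst].append(packs[src].pop(idx))
    return packs
```

For \({N}_{i}=5\), \({P}_{e}=0.125\), so an exchange happens roughly once every eight iterations; the total population is always conserved.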

$$alpha^{p,t} = \left\{ {soc_i^{p,t} \,|\,arg_{i = 1,2, \ldots ,N_i} \,{\text{min}}\,{fit}\left( {soc_i^{p,t} } \right)} \right\}$$
(5)

To allow coyotes to communicate with each other, the cultural tendency of the pack is defined as the aggregation of all coyotes' social information:

$$cult_{j}^{p,t} = \left\{ {\begin{array}{*{20}l} {\quad O_{{\frac{{N_{i} + 1}}{2},j}}^{p,t} ,\quad \quad \quad N_{i} \;is\;odd} \hfill \\ {\frac{{O_{{\frac{{N_{i} }}{2},j}}^{p,t} + O_{{\frac{{N_{i} }}{2} + 1,j}}^{p,t} }}{2},\;\;otherwise} \hfill \\ \end{array} } \right.$$
(6)

The cultural tendency of the pack is thus defined as the per-dimension median of the social conditions of all coyotes in that pack, where \({O}_{k,j}^{p,t}\) is the kth-ranked value in the jth dimension among the individuals of pack p at the tth iteration.
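Since Eq. (6) is simply the per-dimension median, it can be computed directly, for example with NumPy (a sketch; the function name is ours):

```python
import numpy as np

def cultural_tendency(soc):
    """Cultural tendency of one pack (Eq. 6): the per-dimension median
    of the pack's social conditions; soc has shape (N_i, d).
    np.median covers both the odd and the even N_i cases of Eq. 6."""
    return np.median(soc, axis=0)
```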

The birth of a pup is a combination of two parents (coyotes selected at random) and environmental influence:

$$pup_{j}^{p,t} = \left\{ {\begin{array}{*{20}l} {soc_{{r_{1} ,j}}^{p,t} ,\quad \quad rand_{j} < P_{s} \,or\,j = j_{1} } \hfill \\ {soc_{{r_{2} ,j}}^{p,t} ,\;\;rand_{j} \ge P_{s} + P_{a} \,or\,j = j_{2} } \hfill \\ {\quad \quad \quad R_{j} ,\;\;otherwise} \hfill \\ \end{array} } \right.$$
(7)

Here, \({r}_{1}\) and \({r}_{2}\) are two randomly selected coyotes of the pack, and \({j}_{1}\) and \({j}_{2}\) are two random dimensions of the search space. A newborn coyote can therefore inherit genes randomly selected from either parent or receive a new social condition generated at random; \({P}_{s}\) and \({P}_{a}\), both determined by the search-space dimension, are the scatter probability and the association probability, respectively, as shown in (8) and (9). \({R}_{j}\) is a random value within the bounds of the jth dimension, and \({rand}_{j}\) is a random number between 0 and 1.

$${P}_{s}=1/d$$
(8)
$${P}_{a}= \left(1-{P}_{s} \right)/2$$
(9)
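A sketch of the pup-generation rule of Eqs. (7)-(9) (function and variable names are ours; `soc` holds one pack of shape \(({N}_{i}, d)\)):

```python
import numpy as np

def breed_pup(soc, lb, ub, rng=None):
    """Pup birth (Eqs. 7-9): crossover of two random parents plus
    random genes; soc is one pack of shape (N_i, d)."""
    rng = np.random.default_rng(rng)
    n_i, d = soc.shape
    p_s = 1.0 / d                # scatter probability (Eq. 8)
    p_a = (1.0 - p_s) / 2.0      # association probability (Eq. 9)
    r1, r2 = rng.choice(n_i, size=2, replace=False)   # two random parents
    j1, j2 = rng.choice(d, size=2, replace=False)     # two guaranteed dims
    pup = np.empty(d)
    for j in range(d):
        rnd = rng.random()
        if rnd < p_s or j == j1:            # inherit from parent r1
            pup[j] = soc[r1, j]
        elif rnd >= p_s + p_a or j == j2:   # inherit from parent r2
            pup[j] = soc[r2, j]
        else:                               # random gene R_j
            pup[j] = lb[j] + rng.random() * (ub[j] - lb[j])
    return pup

soc = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])
pup = breed_pup(soc, np.zeros(3), np.full(3, 2.0), rng=0)
```

The guaranteed dimensions \(j_1\) and \(j_2\) ensure that each pup always carries at least one gene from each parent.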

To keep the population size constant, COA considers the set \(\omega\) of coyotes in the pack that are worse adapted to the environment than the pup, with \(\varphi\) denoting its cardinality: when \(\varphi =1\), the only coyote in \(\omega\) dies; when \(\varphi >1\), the oldest coyote in \(\omega\) dies and the pup survives; and when \(\varphi <1\), the pup alone cannot satisfy the survival condition and dies. Meanwhile, to represent cultural exchange within the pack, COA defines the influence of the alpha (\({\updelta }_{1}\)) and the influence of the pack (\({\updelta }_{2}\)): \({\updelta }_{1}\), guided by the best individual, pulls the coyote toward the optimum, while \({\updelta }_{2}\), guided by the pack culture, reduces the probability of falling into a local optimum. With \({cr}_{1}\) and \({cr}_{2}\) denoting two random coyotes, \({\updelta }_{1}\) and \({\updelta }_{2}\) are written as:

$$\updelta _{1} = alpha^{p,t} - soc_{{cr_{1} }}^{p,t}$$
(10)
$${\updelta }_{2}= {cult}^{p,t}-{ soc}_{{cr}_{2}}^{p,t}$$
(11)

After the two influence factors \({\updelta }_{1}\) and \({\updelta }_{2}\) have been computed, the coyote's new social condition (12) is generated by weighting the alpha and pack influences with two random numbers \({r}_{1}\) and \({r}_{2}\) in [0, 1], and the new social condition (the coyote's position) is evaluated in (13).

$${new\_soc}_{i}^{p,t}= {soc}_{i}^{p,t} + {r}_{1} \times {\updelta }_{1} + {r}_{2} \times {\updelta }_{2}$$
(12)
$${new\_fit}_{i}^{p,t}= f\left({new\_soc}_{i}^{p,t}\right)$$
(13)

Finally, a greedy selection updates the social condition (the coyote's position) as in (14); the optimized solution of the problem is the social condition of the coyote best adapted to the environment.

$${soc}_{i}^{p,t+1} = \left\{\begin{array}{c}{new\_soc}_{i}^{p,t},{new\_fit}_{i}^{p,t}<{fit}_{i}^{p,t} \\ {soc}_{i}^{p,t}, otherwise\end{array}\right.$$
(14)
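Equations (10)-(14) together form one coyote update, which can be sketched as follows (names are ours; a minimization problem is assumed, matching the acceptance test of Eq. 14):

```python
import numpy as np

def update_coyote(soc_i, fit_i, alpha, cult, soc_cr1, soc_cr2, f, rng=None):
    """One COA update: alpha influence (Eq. 10), pack influence (Eq. 11),
    candidate position (Eq. 12), evaluation (Eq. 13), greedy choice (Eq. 14)."""
    rng = np.random.default_rng(rng)
    delta1 = alpha - soc_cr1                       # pull toward the alpha
    delta2 = cult - soc_cr2                        # pull toward the culture
    new_soc = soc_i + rng.random() * delta1 + rng.random() * delta2
    new_fit = f(new_soc)
    if new_fit < fit_i:                            # keep only improvements
        return new_soc, new_fit
    return soc_i, fit_i

f = lambda x: float(np.sum(x ** 2))
x = np.array([3.0, 3.0])
new_x, new_f = update_coyote(x, f(x), np.zeros(2), np.ones(2), x, x, f, rng=1)
```

Because of the greedy step, the fitness of a coyote can never get worse from one iteration to the next.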

3 Coyote Optimization Algorithm with Linear Convergence

It is important for optimization algorithms to strike a balance between exploration and exploitation. In the classic COA, the coyote's position update distance (12) is obtained by multiplying the influence factors \({\updelta }_{1}\) and \({\updelta }_{2}\) by two random numbers in [0, 1]. This makes the coyote positions tend toward an average, so the global search capability in the early stage of the algorithm is insufficient and the local search cannot go deep in the later stage. At the same time, when the coyote social culture (6) is computed, it is dragged down by poorly adapted coyotes, resulting in poor final convergence.

To overcome these limitations of the conventional COA, the linear convergence strategy of GWO is adopted, and the linear control parameter a is calculated as follows.

$$a=2- \left(2 \times t/{Max}_{iter}\right)$$
(15)

Two random movement vectors A then replace the two random numbers, so that the algorithm moves widely in the early stage for better global exploration and performs a deep local search in the later stage, allowing it to converge to better results in limited time. The value of a is 2 at the beginning of the iterations and decreases linearly to 0 as the iterations proceed. The movement vector A is therefore calculated as follows.

$$A=2 \times a \times {r}_{1}-a$$
(16)

Here, \({r}_{1}\) is a random number between 0 and 1, so A lies in the range \([-a, a]\). The social condition of the new coyote is then generated by (17). The pseudo code of the proposed method is presented in Fig. 1.

$${new\_soc}_{i}^{p,t}= {soc}_{i}^{p,t} + {A}_{1} \times {\updelta }_{1} + {A}_{2} \times {\updelta }_{2}$$
(17)
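The proposed update of Eqs. (15)-(17) can be sketched as follows (a minimal illustration; function and parameter names are ours):

```python
import numpy as np

def coalc_step(soc_i, delta1, delta2, t, max_iter, rng=None):
    """COALC position update: a decays linearly from 2 to 0 (Eq. 15),
    A = 2*a*r - a lies in [-a, a] (Eq. 16), and the two A vectors
    replace the [0, 1] weights of Eq. (12), as in Eq. (17)."""
    rng = np.random.default_rng(rng)
    a = 2.0 - 2.0 * t / max_iter       # Eq. 15: 2 at t = 0, 0 at t = max_iter
    A1 = 2.0 * a * rng.random() - a    # Eq. 16
    A2 = 2.0 * a * rng.random() - a
    return soc_i + A1 * delta1 + A2 * delta2   # Eq. 17
```

Early on (a near 2) the step can be up to twice the influence vectors and may point away from them, favoring exploration; as a shrinks toward 0, the coyote barely moves, favoring exploitation.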
Fig. 1. Pseudo code of the proposed method.

COA uses the median of the coyotes' information to form the social culture, but this is easily affected by the coyote with the lowest adaptability, so the algorithm cannot quickly converge to a better region in the early iterations. Therefore, the hunting relationship between the red fox and the hunter in RFO is applied in COA to simulate a coyote straying into the range of human activity and being hunted: the number of coyotes killed by the hunter, H (18), is computed from the linear control parameter and rounded. Over time, H gradually decreases to 0, so the mechanism is no longer used in the later stage of the algorithm, which avoids falling into a local optimum.

$$H= \left[{N}_{i} \times a \times 0.1\right]$$
(18)
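Reading the brackets in (18) as rounding to the nearest integer, H can be computed as follows (a sketch; the function name is ours):

```python
def hunted_count(n_i, t, max_iter):
    """Number of worst coyotes removed by the hunter (Eq. 18):
    N_i * a * 0.1, rounded; decays to 0 as a goes from 2 to 0."""
    a = 2.0 - 2.0 * t / max_iter   # linear control parameter (Eq. 15)
    return round(n_i * a * 0.1)
```

For \({N}_{i}=5\), at most one coyote per pack is hunted at the start of the run, and none in the second half of the iterations.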

To maintain the total population, new coyotes are produced: each newborn combines the information of the best coyote (\({best}_{1}\)) and the second-best coyote (\({best}_{2}\)) in the pack. The location of a newborn coyote is given by (19), where k is a random vector in the range [0, 1]. The pseudocode of COALC follows from substituting these formulas.

$${new\_soc}_{i}^{p,t}=k* \frac{{soc}_{{best}_{1}}^{p,t}+{soc}_{{best}_{2}}^{p,t}}{2}$$
(19)
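A sketch of replacing a hunted coyote with a newborn from Eq. (19), under our assumption that the worst-fitness coyote is the one removed (names are ours):

```python
import numpy as np

def replace_hunted(soc, fit, rng=None):
    """Replace the worst coyote of a pack with a pup bred from the two
    best (Eq. 19): new_soc = k * (best1 + best2) / 2, k random in [0, 1]^d."""
    rng = np.random.default_rng(rng)
    order = np.argsort(fit)                       # ascending: best first
    best1, best2 = soc[order[0]], soc[order[1]]
    k = rng.random(soc.shape[1])                  # random vector in [0, 1]
    soc[order[-1]] = k * (best1 + best2) / 2.0    # overwrite the worst coyote
    return soc
```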

4 Experimental Settings and Experimental Results

4.1 Benchmarks Functions and Algorithms Setup

Table 1. Global optimization, dimensions and search range of ten CEC 2019 test functions

The proposed COALC is evaluated on the 10 benchmark functions of the IEEE CEC 2019 test suite (CEC2019) [22] shown in Table 1, where \({F}_{i}^{*}\) is the global optimum and D is the dimension of the optimization problem. These benchmarks differ in the number of local optima, dimensionality, and search range. In the CEC2019 suite, the functions f1, f2, and f3 depend entirely on their parameters and are neither rotated nor shifted. Among them, f1 and f2 are error functions that require highly conditioned solutions, and f3 models atomic interactions. In f9 the optimum is difficult to find directly, and the optimization algorithm must search deeply within a circular groove. f4, f5, f6, f7, f8, and f10 are classic optimization problems.

In the benchmark test, this paper compares the proposed COALC with the standard COA, ICOA, GWO, and RFO. For COALC, the standard COA, and ICOA, the number of packs \({N}_{p}\) is set to 6 and the number of coyotes per pack \({N}_{i}\) to 5. The numbers of grey wolves and foxes for GWO and RFO are set to 30, and all the algorithms otherwise keep their original parameter settings. Every compared heuristic therefore runs with a population of 30. The experiments were run on Python 3.8.8.

4.2 Comparison and Analysis of Experimental Results

Table 2. Experiment results of five optimizers
Fig. 2. Median convergence characteristics of five optimizers.

Each optimizer is run 25 independent times on CEC2019, and the stopping criterion is the population size times 500 iterations; the maximum number of fitness evaluations (FEs) is therefore 15,000. The average error from the global optimum and the standard deviation are shown in Table 2, with the best performance in bold. Figure 2 shows that for most of the benchmark functions COALC finds better solutions than the other methods. COALC acquires better exploration capability by inheriting the hunter-prey relationship from RFO, and is most clearly superior to the other methods on the f1 and f2 functions.

5 Conclusion and Future Research

The main contribution of this paper is an improved COA, named COALC, that incorporates the convergence factor of the GWO algorithm and a mechanism for eliminating the worst coyotes. The convergence factor allows COA to achieve better exploration and exploitation in limited time, while eliminating poorly adapted coyotes improves the convergence speed. Benchmark experiments on the CEC2019 test functions compare the proposed COALC with recent meta-heuristic algorithms such as COA, ICOA, GWO, and RFO; in most cases COALC obtains a better global solution than the other algorithms.