1 Introduction

Metaheuristic algorithms have become very popular over the past decade. This popularity stems from several key properties: flexibility, freedom from gradient information, simple structure, and ease of understanding, which allow them to solve a wide variety of problems conveniently and quickly. In Fister et al. (2013), metaheuristic algorithms are divided into four major categories: swarm intelligence (SI) based, bio-inspired (but not SI-based), physics/chemistry-based, and others. Evolutionary algorithms simulate the principles of natural evolution. The most popular of these is the genetic algorithm (GA) (Goldberg and Holland 1988), which simulates Darwinian evolution: optimization starts from a set of stochastic solutions to a specific problem and, after evaluating the objective function, updates the variables according to their fitness values. Other evolutionary algorithms include differential evolution (DE) (Rainer and Price 1997), and ecosystem simulation algorithms such as biogeography-based optimization (BBO) (Simon 2008). Swarm intelligence algorithms include particle swarm optimization (PSO) (Kennedy 2011) and ant colony optimization (ACO) (Dorigo et al. 2006), as well as the artificial bee colony (ABC) (Karaboga and Basturk 2007), bat algorithm (BA) (Yang 2010), cuckoo search (CS) (Yang and Deb 2009), grey wolf optimizer (GWO) (Mirjalili et al. 2014), whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), dragonfly algorithm (DA) (Mirjalili 2016), flower pollination algorithm (FPA) (Yang 2012), and satin bowerbird optimizer (SBO) (Moosavi and Bardsiri 2017).

Neural networks are training algorithms that simulate the computational principles of the human brain. In 1996, Huang gave a systematic theoretical treatment of pattern recognition and neural networks (Huang 1996). In 1999, Huang et al. proposed linear and nonlinear feedforward network classification (Huang and Ma 1999). In 2002, Huang studied radial basis probabilistic neural network models and their applications (Huang 1999), and in 2004 Huang proposed a constructive method for finding arbitrary roots of polynomials using neural networks (Huang 2004). Zhao et al. applied genetic optimization to radial basis probabilistic neural networks (Shang et al. 2006). In 2005, Huang proposed determining radial basis probabilistic neural networks by a recursive orthogonal least squares algorithm (Zhao et al. 2004). In 2006, Du et al. proposed a new full-structure optimization algorithm for radial basis probabilistic neural networks (Huang and Zhao 2005). Shang et al. used the FastICA algorithm and a radial basis probabilistic neural network for palmprint recognition (Du et al. 2007). In 2007, Zheng et al. proposed the MISEP method (Shang et al. 2006). In 2008, Huang et al. studied a hybrid structure optimization method for radial basis probabilistic neural networks (Li et al. 2009). Han et al. proposed encoding a priori information into feedforward neural networks through a new constraint learning algorithm for function approximation (Huang and Du 2008), and proposed modified constraint learning algorithms to incorporate additional functional constraints into neural networks (Han and Huang 2008). In 2018, Shayanfar et al. proposed the farmland fertility algorithm, a new metaheuristic for continuous optimization problems (Shayanfar and Gharehchopogh 2018). Klein et al. proposed a meerkat-inspired algorithm for global optimization problems (Klein and dos Santos Coelho 2018). Mortazavia et al. proposed the interactive search algorithm, a hybrid metaheuristic optimizer (Mortazavia et al. 2018).
Pierezan et al. proposed the coyote optimization algorithm, a metaheuristic for global optimization problems (Pierezan and Dos Santos Coelho 2018). Klein et al. proposed a cheetah-based swarm intelligence optimization algorithm (Klein et al. 2018). In 2019, Segundo et al. applied the falcon optimization algorithm to the design of heat exchangers (de Vasconcelos Segundo et al. 2019). A new approach to fuzzy optimization based on a generalization of the Bellman-Zadeh (BZ) concept was proposed by Angelov; it consists of a parametric generalization of the intersection of fuzzy sets and a generalized defuzzification method (Angelov 1994). Shadravan et al. proposed the sailfish optimizer, a novel nature-inspired metaheuristic for solving constrained engineering optimization problems (Shadravan et al. 2019).

The satin bowerbird optimizer (SBO) is a new metaheuristic optimization algorithm proposed by Moosavi and Bardsiri (2017). It has been shown that the SBO can provide relatively competitive accuracy. Because this optimization algorithm is flexible, efficient and simple, it has been successfully applied to problems such as software development effort estimation. Seyyed et al. applied the SBO, linked with ANFIS, to the software development effort estimation problem and to congestion management. El-Hay et al. applied the satin bowerbird algorithm to steady-state and dynamic models of solid oxide fuel cells (Han et al. 2008). Zhao et al. used a neural network committee for human face recognition (El-Hay et al. 2018).

The complex-valued method can be applied to the encoding of individuals in an optimization algorithm, or to the representation of the weights of a neural network. The original satin bowerbird optimizer uses real-number encoding for each individual. This paper introduces complex numbers into the algorithm, using a diploid representation for each individual. Since each complex number expresses two-dimensional information, this method greatly expands the dimension of the expressible feature space. Here, each complex number corresponds to one actual decision variable. Although the real and imaginary parts of the complex number are themselves real numbers, compared with the traditional encoding the algorithm uses two parameters, a real part and an imaginary part, to represent each independent variable. This greatly improves the diversity of individuals, further increases the convergence accuracy and convergence speed of the SBO, and helps improve its search efficiency. Another important feature is that the multiple forms of representation and operation provided by complex numbers yield a new method of solving practical problems, namely the complex-valued CSBO algorithm. The original satin bowerbird optimizer (SBO) suffers from slow convergence and low convergence accuracy. To remedy these shortcomings, complex encoding is introduced for the bower individuals: the real encoding of each bower becomes a complex encoding, which increases the individual diversity of the original algorithm, improves the convergence speed and accuracy, and helps prevent the algorithm from falling into local optima in later iterations.

The remainder of this paper is organized as follows: Sect. 2 gives a brief introduction to the SBO; Sect. 3 proposes the complex-valued encoding satin bowerbird optimizer (CSBO). Section 4 evaluates the performance of the algorithm on standard test functions and discusses the experimental results. Section 5 applies the CSBO algorithm to the motor parameter identification problem and compares it with other algorithms. Finally, Sect. 6 presents conclusions and future work.

2 Satin Bowerbird optimizer (SBO)

The male satin bowerbird builds a dedicated structure, called a bower, to attract females. The bower is decorated with objects such as flowers and feathers, and these decorations are important in attracting females. Notably, males build their bowers both by innate instinct and by imitating the way other males build. Based on these behaviors of the satin bowerbird, the SBO algorithm is divided into the following stages.

2.1 Generating a set of random bower

The initial population consists of a set of bower positions. Each position is defined as an n-dimensional vector of parameters. These values are initialized randomly with a uniform distribution between the lower and upper parameter bounds. The parameters of each bower correspond to the variables of the optimization problem, and their combination determines the attractiveness of the bower.

2.2 Calculating the probability of each population member

The attraction probability of a bower built by a male bird is the key factor in whether it attracts a mate. A female satin bowerbird selects a bower based on its probability, which is calculated by Eq. (1)

$$ \mathop {prob}\nolimits_{n} = \frac{{\mathop {fit}\nolimits_{n}}}{{\sum\nolimits_{m = 1}^{nb} {\mathop {fit}\nolimits_{m}}}} $$
(1)

where \( \mathop {fit}\nolimits_{n} \) is the fitness of the \( nth \) solution and \( nb \) is the number of bowers. The value of \( \mathop {fit}\nolimits_{n} \) is obtained from Eq. (2)

$$ \mathop {fit}\nolimits_{n} = \left\{{\begin{array}{*{20}c} {\frac{1}{{1 + f(\mathop X\nolimits_{n})}},f(\mathop X\nolimits_{n}) \ge 0} \\ {1 + \left| {f(\mathop X\nolimits_{n})} \right|,f(\mathop X\nolimits_{n}) < 0} \\ \end{array}} \right. $$
(2)

where \( f(\mathop X\nolimits_{n}) \) is the value of the cost function at the \( nth \) position (the \( nth \) bower).
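Equations (1) and (2) can be sketched compactly. The following Python fragment is a minimal illustration (the paper's experiments use Matlab; function names are ours) of mapping raw cost values to fitness and selection probabilities:

```python
import numpy as np

def fitness(cost):
    """Map raw cost values f(X_n) to fitness, as in Eq. (2)."""
    cost = np.asarray(cost, dtype=float)
    return np.where(cost >= 0, 1.0 / (1.0 + cost), 1.0 + np.abs(cost))

def selection_probabilities(cost):
    """Normalize fitness values into selection probabilities, Eq. (1)."""
    fit = fitness(cost)
    return fit / fit.sum()
```

Note that Eq. (2) guarantees every fitness value is positive, so the probabilities of Eq. (1) are well defined even for negative cost values.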

2.3 Upgrade bower position

Elitism is one of the important characteristics of evolutionary algorithms; it allows the best solution (or solutions) to be retained at every stage of the optimization process. All birds use natural instinct to build bowers, and the satin bowerbird is no exception: each male uses his natural instinct to build and decorate his bower. Since all males use materials to decorate their bowers, the experience of the male bird is an important factor in its attractiveness. In each iteration, the best individual is preserved as the elite of that iteration. The elite has the highest fitness value and should influence the other positions.

In each iteration of the algorithm, every bird's bower is updated according to Eq. (3)

$$ \mathop X\nolimits_{nk}^{new} = \mathop X\nolimits_{nk}^{old} + \mathop \lambda \nolimits_{k} \left({\left({\frac{{\mathop X\nolimits_{jk} + \mathop X\nolimits_{elite,k}}}{2}} \right) - \mathop X\nolimits_{nk}^{old}} \right) $$
(3)

where \( \mathop X\nolimits_{n} \) is the \( nth \) bower (solution vector) and \( \mathop X\nolimits_{nk} \) is the \( kth \) member of that vector. In Eq. (3), the index \( j \) is determined from the probabilities of the positions: it is selected by the roulette wheel procedure, so a solution with a larger probability has a greater chance of being selected as \( \mathop X\nolimits_{j} \). \( \mathop X\nolimits_{elite} \) represents the position of the elite, which is stored in each loop of the algorithm; the elite position is the bower with the highest fitness in the current iteration.

The parameter \( \mathop \lambda \nolimits_{k} \) determines the attractiveness of the target bower and sets the step size for each variable. It is determined by Eq. (4)

$$ \mathop \lambda \nolimits_{k} = \frac{\alpha}{{1 + \mathop p\nolimits_{j}}} $$
(4)

In Eq. (4), \( \alpha \) is the greatest step size and \( \mathop p\nolimits_{j} \) is the probability of the target bower obtained from Eq. (1). Since the probability values lie between 0 and 1, adding 1 to the denominator avoids division by zero in Eq. (4). As is obvious from Eq. (4), the step size is inversely proportional to the probability of the target position; in other words, when the probability of the target position is greater (for a constant \( \alpha \)), the movement toward that position is made more carefully. The largest step size, \( \alpha \), occurs when the probability of the target position is 0, while the smallest step size, \( {\alpha \mathord{\left/{\vphantom {\alpha 2}} \right. \kern-0pt} 2} \), occurs when the probability of the target position is 1.
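The update of Eqs. (3) and (4), with roulette-wheel selection of the target bower, can be sketched as follows. This is an illustrative Python fragment (names and the seeded generator are our choices, not part of the original algorithm description):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility of the sketch

def update_bowers(X, X_elite, prob, alpha=0.94):
    """One SBO position update, Eqs. (3)-(4).

    X       : (n, dim) array of bower positions
    X_elite : (dim,) elite position
    prob    : (n,) selection probabilities from Eq. (1)
    """
    n, dim = X.shape
    X_new = X.copy()
    for i in range(n):
        for k in range(dim):
            j = rng.choice(n, p=prob)          # roulette-wheel selection
            lam = alpha / (1.0 + prob[j])      # Eq. (4): step size
            X_new[i, k] += lam * ((X[j, k] + X_elite[k]) / 2.0 - X[i, k])
    return X_new
```

Each component moves toward the midpoint of a probabilistically chosen bower and the elite, with a step inversely related to the target's probability.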

2.4 Mutation

While males are busy building bowers on the ground, they may be attacked by other animals or simply ignored. In many cases, stronger birds steal materials from weaker birds or even destroy their bowers. Thus, at the end of each cycle of the algorithm, random changes are applied with a certain probability. The change follows a normal distribution, as in Eq. (5)

$$ \mathop X\nolimits_{nk}^{new} \sim N(\mathop X\nolimits_{nk}^{old} ,\mathop \sigma \nolimits^{2} ) $$
(5)

For ease of understanding and more intuitive programming, Eq. (5) can be expressed as Eq. (6)

$$ \mathop X\nolimits_{nk}^{new} = \mathop X\nolimits_{nk}^{old} + (\sigma *N(0,1)) $$
(6)

In Eq. (6), \( \sigma \) is a proportion of the search space width and is calculated by Eq. (7)

$$ \sigma = z*(\mathop {\text{var}}\nolimits_{\max} - \mathop {\text{var}}\nolimits_{\min}) $$
(7)

In Eq. (7), \( \mathop {\text{var}}\nolimits_{\max} \) and \( \mathop {\text{var}}\nolimits_{\min} \) are the upper and lower bounds assigned to the variables, respectively. The parameter \( z \) is the percentage of the difference between the upper and lower bounds by which a variable may vary.
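The mutation of Eqs. (6) and (7) is a Gaussian perturbation applied with some probability. A minimal Python sketch (the per-element mutation probability is a parameter of our sketch; the paper applies mutation "at a certain probability" without fixing the value here):

```python
import numpy as np

rng = np.random.default_rng(1)  # seeded for reproducibility of the sketch

def mutate(X, var_min, var_max, z=0.02, p_mutation=0.05):
    """Gaussian mutation, Eqs. (6)-(7): sigma = z * (var_max - var_min).
    Each element is perturbed independently with probability p_mutation."""
    sigma = z * (var_max - var_min)
    mask = rng.random(X.shape) < p_mutation
    return np.where(mask, X + sigma * rng.standard_normal(X.shape), X)
```

Unmutated elements are returned unchanged; mutated elements receive zero-mean Gaussian noise whose scale is a fraction \( z \) of the variable range.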

3 Complex-valued Satin Bowerbird optimizer (CSBO)

3.1 The complex-valued encoding method

In the biological world, the chromosomes of complex biological tissues generally have a double- or multi-stranded structure. Because complex-valued encoding is two-dimensional, this paper naturally uses it to represent a diploid. The references (El-Hay et al. 2018; Xiong et al. 2015; Rashid et al. 2016; Abdel-Baset et al. 2017; Sakthivel et al. 2010) also used complex encoding to represent individuals, and this article is inspired by them. In particular, a complex number can describe a pair of alleles in a chromosome pair: the real and imaginary parts of the complex number are called the real gene and the virtual gene, respectively. For a problem with \( n \) arguments, there are \( n \) complex numbers, and the corresponding \( n \) complex bower components are written as in Eq. (8):

$$ X_{n} = R_{n} + iI_{n} $$
(8)

The gene of the bower can thus be expressed as a diploid structure and recorded as (\( R_{n},\;I_{n} \)), where \( R_{n} \) and \( \mathop I\nolimits_{n} \) represent the real and imaginary parts of the complex number, respectively. Therefore, the \( nth \) bird's bower can be expressed as shown in Table 1.

Table 1 Bower chromosome model

3.2 Initialize the complex-valued encoding population

Let the range of the function argument be the interval \( [\mathop {\text{var}}\nolimits_{R\min},\mathop {\text{var}}\nolimits_{R\max}] \). Randomly generate \( n \) complex moduli \( \mathop \rho \nolimits_{n} \) and arguments \( \mathop \theta \nolimits_{n} \); the resulting values satisfy:

$$ R_{n} + iI_{n} = \rho_{n} (\cos \theta_{n} + i\sin \theta_{n}) $$
(7)
$$ \rho_{n} \in \left[{0,\;\frac{{\text{var}_{R\max} - \text{var}_{R\min}}}{2}} \right],\;\;\theta_{n} \in \left[{\text{var}_{I\min},\;\text{var}_{I\max}} \right] $$
(8)

where \( \text{var}_{R\max} \) and \( \text{var}_{R\min} \) denote the upper and lower bounds of the real part, and \( \text{var}_{I\max} \) and \( \text{var}_{I\min} \) denote the upper and lower bounds of the imaginary part, set to \( 2\pi \) and \( - 2\pi \), respectively. The \( n \) real and imaginary parts are assigned to the real and virtual genes of the bower to produce a bird's bower.
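The initialization above can be sketched directly: draw a modulus and an argument per gene, then convert to real and imaginary parts. An illustrative Python fragment (names and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(2)  # seeded for reproducibility of the sketch

def init_complex_population(n, dim, var_rmin, var_rmax):
    """Random complex genes from modulus/argument pairs.
    Moduli lie in [0, (var_rmax - var_rmin)/2]; arguments in [-2*pi, 2*pi]."""
    rho = rng.uniform(0.0, (var_rmax - var_rmin) / 2.0, size=(n, dim))
    theta = rng.uniform(-2.0 * np.pi, 2.0 * np.pi, size=(n, dim))
    # real genes R_n and imaginary genes I_n
    return rho * np.cos(theta), rho * np.sin(theta)
```

By construction, the modulus of every generated gene is bounded by half the width of the real-variable range.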

3.3 The updating method of CSBO

3.3.1 Upgrade position formula

$$ \mathop X\nolimits_{Rn}^{new} + i\mathop X\nolimits_{In}^{new} = \mathop X\nolimits_{Rn}^{old} + i\mathop X\nolimits_{In}^{old} + \mathop \lambda \nolimits_{n} \left({\left({\frac{{\mathop X\nolimits_{Rj} + \mathop X\nolimits_{{\text{Re} lite}}}}{2}} \right){-}\mathop X\nolimits_{Rn}^{old}} \right) + i\mathop \lambda \nolimits_{n} \left({\left({\frac{{\mathop X\nolimits_{Ij} + \mathop X\nolimits_{Ielite}}}{2}} \right){-}\mathop X\nolimits_{In}^{old}} \right) $$
(9)

where \( \mathop X\nolimits_{Rn} \) is the real part of the \( nth \) bower (solution vector), \( \mathop X\nolimits_{{\text{Re} lite}} \) represents the real part of the elite position, and \( \mathop X\nolimits_{Rj} \) is selected by the roulette wheel procedure; \( \text{var}_{R\max} \) and \( \text{var}_{R\min} \) are the upper and lower bounds of the real part. Likewise, \( \mathop X\nolimits_{In} \) is the imaginary part of the \( nth \) bower, \( \mathop X\nolimits_{Ielite} \) represents the imaginary part of the elite position, \( \mathop X\nolimits_{Ij} \) is selected by the roulette wheel procedure, and \( \text{var}_{I\max} \) and \( \text{var}_{I\min} \) are the upper and lower bounds of the imaginary part.
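Equation (9) is simply the SBO move of Eq. (3) applied independently to the real and imaginary genes. A minimal sketch (variable names are illustrative):

```python
def update_complex(R_old, I_old, R_j, I_j, R_elite, I_elite, lam):
    """Eq. (9): the SBO position update applied separately to the
    real genes and the imaginary genes of a bower. Works elementwise
    on scalars or NumPy arrays."""
    R_new = R_old + lam * ((R_j + R_elite) / 2.0 - R_old)
    I_new = I_old + lam * ((I_j + I_elite) / 2.0 - I_old)
    return R_new, I_new
```

Because the two parts are updated with the same step size \( \lambda_{n} \), the move in the complex plane is the same contraction toward the midpoint of the selected bower and the elite.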

3.3.2 Bower mutation

$$ \mathop X\nolimits_{Rn}^{new} + i\mathop X\nolimits_{In}^{new} = \mathop X\nolimits_{Rn}^{old} + \mathop {iX}\nolimits_{In}^{old} + (\mathop \sigma \nolimits_{R} *N(0,1)) + i(\mathop \sigma \nolimits_{I} *N(0,1)) $$
(10)
$$ \mathop \sigma \nolimits_{R} = z*(\mathop {\text{var}}\nolimits_{R\max} - \mathop {\text{var}}\nolimits_{R\min}) $$
(11)
$$ \mathop \sigma \nolimits_{I} = z*(\mathop {\text{var}}\nolimits_{{\text{Im} ax}} - \mathop {\text{var}}\nolimits_{{\text{Im} in}}) $$
(12)

In Eq. (10), the real and imaginary parts of the bird's bower position obey normal distributions, whose standard deviations \( \mathop \sigma \nolimits_{R} \) and \( \mathop \sigma \nolimits_{I} \) are given by Eqs. (11) and (12). Here \( \text{var}_{R\max} \) and \( \text{var}_{R\min} \) are the upper and lower bounds of the real part, and \( \text{var}_{I\max} \) and \( \text{var}_{I\min} \) are the upper and lower bounds of the imaginary part.

3.3.3 Fitness calculation

To evaluate the fitness function, the complex value of a bower must first be converted into a real number: the modulus gives the magnitude of the real number, and the sign is determined by the argument. The specific approach is shown in Eqs. (13) and (14)

$$ \rho_{n} = \sqrt {X_{Rn}^{2} + X_{In}^{2}} $$
(13)
$$ X_{n} = \rho_{n} \text{sgn} \left(\sin \left(\frac{{X_{In}}}{{\rho_{n}}}\right)\right) + \frac{{\text{var}_{R\max} + \text{var}_{R\min}}}{2} $$
(14)

where \( \mathop \rho \nolimits_{n} \) denotes the \( nth \) modulus, \( \mathop X\nolimits_{Rn} \) and \( \mathop X\nolimits_{In} \) denote the real and imaginary parts of the complex number, respectively, and \( \mathop X\nolimits_{n} \) is the transformed real-valued independent variable.
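The decoding of Eqs. (13) and (14) can be written directly; the sketch below (illustrative Python, with a guard against a zero modulus that the paper does not discuss) converts complex genes into real decision variables:

```python
import numpy as np

def to_real(R, I, var_rmin, var_rmax):
    """Eqs. (13)-(14): the modulus gives the magnitude, the sign of
    sin(I/rho) gives the sign, and the result is shifted to the
    centre of the real-variable range."""
    R = np.asarray(R, dtype=float)
    I = np.asarray(I, dtype=float)
    rho = np.sqrt(R**2 + I**2)                            # Eq. (13)
    safe_rho = np.where(rho == 0.0, 1.0, rho)             # avoid 0/0
    sign = np.sign(np.sin(I / safe_rho))
    return rho * sign + (var_rmax + var_rmin) / 2.0       # Eq. (14)
```

For example, with \( R = 3 \), \( I = 4 \) and a symmetric range \( [-10, 10] \), the modulus is 5, \( \sin(4/5) > 0 \), so the decoded variable is \( +5 \).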

3.3.4 Calculating the probability of each population member

$$ \mathop {prob}\nolimits_{n} = \frac{{\mathop {fit}\nolimits_{n}}}{{\sum\nolimits_{m = 1}^{nb} {\mathop {fit}\nolimits_{m}}}} $$
(15)
$$ fit_{n} = \left\{ {\begin{array}{*{20}l} {\frac{1}{{1 + f( \, X_{n} )}},} &\quad {f( \, X_{n} ) \ge 0} \\ {1 + \left| {f( \, X_{n} )} \right|,} &\quad {f( \, X_{n} ) < 0} \\ \end{array} } \right. $$
(16)
$$ \mathop \lambda \nolimits_{n} = \frac{\alpha}{{1 + \mathop p\nolimits_{j}}} $$
(17)

where \( \alpha \) is the maximum step size, and \( \mathop p\nolimits_{j} \) is the probability of the target bower obtained from the objective function via Eqs. (15) and (16).

3.4 CSBO algorithm

The modulus corresponds to the magnitude of the independent variable, and the argument determines its sign. Although two genes with the same modulus but different arguments sometimes produce the same objective function output, using the two values of the real and imaginary parts to represent one independent variable enriches the information carried by each bower, increases the diversity of the bowers, and reduces the chance of falling into a local optimum. The pseudo code of CSBO is as follows:

figure a
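As a concrete companion to the pseudo code, the following is a minimal end-to-end Python sketch of the CSBO loop (the paper's experiments use Matlab; the default parameter values \( \alpha = 0.94 \), \( z = 0.02 \) and the 5% mutation probability are illustrative assumptions, not values prescribed by this section):

```python
import numpy as np

def csbo(cost_fn, dim, var_min, var_max, n_bowers=30, iters=200,
         alpha=0.94, z=0.02, p_mut=0.05, seed=0):
    """Minimal CSBO sketch: complex genes, SBO-style update, mutation."""
    rng = np.random.default_rng(seed)
    # initialise complex genes from random modulus/argument pairs
    rho = rng.uniform(0.0, (var_max - var_min) / 2.0, (n_bowers, dim))
    theta = rng.uniform(-2.0 * np.pi, 2.0 * np.pi, (n_bowers, dim))
    R, I = rho * np.cos(theta), rho * np.sin(theta)

    def decode(R, I):
        # Eqs. (13)-(14): modulus gives magnitude, argument gives sign
        m = np.sqrt(R**2 + I**2) + 1e-300
        x = m * np.sign(np.sin(I / m)) + (var_max + var_min) / 2.0
        return np.clip(x, var_min, var_max)

    def fit(c):  # Eq. (16)
        return np.where(c >= 0, 1.0 / (1.0 + c), 1.0 + np.abs(c))

    cost = np.array([cost_fn(x) for x in decode(R, I)])
    b = int(np.argmin(cost))
    R_el, I_el, best_cost = R[b].copy(), I[b].copy(), cost[b]

    for _ in range(iters):
        f = fit(cost)
        p = f / f.sum()                            # Eq. (15)
        for i in range(n_bowers):
            j = rng.choice(n_bowers, p=p)          # roulette wheel
            lam = alpha / (1.0 + p[j])             # Eq. (17)
            R[i] += lam * ((R[j] + R_el) / 2.0 - R[i])   # Eq. (9)
            I[i] += lam * ((I[j] + I_el) / 2.0 - I[i])
            mask = rng.random(dim) < p_mut         # mutation, Eqs. (10)-(12)
            R[i] += mask * z * (var_max - var_min) * rng.standard_normal(dim)
            I[i] += mask * z * (4.0 * np.pi) * rng.standard_normal(dim)
        cost = np.array([cost_fn(x) for x in decode(R, I)])
        b = int(np.argmin(cost))
        if cost[b] < best_cost:                    # elitism
            best_cost = cost[b]
            R_el, I_el = R[b].copy(), I[b].copy()
    return decode(R_el[None, :], I_el[None, :])[0], best_cost
```

For instance, `csbo(lambda x: float(np.sum(x**2)), dim=2, var_min=-10.0, var_max=10.0)` minimizes the 2-dimensional sphere function.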

The CSBO algorithm flow chart is described as follow (Fig. 1).

Fig. 1
figure 1

The CSBO algorithm flow chart

4 Experimental results and discussion

4.1 Simulation platform

All calculations were performed in Matlab R2016a on an Intel i5 CPU running Windows 10.

4.2 Compare algorithm parameter settings

We chose five classical optimization algorithms to compare with CSBO: the artificial bee colony algorithm (ABC) (Karaboga and Basturk 2007), bat algorithm (BA) (Yang 2010), dragonfly algorithm (DA) (Mirjalili 2016), cuckoo search (CS) (Yang and Deb 2009) and the original satin bowerbird optimization algorithm (SBO) (Moosavi and Bardsiri 2017). The initial parameter settings of these algorithms are given in Table 2.

Table 2 Algorithm initialization settings

4.3 Benchmark test functions

In this section, the CSBO algorithm is evaluated on 20 benchmark test functions. These 20 reference functions are classical functions used by many researchers; despite their simplicity, we chose them so that our results can be compared with existing heuristic results. The benchmark functions are listed in Tables 3, 4, 5 and 6, where \( Dim \) denotes the dimension of the function, \( range \) is the boundary of the function's search space, and \( \mathop f\nolimits_{\min} \) is the optimal value. To verify the performance of the proposed algorithm, 20 classic benchmark functions were selected for the simulation test. To compare the runs and determine the significance of the results, the Wilcoxon rank-sum test was also performed in this section; for the statistical test, CSBO is compared pairwise with each of the other algorithms on each test function.

Table 3 Benchmark functions
Table 4 Benchmark functions test results
Table 5 p Value test results
Table 6 Manufacturer data of the motors used in the experiments

4.4 Benchmark functions

The results on the benchmark test functions are shown in Table 4; the convergence of the algorithms is shown in Figs. 2, 3, 4, 5, 6, 7 and 8, and the variance of the algorithms is shown in Figs. 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 and 21. The variance plots are based on the results of 20 consecutive runs of the program. The Wilcoxon rank-sum test is used to assess the stability difference between the CSBO algorithm and the other algorithms on \( F_{1} - F_{10} \). The value p = 0.05 was used as the threshold for interpreting the test results: p > 0.05 indicates that a result may be accidental, and p < 0.05 indicates that it is not. The ranking in Table 5 is based on the average of 20 consecutive runs of the CSBO algorithm. Examination of Table 5 shows that all p values are less than 0.05 (5% significance level).

Fig. 2
figure 2

Convergence curves of function \( F_{1} \)

Fig. 3
figure 3

Convergence curves of function \( F_{2} \)

Fig. 4
figure 4

Convergence curves of function \( F_{3} \)

Fig. 5
figure 5

Convergence curves of function \( F_{4} \)

Fig. 6
figure 6

Convergence curves of function \( F_{5} \)

Fig. 7
figure 7

Convergence curves of function \( F_{6} \)

Fig. 8
figure 8

Convergence curves of function \( F_{7} \)

Fig. 9
figure 9

Convergence curves of function \( F_{8} \)

Fig. 10
figure 10

Convergence curves of function \( F_{9} \)

Fig. 11
figure 11

Convergence curves of function \( F_{10} \)

Fig. 12
figure 12

ANOVA tests of the function \( F_{1} \)

Fig. 13
figure 13

ANOVA tests of the function \( F_{2} \)

Fig. 14
figure 14

ANOVA tests of the function \( F_{3} \)

Fig. 15
figure 15

ANOVA tests of the function \( F_{4} \)

Fig. 16
figure 16

ANOVA tests of the function \( F_{5} \)

Fig. 17
figure 17

ANOVA tests of the function \( F_{6} \)

Fig. 18
figure 18

ANOVA tests of the function \( F_{7} \)

Fig. 19
figure 19

ANOVA tests of the function \( F_{8} \)

Fig. 20
figure 20

ANOVA tests of the function \( F_{9} \)

Fig. 21
figure 21

ANOVA tests of the function \( F_{10} \)

4.5 Results discussion and analysis

The results on the benchmark test functions are shown in Table 4. It can be seen that the accuracy of the CSBO algorithm exceeds that of the other algorithms, demonstrating that using complex coding for the bower individuals effectively increases the amount of information carried by each individual, increases the diversity of the population, and improves the convergence accuracy. The convergence plots also show that, compared with the other algorithms, the CSBO algorithm has the fastest convergence speed, benefiting from the diploid representation of individuals, which enriches the information each individual carries. The variance plots further show that the CSBO algorithm is highly stable, with variance values more stable than those of the other algorithms. Wilcoxon's statistical test shows that the results are statistically significant, since most p values are less than 0.05. In summary, the experimental results show that the CSBO algorithm is very competitive in solving the standard test functions.

5 Identification problem

Induction motor parameters cannot be measured directly, so special identification methods are needed to estimate them. In this approach, the behavior of the induction motor is simulated by an equivalent circuit, which in general allows the relationships among the motor parameters to be estimated properly. In the identification process, parameter estimation is transformed into a multi-dimensional optimization problem in which the internal parameters of the induction motor are the decision variables. The approximate circuit model is shown below (Fig. 22).

Fig. 22
figure 22

Approximate circuit model

Figure 22 illustrates the approximate circuit model. In the approximate circuit model, identification tasks can be expressed as the following optimization problems:

$$ {\text{Minimize}}\;J(x),\;x = (\mathop R\nolimits_{1},\mathop R\nolimits_{2},\mathop X\nolimits_{1}) \in \mathop R\nolimits^{3} $$
(18)
$$ {\text{Subject to:}}\quad 0 \le \mathop R\nolimits_{1} \le 1,\;0 \le \mathop R\nolimits_{2} \le 1,\;0 \le \mathop X\nolimits_{1} \le 10,\;0 < s < 1 $$
(19)

where

$$J(x) = \mathop {(\,\mathop f\nolimits_{1} (x))}\nolimits^{2} + \mathop {(\,\mathop f\nolimits_{2} (x))}\nolimits^{2} + \mathop {(\,\mathop f\nolimits_{3} (x))}\nolimits^{2} $$
(20)
$$ f_{1} (x) = \frac{{\frac{{K_{t} R_{2}}}{{s\left[{\left({R_{1} + {\raise0.7ex\hbox{${R_{2}}$} \!\mathord{\left/{\vphantom {{R_{2}} S}}\right.\kern-0pt} \!\lower0.7ex\hbox{$S$}}} \right)^{2} + X_{1}^{2}} \right]}} - T_{fl}}}{{T_{fl}}} $$
(21)
$$ \mathop f\nolimits_{2} (x) = \frac{{\frac{{\mathop K\nolimits_{t} \mathop R\nolimits_{2}}}{{\mathop {(\mathop R\nolimits_{1} + \mathop R\nolimits_{2})}\nolimits^{2} + \mathop X\nolimits_{1}^{2}}} - \mathop T\nolimits_{lr}}}{{\mathop T\nolimits_{lr}}} $$
(22)
$$ \mathop f\nolimits_{3} (x) = \frac{{\frac{{\mathop K\nolimits_{t}}}{{2[\mathop R\nolimits_{1} + \sqrt {\mathop R\nolimits_{1}^{2} + \mathop X\nolimits_{1}^{2}}]}} - \mathop T\nolimits_{\max}}}{{\mathop T\nolimits_{\max}}} $$
(23)
$$ \mathop K\nolimits_{t} = \frac{{\mathop V\nolimits_{ph}^{2}}}{{\mathop \omega \nolimits_{s}}} $$
(24)

The approximate circuit model uses the manufacturer data, namely the starting torque (\( \mathop T\nolimits_{lr} \)), maximum torque (\( \mathop T\nolimits_{\max} \)), full-load torque (\( \mathop T\nolimits_{fl} \)) and motor slip (\( s \)), to calculate the stator resistance (\( \mathop R\nolimits_{1} \)), rotor resistance (\( \mathop R\nolimits_{2} \)) and stator leakage reactance (\( \mathop X\nolimits_{1} \)).
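The objective of Eqs. (20)-(24) can be written as a single function to be minimized by any of the compared optimizers. A minimal Python sketch (parameter names are ours; the numeric values used in the test are placeholders, not the manufacturer data of Table 6):

```python
import numpy as np

def motor_objective(x, Tfl, Tlr, Tmax, s, Vph, ws):
    """J(x) for the approximate circuit model, Eqs. (20)-(24).

    x = (R1, R2, X1): stator resistance, rotor resistance,
    stator leakage reactance. Tfl, Tlr, Tmax: full-load, starting
    and maximum torque; s: slip; Vph: phase voltage; ws: synchronous speed.
    """
    R1, R2, X1 = x
    Kt = Vph**2 / ws                                                  # Eq. (24)
    f1 = (Kt * R2 / (s * ((R1 + R2 / s)**2 + X1**2)) - Tfl) / Tfl     # Eq. (21)
    f2 = (Kt * R2 / ((R1 + R2)**2 + X1**2) - Tlr) / Tlr               # Eq. (22)
    f3 = (Kt / (2.0 * (R1 + np.sqrt(R1**2 + X1**2))) - Tmax) / Tmax   # Eq. (23)
    return f1**2 + f2**2 + f3**2                                      # Eq. (20)
```

As a sum of squared relative errors, \( J(x) \) is non-negative and vanishes only when the three torque equations are matched exactly.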

5.1 Initial parameter setting

We used two induction motor models to test the performance of the algorithm; the specific parameter settings are given below.

5.2 Experimental results

Each algorithm was run 30 consecutive times, and ABC, DA, BA, PSO, GSA, SBO and CSBO were compared in terms of the best, worst, average and variance of the results. The experimental results are shown in Table 7; the convergence plots are shown in Figs. 23 and 24, and the variance plots in Figs. 25 and 26. The p values are reported in Table 8.

Table 7 Induction motor experimental results
Fig. 23
figure 23

Convergence curves of the mode1

Fig. 24
figure 24

Convergence curves of the mode2

Fig. 25
figure 25

Anova tests of the mode1

Fig. 26
figure 26

Anova tests of the mode2

Table 8 p Value test results

As can be seen from Table 7, over 30 consecutive runs in each of the two models the CSBO algorithm ranks first on average, which fully verifies the superiority of CSBO in handling induction motors. Figures 23 and 24 show that CSBO has the fastest convergence speed and the best convergence accuracy, and Figs. 25 and 26 show that the CSBO results are relatively stable.

5.3 Statistical analysis

To analyze the results statistically, a nonparametric test known as the Wilcoxon analysis was performed; it assesses the differences between two related methods. Table 8 reports the p values generated by the Wilcoxon analysis for pairwise comparisons between algorithms. In this case there are six pairs: CSBO vs SBO, CSBO vs ABC, CSBO vs DA, CSBO vs PSO, CSBO vs BA, and CSBO vs GSA. Examination of Table 8 shows that all p values are less than 0.05 (5% significance level). This provides strong evidence against the null hypothesis, suggesting that the proposed method gives statistically better results than the other algorithms.

6 Conclusion and future work

In this paper, a complex-valued encoding satin bowerbird optimization algorithm (CSBO) is proposed. The algorithm introduces the idea of complex-valued coding and searches for the optimum by updating the real and imaginary parts of each individual. The proposed CSBO is compared with the real-valued SBO and other optimization algorithms on 20 standard test functions, including unimodal and multimodal functions, as well as on induction motor parameter identification, with p value tests. The results show that the proposed CSBO can significantly improve performance. Future research will focus on applying the complex-valued satin bowerbird optimizer (CSBO) to combinatorial optimization, engineering optimization and other applied fields.