Abstract
A variant of particle swarm optimization (PSO) is presented to solve the infinite impulse response (IIR) system identification problem. Called improved PSO (IPSO), it makes three significant enhancements to PSO. First, the population initialization step uses the golden ratio to segment the solution space, yielding high-quality initial solutions. Second, the particles use different inertia weights in the velocity updating step, which helps preserve the balance between global search and local search. Third, IPSO uses a normal distribution to disturb the global best particle, which enhances its capacity to escape from local optima. Together, these three operations not only guarantee high-quality solutions, strong global search capacity, and a fast convergence rate, but also avoid low diversity, excessive local search, and premature stagnation. These properties make IPSO well suited for IIR system identification problems. IPSO is applied to 12 examples. The experimental results demonstrate that IPSO obtains the best objective function values in all cases. Compared with four other PSO approaches, IPSO exhibits stronger convergence and higher stability, which clearly indicates its superior search accuracy and identification efficiency.
1 Introduction
Over the last few decades, researchers have shown increasing interest in adaptive infinite impulse response (IIR) filtering [1, 2]. This interest is reinforced by the fact that a variety of application areas, such as speech recognition, acoustics, and communications, strongly depend on adaptive signal processing. In system identification problems, the adaptive IIR filter attempts to characterize the unknown system according to some function of the error between the output of the adaptive filter and the output of the plant. In order to obtain a satisfactory identification result, it is necessary to find suitable filter coefficients that minimize the error between the output of the adaptive IIR filter and the output of the plant.
The task of choosing suitable parameters for the IIR model is a formidable one: the structure of the adaptive IIR filter is itself complicated, and the task becomes harder still when the environment is disturbed. In order to address the IIR system identification problem, a variety of efficient approaches have been developed in recent years. The artificial bee colony (ABC) algorithm [3] is a simple, robust, and flexible optimization technique; armed with these advantages, the ABC has become a powerful tool for solving many global optimization problems. Recently, Karaboga [4] designed a new approach based on the ABC algorithm for low- and high-order digital IIR filters, and the corresponding simulation results clearly demonstrate its effectiveness for designing such filters. Luitel et al. [5] used particle swarm optimization with quantum infusion (PSO-QI) [6] for IIR system identification; according to the identification results of benchmark IIR systems with full- and reduced-order models, PSO-QI obtains a lower mean squared error and more consistent convergence than the other algorithms. Dai et al. [7] used a seeker optimization algorithm (SOA) for digital IIR filter design. SOA simulates the behavior of human searching, where the search direction stems from an empirical gradient obtained by evaluating the response to position changes, and the step length stems from uncertainty reasoning using a simple fuzzy rule; the simulation results exhibit the efficiency of SOA for IIR filter design. Panda et al. [8] formulated the IIR system identification task as an optimization problem and employed a cat swarm optimization (CSO) algorithm [9] to establish an adaptive learning rule for the model; both actual- and reduced-order identification of a few benchmark IIR plants is conducted by simulation study.
Experimental results demonstrate the potential of CSO for identification of the IIR plant as compared to the other algorithms. Upadhyay et al. [10] proposed a craziness-based particle swarm optimization (CRPSO) algorithm for the IIR system identification problem. CRPSO utilizes several random variables to enhance the processes of exploration and exploitation [65, 66] in the multidimensional search space. Furthermore, a craziness factor is introduced into the velocity updating of the particle swarm optimization (PSO) algorithm [11, 12]; as a result, population diversity is ensured and convergence is improved. In addition, many adaptive system identification methods have been reported in the literature [13–17].
The remainder of this paper is organized as follows. In Sect. 2, the adaptive IIR filter model and several system data are introduced, while Sect. 3 reviews four PSO algorithms. In Sect. 4, the proposed variant is presented. In Sect. 5, five PSOs are used to solve 12 IIR system identification problems. Finally, Sect. 6 presents the conclusions.
2 Adaptive IIR filter model
The goal of IIR system identification is to tune the coefficients of the adaptive IIR filter by adaptive algorithms, so that the filter's output approaches the output of the unknown system when the same input signal is applied simultaneously to both the unknown plant and the adaptive filter. The block diagram of adaptive IIR system identification is shown in Fig. 1.
This section studies the design method of the adaptive IIR filter. The input–output relation can be described in terms of the following difference equation [10, 18]:

\( \sum\nolimits_{k = 0}^{L} {a_{k} y(n - k)} = \sum\nolimits_{k = 0}^{K} {b_{k} x(n - k)} \)  (1)

where \( x(n) \) and \( y(n) \) denote the filter's input and output, respectively, and \( L( \ge K) \) is the filter's order. Assuming \( a_{0} = 1 \), the transfer function of the adaptive IIR filter is given by

\( H_{a} (z) = \frac{{\sum\nolimits_{k = 0}^{K} {b_{k} z^{ - k} } }}{{1 + \sum\nolimits_{k = 1}^{L} {a_{k} z^{ - k} } }} \)  (2)
In the design method, the adaptive IIR filter \( H_{a} (z) \) is utilized to identify the unknown plant of transfer function \( H_{u} (z) \) so that the output of the unknown IIR system is well matched by the output of the adaptive IIR filter. The mean square error (MSE) over the time samples is taken as the objective function:

\( J = \frac{1}{{N_{s} }}\sum\nolimits_{n = 1}^{{N_{s} }} {e^{2} (n)} \)  (3)

Additionally, the dB form of the MSE is given by

\( {\text{MSE(dB)}} = 10\log_{10} (J) \)  (4)

where \( N_{s} \) is the number of time samples; \( e(n) = d(n) - y(n) \) stands for the error signal; \( y(n) \) stands for the response of the adaptive IIR filter; \( d(n) = y_{0} (n) + v(n) \) stands for the overall response of the unknown IIR plant; \( y_{0} (n) \) denotes the output of the unknown IIR plant; and \( v(n) \) denotes additive white Gaussian noise. The task of the adaptive algorithm is to minimize the MSE by tuning the coefficient vector \( \varTheta \) of the transfer function \( H_{a} (z) \). Here,

\( \varTheta = (a_{1} , \ldots ,a_{L} ,b_{0} ,b_{1} , \ldots ,b_{K} )^{T} \)  (5)
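To make the objective concrete, the difference equation and the MSE criterion can be sketched in Python as follows; the plant order and coefficient values here are hypothetical illustrations, not taken from the paper's examples:

```python
import numpy as np

def iir_output(theta, L, K, x):
    """Evaluate the adaptive IIR filter's difference equation
    y(n) = -sum_{k=1}^{L} a_k y(n-k) + sum_{k=0}^{K} b_k x(n-k),
    where theta = (a_1, ..., a_L, b_0, ..., b_K) as in Eq. (5)."""
    a, b = theta[:L], theta[L:L + K + 1]
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(K + 1) if n - k >= 0)
        acc -= sum(a[k - 1] * y[n - k] for k in range(1, L + 1) if n - k >= 0)
        y[n] = acc
    return y

def mse(theta, L, K, x, d):
    """Mean square error J between desired output d(n) and filter output y(n)."""
    e = d - iir_output(theta, L, K, x)
    return np.mean(e ** 2)

# Toy check: if the filter coefficients match the plant exactly, the MSE is zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
theta_true = np.array([0.5, 1.0, -0.3])   # hypothetical a_1, b_0, b_1
d = iir_output(theta_true, 1, 1, x)
print(mse(theta_true, 1, 1, x, d))        # 0.0
```

Any candidate coefficient vector can be scored this way, which is exactly what the PSO variants below minimize.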
3 Particle swarm optimization-based algorithms
Particle swarm optimization (PSO) algorithm [11, 12, 19–22] is one of the most popular evolutionary algorithms, and it is a very simple but efficient optimization technique for solving global optimization problems. Due to its simplicity and convenience, the PSO has been applied in many real-world applications, such as odor source localization [23], aggregate production planning [24], switching instant identification [25], defense against SYN flooding attacks [26], and designing fuzzy logic controllers [27].
3.1 PSO algorithm
In order to better understand the working principle of PSO, a simple flowchart of PSO is provided as follows (Fig. 2).
The PSO takes advantage of previous velocity vectors and position vectors to generate current velocity vectors and position vectors for the particles in the population. In particular, each particle makes use of its own experience and the most successful particle's experience to adjust its position. For each individual, its velocity and position are updated as given by Eqs. (6) and (7), respectively:

\( v_{i,j}^{k + 1} = \omega v_{i,j}^{k} + c_{1} r_{1} (p_{i,j} - x_{i,j}^{k} ) + c_{2} r_{2} (g_{j} - x_{i,j}^{k} ) \)  (6)

\( x_{i,j}^{k + 1} = x_{i,j}^{k} + v_{i,j}^{k + 1} \)  (7)
Here, \( \omega \) represents inertia weight, and it decreases linearly during the iteration process. \( v_{i,j}^{k} \) and \( x_{i,j}^{k} \) are, respectively, the jth velocity variable and the jth position variable of particle i at generation k. \( v_{i,j}^{k + 1} \) and \( x_{i,j}^{k + 1} \) are, respectively, the jth velocity variable and the jth position variable of particle i at generation k + 1. \( p_{i,j} \) denotes the jth variable of the personal best position of particle i, while \( g_{j} \) signifies the jth variable of the global best position of the population. \( c_{1} \) is defined as cognitive factor, while \( c_{2} \) represents social factor. \( r_{1} \) and \( r_{2} \) denote the uniformly generated random numbers in the range [0, 1].
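The two updating rules can be sketched as follows; the demo parameters (c1 = c2 = 2.0 and an inertia weight decreasing linearly from 0.9 to 0.4) are common textbook choices, not values taken from this paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration per Eqs. (6)-(7): update velocities, then positions.
    x, v, pbest are (M, N) arrays; gbest is (N,); w is the inertia weight."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # uniform in [0, 1], drawn per particle/dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (6)
    return x + v_new, v_new                                        # Eq. (7)

# Toy usage: minimise f(x) = ||x||^2 with a linearly decreasing inertia weight.
rng = np.random.default_rng(1)
M, N, T = 20, 3, 200
x = rng.uniform(-10, 10, (M, N))
v = np.zeros((M, N))
pbest = x.copy()
f_p = (pbest ** 2).sum(axis=1)
gbest = pbest[f_p.argmin()].copy()
for t in range(T):
    w = 0.9 - (0.9 - 0.4) * t / T          # common linear decrease of omega
    x, v = pso_step(x, v, pbest, gbest, w, rng=rng)
    f_x = (x ** 2).sum(axis=1)
    better = f_x < f_p
    pbest[better], f_p[better] = x[better], f_x[better]
    gbest = pbest[f_p.argmin()].copy()
print((gbest ** 2).sum())  # typically far below the initial best value
```

Because the personal and global bests are only ever replaced by improvements, the best objective value is non-increasing over the iterations.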
3.2 Craziness-based particle swarm optimization algorithm
In [28–30], the authors modified Eq. (6) using several random numbers, and a "craziness velocity" is applied with a predefined probability before the position updating step. This PSO variant is known as craziness-based particle swarm optimization (CRPSO). The new velocity updating equation is given by

\( v_{i}^{k + 1} = r_{2} \,sign(r_{3} )\,v_{i}^{k} + (1 - r_{2} )c_{1} r_{1} (p_{i} - x_{i}^{k} ) + (1 - r_{2} )c_{2} (1 - r_{1} )(g - x_{i}^{k} ) \)  (8)

where \( r_{1} \), \( r_{2} \), and \( r_{3} \) are three different random numbers uniformly generated in the interval [0, 1], and \( sign(r_{3} ) \) is determined according to the following equation:

\( sign(r_{3} ) = \begin{cases} - 1 & r_{3} \le 0.05 \\ 1 & r_{3} > 0.05 \end{cases} \)  (9)

In addition, the CRPSO introduces a craziness operator, intended to maintain the diversity of the particles, which can be expressed as follows:

\( v_{i}^{k + 1} = v_{i}^{k + 1} + P(r_{4} )\,sign(r_{4} )\,v_{i}^{\text{craziness}} \)  (10)

where \( r_{4} \) is a random number generated uniformly from the interval [0, 1]; \( v_{i}^{\text{craziness}} \) is a random number generated uniformly from the interval \( [v_{i}^{\hbox{min} } ,v_{i}^{\hbox{max} } ] \); and \( P(r_{4} ) \) and \( sign(r_{4} ) \) are defined, respectively, as

\( P(r_{4} ) = \begin{cases} 1 & r_{4} \le P_{cr} \\ 0 & r_{4} > P_{cr} \end{cases} \)  (11)

\( sign(r_{4} ) = \begin{cases} - 1 & r_{4} \ge 0.5 \\ 1 & r_{4} < 0.5 \end{cases} \)  (12)

where \( P_{cr} \) is a predefined probability of craziness. In practice, \( v_{i}^{\text{craziness}} \) is set to a very small value (0.0001). The condition \( r_{4} \ge 0.5 \) (or \( < 0.5 \)) gives equal probability to either direction of reversal of \( v_{i}^{\text{craziness}} \).
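A minimal sketch of the craziness operator as commonly formulated in CRPSO [28–30]; the probability P_cr = 0.3 is a hypothetical value chosen only for illustration, and the fixed craziness velocity 0.0001 follows the setting quoted above:

```python
import numpy as np

def craziness(v, p_cr=0.3, rng=None):
    """Craziness perturbation sketch: with probability P_cr (per velocity
    component), add a small 'craziness velocity' whose sign depends on the
    same random draw r4. p_cr = 0.3 is a hypothetical illustration value."""
    rng = rng or np.random.default_rng()
    r4 = rng.random(v.shape)
    P = (r4 <= p_cr).astype(float)         # P(r4): apply craziness or not
    sign = np.where(r4 >= 0.5, -1.0, 1.0)  # direction reversal of v_craziness
    v_craziness = 0.0001                   # set very small, per Sect. 3.2
    return v + P * sign * v_craziness
```

The operator injects tiny random kicks into otherwise stagnating velocities, which is how CRPSO maintains population diversity.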
3.3 A global particle swarm optimization algorithm
In [31], the authors modified Eq. (6) by introducing a new inertia weight and adding small disturbance to the global optimal solution. This PSO variant is described as global particle swarm optimization (GPSO). The new velocity updating equation is given by:
Here, \( \omega (t) \) represents the inertia weight at generation t, and it is given by:
where \( b = \ln \left\{ {\omega_{\hbox{max} } /\omega_{\hbox{min} } } \right\}/(T^{2} - 1) \), and \( a = \omega_{\hbox{max} } \times \exp ( - b) \). Additionally, U(0,1) is a random number generated uniformly from the interval [0, 1]; \( \omega_{\hbox{max} } \) (\( \omega_{\hbox{min} } \)) represents the maximal (minimal) inertia weight; and \( \delta \) is termed the disturbance factor. By using the above velocity updating equation, the GPSO can strike a good balance between global search and local search.
3.4 A particle swarm optimization with time-varying acceleration coefficients
In [32, 33], the authors modified Eq. (6) by introducing two dynamic acceleration coefficients. Specifically, they adjust these coefficients so that the cognitive component is reduced and the social component is increased as the iterations proceed. This PSO variant is known as particle swarm optimization with time-varying acceleration coefficients (TVAC-PSO). The new velocity updating equation is stated as follows:

\( v_{i,j}^{k + 1} = \omega v_{i,j}^{k} + \left( {(C_{1f} - C_{1i} )\frac{iter}{{iter_{\hbox{max} } }} + C_{1i} } \right)r_{1} (p_{i,j} - x_{i,j}^{k} ) + \left( {(C_{2f} - C_{2i} )\frac{iter}{{iter_{\hbox{max} } }} + C_{2i} } \right)r_{2} (g_{j} - x_{i,j}^{k} ) \)

where parameters \( C_{1i} \) and \( C_{1f} \) (\( C_{2i} \) and \( C_{2f} \)) represent the initial and final values of the cognitive (social) acceleration coefficients, respectively. Moreover, the inertia weight \( \omega \) is given by:

\( \omega = \omega_{\hbox{max} } - (\omega_{\hbox{max} } - \omega_{\hbox{min} } )\frac{iter}{{iter_{\hbox{max} } }} \)

\( \omega_{\hbox{max} } \) and \( \omega_{\hbox{min} } \) denote the maximal and minimal inertia weights, respectively; \( iter \) and \( iter_{\hbox{max} } \) stand for the current iteration number and the maximum iteration number, respectively. In addition, the position updating equation is slightly modified as:

\( x_{i,j}^{k + 1} = x_{i,j}^{k} + C\,v_{i,j}^{k + 1} \)

Parameter C is the constriction factor and can be calculated by using the following equation:

\( C = \frac{2}{{\left| {2 - \varphi - \sqrt {\varphi^{2} - 4\varphi } } \right|}} \)
where \( \varphi \) is set to 4.1. By using the above updating equation, the TVAC-PSO can improve solution quality and avoid premature convergence.
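The time-varying schedule and the constriction factor can be sketched as follows; the boundary values c1: 2.5 → 0.5 and c2: 0.5 → 2.5 are common TVAC choices from the literature, not necessarily the exact settings of this paper:

```python
import math

def tvac_coefficients(it, it_max, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients: the cognitive term c1 decreases
    and the social term c2 increases linearly as iterations proceed."""
    frac = it / it_max
    c1 = (c1f - c1i) * frac + c1i
    c2 = (c2f - c2i) * frac + c2i
    return c1, c2

def constriction_factor(phi=4.1):
    """Clerc-style constriction factor C = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction_factor())  # ~ 0.7298 for phi = 4.1
```

Early iterations thus emphasize each particle's own history (exploration), while later iterations pull the swarm toward the global best (exploitation), with C damping the position updates.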
In addition to PSO, there are many other swarm intelligence-based algorithms that can be applied to IIR system identification problems, such as the harmony search algorithm [34, 35], differential evolution [36], genetic algorithm [37], cuckoo search [38–40], earthworm optimization algorithm (EWA) [41], elephant herding optimization (EHO) [42, 43], monarch butterfly optimization [44, 45], bat algorithm [46, 47], firefly algorithm [48], and krill herd algorithm [49–52]. Due to space limitations, we concentrate only on the application of several PSO algorithms to IIR system identification problems.
4 Improved particle swarm optimization
As already pointed out, this paper presents an improved particle swarm optimization algorithm (IPSO) for solving IIR system identification problems. In detail, IPSO and PSO differ in three aspects, which are described below:
4.1 Population initialization
The golden ratio is a fascinating mathematical constant that has found many application areas, such as the human heart [53], the n-body problem [54], computation of a face attractiveness index [55], and facial beauty [56]. Due to its simplicity and usefulness, the golden ratio is employed here to segment the solution space:
First, determine M values for the jth (j = 1, 2,…, N) dimension of all M particles, and the ith (i = 1, 2,…, M) value is calculated according to the following equation:
where \( U_{j} \) and \( L_{j} \) represent the upper and lower bounds, respectively, for the jth dimension. After calculating the M values, they are randomly assigned to the jth dimension of the M particles. Let \( x_{i,j} \) be the jth (j = 1, 2,…, N) dimension of the ith (i = 1, 2,…, M) particle; then it is determined by \( x_{i,j} = \alpha_{{i^{{\prime }} ,j}} \), where the index \( i^{{\prime }} \) is a random integer in the range [1, M]. By using this initialization method, a population \( P_{1} \) is produced.
Second, calculate M values for the jth (j = 1, 2,…, N) dimension of all M particles, and the ith (i = 1, 2,…, M) value is obtained according to the following equation:
After calculating the M values, they are randomly assigned to the jth dimension of the other M particles. Thus, \( x_{i,j} \) is determined by \( x_{i,j} = \beta_{{i^{\prime } ,j}} \). By using this initialization method, a population \( P_{2} \) is produced.
Third, select the M best particles from \( P_{1} \cup P_{2} \) to form the initial population P. In population P, any solution vector \( x_{i} = (x_{i,1} ,x_{i,2} , \ldots ,x_{i,N} ) \) (i = 1, 2,…, M) stands for a candidate setting of the parameters [as Eq. (5)] used for implementing the IIR system identification. The length of solution vector \( x_{i} \) equals N, which is exactly the number of parameters in the coefficient vector \( \varTheta \). By using this novel population initialization approach, the solution quality and the convergence of IPSO can be improved simultaneously.
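A sketch of the two-stage initialization is given below. The segmentation formula is an assumption: each dimension's range is split into M equal segments whose golden-section points (ratios 0.618 for P1 and 0.382 for P2) supply the candidate values; the paper's exact equations are not reproduced in this text, so this is illustrative only. The random per-dimension assignment and the selection of the M best from P1 ∪ P2 follow the description above.

```python
import numpy as np

PHI = 0.618  # golden ratio conjugate

def golden_init(M, N, lower, upper, fitness, rng=None):
    """Population initialisation sketch (assumed segmentation): take the
    golden-section point of each of M equal segments per dimension, randomly
    permute the values across particles, then keep the M best of P1 u P2."""
    rng = rng or np.random.default_rng()
    span = upper - lower
    pops = []
    for ratio in (PHI, 1.0 - PHI):                 # P1 uses 0.618, P2 uses 0.382
        vals = lower + (np.arange(M)[:, None] + ratio) / M * span  # (M, N) grid
        pop = np.empty((M, N))
        for j in range(N):                         # random assignment per dimension
            pop[:, j] = vals[rng.permutation(M), j]
        pops.append(pop)
    both = np.vstack(pops)                         # P1 u P2: 2M candidates
    f = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(f)[:M]]                 # M best form population P
```

The grid guarantees coverage of the whole range in every dimension, while the permutation keeps the particles themselves random.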
4.2 Global disturbance
In light of Eqs. (6) and (7), the global best particle is a promising candidate solution, and it drives rapid movement of all the particles toward its neighborhood. However, this tendency may cause the swarm to become trapped in local optima, so the particles lose many chances to search large regions of the candidate solution space. In order to help the particles escape from local optima more easily, a global disturbance strategy is used to update the global best particle in each generation, and it can be expressed as follows:
Here, \( g_{j} \) represents the jth dimension of the global best particle g, and the parameter \( \lambda \) is defined as the disturbance factor. We tried different values (i.e., 0.01, 0.02, 0.05, 0.1, 0.15, and 0.2) for \( \lambda \) but did not notice obvious differences in the experimental results; thus, it is set to 0.1 in this paper. In addition, N(0, 1) denotes the normal distribution with mean 0 and variance 1. It should be emphasized that the global best particle g is replaced with g′ only when g′ is superior to g. By using this updating strategy, IPSO's capacity to escape from local optima is further enhanced.
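A sketch of the greedy disturbance step follows. The multiplicative form g′_j = g_j + λ·N(0,1)·g_j is an assumed perturbation (the paper's exact equation is not reproduced in this text), while λ = 0.1 and the accept-only-if-better rule follow the discussion above:

```python
import numpy as np

def global_disturbance(g, fitness, lam=0.1, rng=None):
    """Disturb the global best particle g with Gaussian noise (assumed
    multiplicative form) and keep the disturbed copy only if it improves
    the objective; lam = 0.1 follows Sect. 4.2."""
    rng = rng or np.random.default_rng()
    g_new = g + lam * rng.standard_normal(g.shape) * g
    return g_new if fitness(g_new) < fitness(g) else g
```

Because the disturbed particle is accepted only on improvement, the step can never worsen the global best, so it adds exploration at zero risk to convergence.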
4.3 Randomly assigned inertia weights
The inertia weight \( \omega \) is responsible for iteratively adjusting the velocities of all particles, and different values of \( \omega \) have distinct effects on the scope of a particle's search. To be more specific, a large value of \( \omega \) contributes to the global search, while a small value is helpful for the local search. The traditional PSO algorithm has only one inertia weight value at each generation, which can hardly give consideration to both global search and local search. To overcome this shortcoming, M different inertia weights are generated in terms of the following equation:
Here, i (i = 1, 2,…, M) is the index of the particle, and \( \Delta \omega \) is equal to 1/M. After generating these M different inertia weights, they are randomly assigned to the M particles at each generation. Based on this method, the velocity updating equation can be stated as follows:
where \( p_{i,j} \) stands for the jth dimension of the personal best particle \( p_{i} \), and \( \omega_{{i^{{\prime }} }} \) denotes the inertia weight randomly selected from \( \omega_{i} \) (i = 1, 2,…, M) for particle i; each particle adopts a different inertia weight from the others. At each generation, the original PSO algorithm adopts only one inertia weight, whereas IPSO uses M values of inertia weight. Furthermore, as mentioned earlier, large values of \( \omega_{{i^{{\prime }} }} \) are beneficial to global search, and small values of \( \omega_{{i^{{\prime }} }} \) are useful for carrying out local search. Clearly, the IPSO algorithm is able to keep a better balance between global search and local search than the PSO algorithm.
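The weight-assignment scheme can be sketched as follows, assuming \( \omega_{i} = i \cdot \Delta \omega \) with \( \Delta \omega = 1/M \), which matches the stated spacing but is otherwise an assumption:

```python
import numpy as np

def assign_inertia_weights(M, rng=None):
    """Generate M distinct inertia weights and shuffle them so that each
    particle receives a different one at each generation. Assumption:
    omega_i = i * (1/M), spacing the weights uniformly over (0, 1]."""
    rng = rng or np.random.default_rng()
    weights = np.arange(1, M + 1) / M   # omega_i = i * delta_omega
    return rng.permutation(weights)      # random assignment to the M particles
```

At every generation the swarm thus always contains both large-weight (global-searching) and small-weight (local-searching) particles, rather than a single shared weight.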
In addition to describing these three improvements, we present the detailed steps of the IPSO algorithm in Table 1.
In computer science, the traditional PSO algorithm is a swarm intelligence-based approach that optimizes a problem by iteratively trying to improve candidate solutions. As a variant of PSO, IPSO tackles the infinite impulse response system identification problem by iteratively updating a population of candidate solutions, namely particles, and disturbing them in the solution space according to the mathematical model of the adaptive IIR filter. More specifically, the mean square error is the criterion for gaining confidence in, or rejecting, the postulated mathematical model based on the final parameters optimized by the IPSO algorithm. If the mean square error reaches a very small value, the outputs of the adaptive IIR filter will be very close to the outputs of the actual system, so it can be safely claimed that the postulated model is reasonable and accurate. On the other hand, if the mean square error remains large, the outputs of the adaptive IIR filter will be far from those of the actual system, indicating that the postulated model is unreasonable and inaccurate. This paper utilizes the IPSO algorithm to optimize the parameters of the adaptive IIR filter in order to validate the postulated mathematical model.
5 Experimental results and analysis
This section investigates the improvement made by the proposed population initialization, global disturbance, and dynamic inertia weights in solving the IIR system identification problems. For this purpose, we compared the performance of the following five PSO algorithms: the original PSO [11, 12], the CRPSO [28–30], the GPSO [31], the TVAC-PSO [32, 33], and our proposed IPSO. Twelve problems are recorded in Table 2. To clarify how these problems differ for the purpose of algorithm validation, we describe them as follows: Example I (Case 1) and Example I (Case 2) share the same unknown plant, which is inherently a fifth-order low-pass Butterworth filter, but they employ different adaptive filters to identify this plant. To be more specific, the first adaptive filter has the same order as the unknown plant, whereas the order of the second adaptive filter is four, slightly lower than that of the unknown plant. Obviously, the first filter is more accurate than the second, but it introduces more parameters into the IIR system identification and hence increases the computational cost. However, the order of the adaptive filter should not be far from that of the unknown plant; otherwise, it may lead to a serious mean square error. The same observations apply to the other cases. In addition, the problem parameters are set as follows: the number of time samples \( N_{s} = 100 \); the variables' upper bound \( x_{j}^{U} = 1000 \) and lower bound \( x_{j}^{L} = -1000 \); the input common to both the unknown plant and the identifying IIR filter is a randomly generated Gaussian white noise signal with mean 0 and standard deviation 1; and the additive noise is Gaussian white noise with mean 0 and standard deviation \( 10^{-3} \).
The algorithm parameters are displayed in Table 3. Here, \( P_{S} \) represents the population size, and \( N_{G} \) represents the number of generations. MATLAB 7.0 was used to execute the above design steps on an Intel(R) Core(TM) i5-2410M CPU @ 2.30 GHz. Thirty independent runs were performed for each problem, and the optimization results are listed in Table 4.
Table 4 compares the five PSO approaches on the IIR system identification problems with different feedback and feedforward filter orders. Here, the term "AET" stands for average execution time and "Std" for standard deviation; the best results are highlighted in bold. According to Table 4, the IPSO algorithm outperforms the other four PSO approaches on most IIR system identification problems. In particular, for Example I (Case 1) and Example II (Case 1), IPSO obtains the best results on all five criteria: "Best," "Worst," "Mean," "Median," and "Std." For Example III (Case 1), Example III (Case 2), Example IV (Case 1), Example IV (Case 2), Example VI, Example VII, and Example VIII, IPSO finds the global optimal solutions in each of the 30 runs. This indicates that the new global disturbance strategy can offer much more accurate solutions for IIR system identification problems, mainly owing to its strong exploitation ability. With regard to Example V, IPSO finds the second-best solution, slightly worse than that of CRPSO but better than those attained by the other three PSO approaches; moreover, IPSO obtains the best results on the other four criteria, namely "Worst," "Mean," "Median," and "Std." For Example I (Case 2), although the best solution obtained by IPSO is not as good as that of TVAC-PSO, the solutions it generates are all acceptable. For Example II (Case 2), three PSO approaches, viz., CRPSO, TVAC-PSO, and IPSO, find the best solution, which is slightly better than those of the other two approaches. A careful observation of Table 4 reveals that IPSO has the best convergence among the five approaches, since it attains the smallest "Mean" values for all 12 problems. Furthermore, it attains the smallest "Std" values for ten of the 12 problems, which signifies its high reliability.
The additional steps lead to increased computing time for IPSO: according to the "AET" column, its average execution time is longer than those of the other four PSO approaches on six problems. Nevertheless, there is no significant difference among the average execution times of the five PSO approaches. By and large, the comparison among the five PSOs is fair and acceptable.
To learn more about the evolution processes of different PSO approaches, Fig. 3 plots the average optimization curves of five PSO approaches in optimizing 12 IIR system identification problems.
As can be seen from Fig. 3, the IPSO algorithm generates better solutions for most IIR system identification problems in the initial stage of evolution. This indicates that the novel population initialization strategy can provide high-quality solutions for IIR system identification problems. In addition, the curves of IPSO descend much faster than those of the other four PSO approaches, e.g., for Example I (Case 1), Example I (Case 2), Example II (Case 1), Example IV (Case 1), and Example V, suggesting better search speed. Moreover, IPSO converges to the lowest levels for all 12 problems, demonstrating the best search ability among the five PSO approaches. For Example III (Case 2) and Example VII, PSO converges to the same levels as IPSO. For Example III (Case 2), Example IV (Case 2), and Example VII, CRPSO converges to the same levels as IPSO. For Example II (Case 2), Example III (Case 2), and Example VII, GPSO achieves the same levels as IPSO. For Example III (Case 1), Example III (Case 2), Example IV (Case 2), Example VI, and Example VII, TVAC-PSO achieves the same levels as IPSO. From the above observations and analysis, the IPSO algorithm clearly demonstrates a faster convergence rate and stronger stability than the other four PSO approaches in solving various IIR system identification problems.
The identification results are visualized in order to verify the identification performance of the IIR model based on IPSO, and Fig. 4 compares the actual plant outputs and the IIR model outputs for the 12 IIR system identification problems.
It is clear from Fig. 4 that the IIR model outputs are very close to the actual plant outputs in most cases, which indicates the high accuracy of IPSO. For Example II (Case 2), Example IV (Case 2), and Example VII, there are minor errors between the actual plant outputs and the IIR model outputs. For Example III (Case 2), the rough shape of the IIR model outputs is identical to that of the actual plant outputs, but certain errors exist between the two kinds of outputs. In fact, these four problems are all based on reduced-order IIR filter models; although this kind of approximation reduces the number of problem parameters and simplifies computation, it may result in non-negligible errors. To summarize, IPSO shows desirable performance for most IIR system identification problems. Combined with a suitable IIR model, it will play an important role in IIR system identification.
IIR system identification is the process of identifying the mathematical model of an infinite impulse response system based on measurements of the system inputs and outputs. In this paper, four main steps are carried out for IIR system identification: (1) gather the outputs \( d(n) \) of the unknown system; (2) calculate the outputs \( y(n) \) of the adaptive IIR filter according to the postulated model; (3) use the IPSO algorithm to adjust the parameters of the adaptive IIR filter \( \varTheta = (a_{1} , \ldots ,a_{L} ,b_{0} ,b_{1} , \ldots ,b_{K} )^{T} \); and (4) validate the model. Step (1) is a necessary and basic part of the IIR system identification process, and the outputs \( d(n) \) comprise the useful signals and noise. Step (2) calculates the outputs \( y(n) \) based on the current parameters of the adaptive IIR filter; the more accurate these parameters are, the smaller the mean square error (MSE) is. Step (3) aims to find the smallest MSE and thereby the most suitable parameters of the adaptive IIR filter. After a number of generations, these first three steps stop, and the best results are given. Finally, the goal of step (4) is to gain confidence in, or reject, the postulated mathematical model based on the final parameters optimized by the IPSO algorithm. Concretely, if there is an adequate correspondence between the outputs of the postulated model and the actual system outputs, the postulated model is verified, and vice versa.
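The four steps can be tied together in a compact sketch. A hypothetical first-order plant is used, and a crude random search stands in for IPSO purely to keep the example short; the noise level 10^-3 and the white-noise input follow the experimental setup described above:

```python
import numpy as np

rng = np.random.default_rng(42)
N_s = 100

# Step (1): gather the outputs d(n) of a (hypothetical) first-order plant
# H_u(z) = b0 / (1 + a1 z^-1), driven by Gaussian white noise, plus additive noise.
a1_true, b0_true = -0.5, 1.0
x = rng.standard_normal(N_s)
y0 = np.zeros(N_s)
for n in range(N_s):
    y0[n] = b0_true * x[n] - a1_true * (y0[n - 1] if n > 0 else 0.0)
d = y0 + 1e-3 * rng.standard_normal(N_s)

# Step (2): outputs y(n) of the adaptive filter for a candidate theta = (a1, b0).
def filter_out(theta):
    a1, b0 = theta
    y = np.zeros(N_s)
    for n in range(N_s):
        y[n] = b0 * x[n] - a1 * (y[n - 1] if n > 0 else 0.0)
    return y

mse = lambda theta: np.mean((d - filter_out(theta)) ** 2)

# Step (3): adjust theta to minimise the MSE. A greedy random search stands in
# for IPSO here; only improving candidates are accepted.
best = rng.uniform(-1, 1, 2)
for _ in range(2000):
    cand = best + 0.05 * rng.standard_normal(2)
    if mse(cand) < mse(best):
        best = cand

# Step (4): model validation -- a small final MSE supports the postulated model.
print(best, mse(best))
```

With the stand-in optimizer replaced by IPSO, this is exactly the workflow evaluated on the 12 examples of Sect. 5.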
In addition to the ABC, CSO, and PSO algorithms, our future work will concentrate on the application of the krill herd algorithm [60–64] and other metaheuristic algorithms [65–70] to IIR system identification; these have proved effective and efficient in solving other complex optimization problems.
6 Conclusions
Identification of an actual unknown IIR plant based on an IIR model design is often a complex process. This research presents an effective alternative approach, namely IPSO, for minimizing the mean square error (MSE) associated with the IIR system identification problem. Specifically, IPSO is utilized to determine suitable coefficients of the IIR model such that the objective function MSE is minimized. IPSO improves PSO in three aspects: first, it segments the solution space using the golden ratio at the beginning of the evolutionary process; next, it dynamically adjusts inertia weights according to a random assignment strategy; finally, it introduces a global disturbance step based on the normal distribution. All three improvements are easily implemented; the experiments were run in MATLAB 7.0 on an Intel(R) Core(TM) i5-2410M CPU @ 2.30 GHz. Five PSO approaches, including PSO, CRPSO, GPSO, TVAC-PSO, and IPSO, are used to solve the identification problems of 12 unknown IIR systems. From the simulations, it is observed that IPSO converges very rapidly and achieves the lowest levels in all cases. In light of this, we can infer that IPSO is superior to the other four PSO approaches for the IIR system identification problem.
Most cases considered in this paper are small scale, and thus, our future work will focus on the IIR system identification problems with larger scale. In addition, we will try to use some other potential heuristic algorithms to solve these large-scale problems.
References
Krusienski DJ, Jenkins WK (2003) Adaptive filtering via particle swarm optimization. In: Proceedings of the 37th Asilomar conference on signals, systems and computers, vol 1, pp 571–575. doi:10.1109/ACSSC.2003.1291975
Krusienski DJ, Jenkins WK (2004) Particle swarm optimization for adaptive IIR filter structure. IEEE Congr Evolut Comput CEC 1:965–970. doi:10.1109/CEC.2004.1330966
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471. doi:10.1007/s10898-007-9149-x
Karaboga N (2009) A new design method based on artificial bee colony algorithm for digital IIR filters. J Frankl Inst 346(4):328–348. doi:10.1016/j.jfranklin.2008.11.003
Luitel B, Venayagamoorthy GK (2010) Particle swarm optimization with quantum infusion for system identification. Eng Appl Artif Intell 23(5):635–649. doi:10.1016/j.engappai.2010.01.022
Luitel B, Venayagamoorthy GK (2009) A PSO with quantum infusion algorithm for training simultaneous recurrent neural networks. In: IEEE-INNS international joint conference on neural networks (IJCNN), pp 1923–1930. doi:10.1109/IJCNN.2009.5179082
Dai CH, Chen WR, Zhu YF (2010) Seeker optimization algorithm for digital IIR filter design. IEEE Trans Ind Electron 57(5):1710–1718. doi:10.1109/TIE.2009.2031194
Panda G, Pradhan PM, Majhi B (2011) IIR system identification using cat swarm optimization. Expert Syst Appl 38(10):12671–12683. doi:10.1016/j.eswa.2011.04.054
Chu SC, Tsai PW (2007) Computational intelligence based on the behavior of cats. Int J Innov Comput Inf Control 3(1):163–173
Upadhyay P, Kar R, Mandal D, Ghoshal SP (2014) Craziness based particle swarm optimization algorithm for IIR system identification problem. AEU-Int J Electron Commun 68(5):369–378. doi:10.1016/j.aeue.2013.10.003
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, pp 1942–1948. doi:10.1109/ICNN.1995.488968
Shi YH, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation, pp 1945–1950. doi:10.1109/CEC.1999.785511
White SA (1975) An adaptive recursive digital filter. In: Proceedings of the 9th asilomar conference: circuits, systems, computers, pp 21–25
Shynk JJ (1989) Adaptive IIR filtering. IEEE Trans Acoust Speech Signal Process 6(2):4–21. doi:10.1109/53.29644
Ng S, Leung S, Chung C, Luk A, Lau W (1996) The genetic search approach: a new learning algorithm for adaptive IIR filtering. IEEE Trans Signal Process 13(6):38–46. doi:10.1109/79.543974
Abe M, Kawamata M (1998) Evolutionary digital filtering for IIR adaptive digital filters based on the cloning and mating reproduction. IEICE Trans Fundam Electron Commun Computer Sci E81-A(3):398–406
Kalinli A, Karaboga N (2005) Artificial immune algorithm for IIR filter design. Eng Appl Artif Intell 18(8):919–929. doi:10.1016/j.engappai.2005.03.009
Proakis JG, Manolakis DG (2007) Digital signal processing: principles, algorithms and applications, 4th edn. Pearson Education, New Jersey
Zhao JJ, Ji GH, Xia Y, Zhang XL (2015) Cavitary nodule segmentation in computed tomography images based on self-generating neural networks and particle swarm optimisation. Int J Bio-Inspired Comput 7(1):62–67. doi:10.1504/IJBIC.2015.067999
Wang Z, Qin L, Yang W (2015) A self-organising cooperative hunting by robotic swarm based on particle swarm optimisation localisation. Int J Bio-Inspired Comput 7(1):68–73. doi:10.1504/IJBIC.2015.068001
Grillo H, Peidro D, Alemany M, Mula J (2015) Application of particle swarm optimisation with backward calculation to solve a fuzzy multi–objective supply chain master planning model. Int J Bio-Inspired Comput 7(3):157–169. doi:10.1504/IJBIC.2015.069557
Wang GG, Gandomi AH, Yang XS, Alavi AH (2014) A novel improved accelerated particle swarm optimization algorithm for global numerical optimization. Eng Comput 31(7):1198–1220. doi:10.1108/EC-10-2012-0232
Lu Q, Han QL, Liu SR (2014) A finite-time particle swarm optimization algorithm for odor source localization. Inf Sci 277:111–140. doi:10.1016/j.ins.2014.02.010
Wang SC, Yeh MF (2014) A modified particle swarm optimization for aggregate production planning. Expert Syst Appl 41(6):3069–3077. doi:10.1016/j.eswa.2013.10.038
Boubaker S, Djemai M, Manamanni N, M’Sahli F (2014) Active modes and switching instants identification for linear switched systems based on discrete particle swarm optimization. Appl Soft Comput 14:482–488. doi:10.1016/j.asoc.2013.09.009
Jamali S, Shaker V (2014) Defense against SYN flooding attacks: a particle swarm optimization approach. Comput Electr Eng 40(6):2013–2025. doi:10.1016/j.compeleceng.2014.05.012
Siano P, Citro C (2014) Designing fuzzy logic controllers for DC-DC converters using multi-objective particle swarm optimization. Electr Power Syst Res 112:74–83. doi:10.1016/j.epsr.2014.03.010
Mandal S, Ghoshal SP, Kar R, Mandal D (2012) Design of optimal linear phase FIR highpass filter using craziness based particle swarm optimization technique. J King Saud Univ-Comp Inf Sci 24:83–92. doi:10.1016/j.jksuci.2011.10.007
Mandal S, Ghoshal SP, Kar R, Mandal D (2011) Optimal linear phase FIR band pass filter design using craziness based particle swarm optimization algorithm. J Shanghai Jiaotong Univ (Science) 16(6):696–703. doi:10.1007/s12204-011-1213-5
Mandal D, Ghoshal SP, Bhattacharjee AK (2010) Radiation pattern optimization for concentric circular antenna array with central element feeding using craziness based particle swarm optimization. Int J RF Microw Comput Aided Eng 20(5):577–586. doi:10.1002/mmce.20467
Gao LQ, Li RP, Zou DX (2011) A global particle swarm optimization algorithm. J Northeastern Univ (Natural Science) 32(11):1538–1541
Mohammadi-Ivatloo B, Moradi-Dalvand M, Rabiee A (2013) Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr Power Syst Res 95:9–18. doi:10.1016/j.epsr.2012.08.005
Chaturvedi KT, Pandit M, Srivastava L (2009) Particle swarm optimization with time varying acceleration coefficients for non-convex economic power dispatch. Electr Power Energy Syst 31(6):249–257. doi:10.1016/j.ijepes.2009.01.010
Amaya I, Correa R (2015) Finding resonant frequencies of microwave cavities through a modified harmony search algorithm. Int J Bio-Inspired Comput 7(5):285–295. doi:10.1504/IJBIC.2015.072258
Bilbao MN, Ser JD, Salcedo-Sanz S, Casanova-Mateo C (2015) On the application of multi-objective harmony search heuristics to the predictive deployment of firefighting aircrafts: a realistic case study. Int J Bio-Inspired Comput 7(5):270–284. doi:10.1504/IJBIC.2015.072257
Coletta LF, Hruschka ER, Acharya A, Ghosh J (2015) A differential evolution algorithm to optimise the combination of classifier and cluster ensembles. Int J Bio-Inspired Comput 7(2):111–124. doi:10.1504/IJBIC.2015.069288
Amirjanov A, Sobolev K (2015) Changing range genetic algorithm for multimodal function optimisation. Int J Bio-Inspired Comput 7(4):209–221. doi:10.1504/IJBIC.2015.071075
Wang GG, Deb S, Gandomi AH, Zhang ZJ, Alavi AH (2015) Chaotic cuckoo search. Soft Comput. doi:10.1007/s00500-015-1726-1
Yang XS, Deb S, Karamangolu M, He XS (2012) Cuckoo search for business optimization applications. In: Proceedings of NCCCS2012, IEEE, pp 1–5. doi:10.1109/NCCCS.2012.6412973
Wang G-G, Gandomi AH, Zhao X, Chu HE (2016) Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput 20(1):273–285. doi:10.1007/s00500-014-1502-7
Wang GG, Deb S, Coelho LdS (2015) Earthworm optimization algorithm: a bio-inspired metaheuristic algorithm for global optimization problems. Int J Bio-Inspired Comput (in press)
Wang GG, Deb S, Gao X-Z, Coelho LdS (2016) A new metaheuristic optimization algorithm motivated by elephant herding behavior. Int J Bio-Inspired Comput (in press)
Wang GG, Deb S, Coelho LdS (2015) Elephant herding optimization. Paper presented at the 2015 3rd international symposium on computational and business intelligence (ISCBI 2015), Bali, Indonesia, December 7–9
Feng YH, Wang GG, Deb S, Lu M, Zhao XJ (2015) Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput Appl. doi:10.1007/s00521-015-2135-1
Wang GG, Zhao X, Deb S (2015) A novel monarch butterfly optimization with greedy strategy and self-adaptive crossover operator. Paper presented at the 2015 2nd intelligence conference on soft computing & machine intelligence (ISCMI 2015), Hong Kong, 23–24 Nov 2015
Yang XS, Deb S, Fong S (2014) Bat algorithm is better than intermittent search strategy. J Multi-Valued Logic Soft Comput 22(3):223–237
Xue F, Cai Y, Cao Y, Cui Z, Li F (2015) Optimal parameter settings for bat algorithm. Int J Bio-Inspired Comput 7(2):125–128. doi:10.1504/ijbic.2015.069304
Wang GG, Chu HCE, Mirjalili S (2016) Three-dimensional path planning for UCAV using an improved bat algorithm. Aerosp Sci Technol 49:231–238. doi:10.1016/j.ast.2015.11.040
Wang GG, Gandomi AH, Alavi AH (2014) An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl Math Model 38(9–10):2454–2462. doi:10.1016/j.apm.2013.10.052
Wang GG, Deb S, Gandomi AH, Alavi AH (2016) Opposition-based krill herd algorithm with Cauchy mutation and position clamping. Neurocomputing 177:147–157. doi:10.1016/j.neucom.2015.11.018
Wang GG, Gandomi AH, Alavi AH, Hao G-S (2014) Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput Appl 25(2):297–308. doi:10.1007/s00521-013-1485-9
Wang GG, Gandomi AH, Alavi AH (2013) A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes 42(6):962–978. doi:10.1108/K-11-2012-0108
Henein MY, Zhao Y, Nicoll R, Sun L, Khir AW, Franklin K, Lindqvist P, Golden Ratio Collaborators (2011) The human heart: application of the golden ratio and angle. Int J Cardiol 150(3):239–242. doi:10.1016/j.ijcard.2011.05.094
Xie ZF (2011) The golden ratio and super central configurations of the n-body problem. J Differ Equ 251(1):58–72. doi:10.1016/j.jde.2011.03.002
Schmid K, Marx D, Samal A (2008) Computation of a face attractiveness index based on neoclassical canons, symmetry, and golden ratios. Pattern Recogn 41(8):2710–2717. doi:10.1016/j.patcog.2007.11.022
Pallett PM, Link S, Lee K (2010) New “golden” ratios for facial beauty. Vis Res 50(2):149–154. doi:10.1016/j.visres.2009.11.003
Majhi B, Panda G, Choubey A (2008) Efficient scheme of pole-zero system identification using particle swarm optimization technique. In: IEEE congress on evolutionary computation, pp 446–451. doi:10.1109/CEC.2008.4630836
Durmus B, Gun A (2011) Parameter identification using particle swarm optimization. In: 6th International advanced technologies symposium, pp 188–192
Yu X, Liu J, Li H (2009) An adaptive inertia weight particle swarm optimization algorithm for IIR digital filter. IEEE Int Conf Artif Comput Intell 1:114–118. doi:10.1109/AICI.2009.28
Wang GG, Guo LH, Wang HQ, Duan H, Liu L, Li J (2014) Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput Appl 24(3):853–871. doi:10.1007/s00521-012-1304-8
Wang GG, Guo LH, Gandomi AH, Hao GS, Wang HQ (2014) Chaotic krill herd algorithm. Inf Sci 274:17–34. doi:10.1016/j.ins.2014.02.123
Wang GG, Gandomi AH, Alavi AH (2014) Stud krill herd algorithm. Neurocomputing 128:363–370. doi:10.1016/j.neucom.2013.08.031
Guo LH, Wang GG, Gandomi AH, Alavi AH, Duan H (2014) A new improved krill herd algorithm for global numerical optimization. Neurocomputing 138:392–402. doi:10.1016/j.neucom.2014.01.023
Wang GG, Gandomi AH, Yang XS, Alavi AH (2014) A new hybrid method based on krill herd and cuckoo search for global optimization tasks. Int J Bio-Inspired Comput (in press)
Yang XS, Deb S, Fong S (2014) Metaheuristic algorithms: optimal balance of intensification & diversification. Appl Math Inf Sci 8(3):977–983. doi:10.12785/amis/080306
Yang XS, Deb S, Hanne T, He X (2015) Attraction and diffusion in nature-inspired optimization algorithms. Neural Comput Appl. doi:10.1007/s00521-015-1925-9
Cuevas E, González A, Zaldívar D, Pérez-Cisneros M (2015) An optimisation algorithm based on the behaviour of locust swarms. Int J Bio-Inspired Comput 7(6):402–407. doi:10.1504/IJBIC.2015.073178
Guo L, Wang G-G, Wang H, Wang D (2013) An effective hybrid firefly algorithm with harmony search for global numerical optimization. Sci World J 2013, Article ID 125625, 9 pages. doi:10.1155/2013/125625
Duan H, Zhao W, Wang G-G, Feng X (2012) Test-sheet composition using analytic hierarchy process and hybrid metaheuristic algorithm TS/BBO. Math Probl Eng 2012, Article ID 712752, 22 pages. doi:10.1155/2012/712752
Wang G-G, Guo L, Duan H, Liu L, Wang H, Wang J (2012) A hybrid meta-heuristic DE/CS algorithm for UCAV path planning. J Inf Comput Sci 9(16):4811–4818
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Nos. 61403174, 61503165), Jiangsu Province Science Foundation for Youths (No. BK20150239).
Cite this article
Zou, DX., Deb, S. & Wang, GG. Solving IIR system identification by a variant of particle swarm optimization. Neural Comput & Applic 30, 685–698 (2018). https://doi.org/10.1007/s00521-016-2338-0