1 Introduction

As an interesting complex nonlinear phenomenon, chaos is characterized by bounded, unstable dynamic behavior that exhibits sensitive dependence on initial state values. In scientific and engineering research, many nonlinear systems exhibit chaos. Control and synchronization of chaotic systems have been investigated intensively in a variety of fields in recent years [1–4]. Many methods have been developed to control and synchronize chaotic systems, under the condition that the parameters of the chaotic system are known in advance. Therefore, the parameter estimation of chaotic systems is important, and many approaches have been proposed for it [5–10].

Recently, by formulating the parameter estimation of chaotic systems as a multi-dimensional optimization problem, many intelligent optimization schemes have been proposed for this estimation problem, including particle swarm optimization (PSO) [11], the differential evolution (DE) algorithm [12], and the biogeography-based optimization (BBO) algorithm [13, 14]. Among them, DE has few control parameters and strong global exploration ability, but it is weak in exploitation and prone to being trapped in local optima [15]. As a popular evolutionary optimization technique, PSO exhibits significant performance in the initial period of evolution, but it may have difficulty reaching optimum solutions efficiently, so the accuracy it can achieve is limited [16]. In BBO, poor solutions accept many new features from good ones, which helps to improve the quality of those solutions.

Seeker optimization algorithm (SOA) [17] is a newly proposed method based on the searching behaviors of human intelligence. SOA has demonstrated good performance when compared with other evolutionary algorithms (EAs) [18–20]. However, as mentioned by Dai, the author of SOA, this algorithm can be further improved by increasing the diversity of the distributed population [21]. In order to enhance the exploration and exploitation abilities, an oppositional seeker optimization algorithm (OSOA) is devised by combining SOA with opposition-based learning (OBL) [22]. In particular, the diversity of the initial population is increased via the uniformly distributed population-generating mechanism in OBL. Moreover, new populations during the evolutionary process are also generated with the OBL method. By simultaneously considering an estimate and its corresponding opposite estimate, OBL provides a higher chance of finding solutions closer to the global optimum. The OSOA is then employed to estimate the parameters of the chaotic system. On the other hand, although parameter estimation of chaotic systems has been extensively studied, few studies [13, 14, 23] have addressed noise in the systems. Noise affects the output of the chaotic system and biases the estimation of the original parameters. Based on this consideration, the impact of additional noise on the parameter-estimation performance of the proposed scheme is analyzed in this study. Simulations on two typical chaotic systems show that the proposed scheme provides a promising candidate for parameter estimation of chaotic systems.

The rest of this paper is organized as follows. In Sect. 2, the problem of parameter estimation for chaotic systems is formulated from the viewpoint of optimization. In Sect. 3, the OSOA is proposed after brief introductions to the SOA and OBL methods. Numerical simulations and conclusions are given in Sects. 4 and 5, respectively.

2 Problem formulation

Consider the following n-dimensional chaotic system:

$$ \dot{\boldsymbol {X}} = F(\boldsymbol {X},\boldsymbol {X}_{0}, \boldsymbol{\theta} ), $$
(1)

where \(\boldsymbol{X} = (x_{1},x_{2},\ldots,x_{n})^{T} \in \mathbb{R}^{n}\) denotes the state vector and X 0 denotes the initial state. \(\boldsymbol{\theta} = (\theta_{1},\theta_{2},\ldots,\theta_{D})^{T}\) is the set of original parameters. If the structure of system (1) is known, then the estimated system can be written as

$$ \dot{\boldsymbol {Y}} = F(\boldsymbol {Y},\boldsymbol {X}_{0},\hat{ \boldsymbol{\theta}} ), $$
(2)

where \(\boldsymbol{Y} = (y_{1},y_{2},\ldots,y_{n})^{T} \in \mathbb{R}^{n}\) is the state vector of the estimated system, and \(\hat{\boldsymbol{\theta}} = (\hat{\theta}_{1},\hat{\theta}_{2}, \ldots,\hat{\theta}_{D})^{T}\) is the set of estimated parameters. To cope with the problem of parameter estimation, the following objective function is defined:

$$ \min J = \frac{1}{M}\sum_{k = 1}^{M} \Vert \boldsymbol {X}_{k} - \boldsymbol {Y}_{k} + \boldsymbol{ \zeta}_{k} \Vert ^{2}, $$
(3)

where k=1,2,…,M is the sampling time point and M denotes the length of data used for parameter estimation; ζ k denotes the additional noise added to the system; X k and Y k denote the state vector of the original and the estimated system at time k, respectively.

Clearly, parameter estimation for a chaotic system is a multi-dimensional continuous optimization problem, in which the decision vector is \(\hat{\boldsymbol{\theta}}\) and the goal is to minimize J. The principle of parameter estimation for a chaotic system from the viewpoint of optimization is illustrated in Fig. 1.
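As an illustration, the objective of Eq. (3) in the noise-free case (ζ k =0) can be sketched in a few lines of pure Python; the function name and data layout below are our assumptions, not the paper's code:

```python
def objective(x_traj, y_traj):
    """Mean squared state error J of Eq. (3), noise-free case.

    x_traj, y_traj: sequences of state vectors (tuples or lists)
    sampled at k = 1, ..., M from the original and estimated systems.
    """
    M = len(x_traj)
    total = 0.0
    for xk, yk in zip(x_traj, y_traj):
        # squared Euclidean norm of the state error at time k
        total += sum((xi - yi) ** 2 for xi, yi in zip(xk, yk))
    return total / M
```

Each candidate \(\hat{\boldsymbol{\theta}}\) would then be scored by simulating system (2) and passing the two sampled trajectories to this function.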

Fig. 1
figure 1

The principle of parameter estimation for chaotic system

Obviously, it is difficult to estimate the parameters of a chaotic system using traditional optimization methods: one reason is that the chaotic system exhibits dynamic instability, and another is that the objective function has multiple local optima. In this paper, an effective oppositional seeker optimization algorithm is designed to find the parameters of system (1). In addition, two indices, formulated in Eqs. (4) and (5), respectively, are introduced to better evaluate the performance of the proposed scheme:

$$\begin{aligned} & R_{\mathrm{NSR}} = \Biggl[ \sum_{k = 1}^{M} \Vert \boldsymbol{\zeta}_{k} \Vert ^{2} \Big/ \sum _{k = 1}^{M} \Vert \boldsymbol {X}_{k} \Vert ^{2} \Biggr]^{1/2} \times100\ \%, \end{aligned}$$
(4)
$$\begin{aligned} & F_{\mathrm{EEF}} = \biggl\Vert \frac{\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}}{ \boldsymbol{\theta}} \biggr\Vert _{2} \times100\ \%, \end{aligned}$$
(5)

where R NSR is the noise-to-signal ratio, defined as the ratio of the noise intensity to the signal intensity of the noise-free system, and F EEF is the error evaluator factor, employed to measure the accuracy of the estimated results.
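Both indices follow directly from their definitions; a minimal Python sketch (illustrative names, plain sequences instead of vectors) is:

```python
import math

def r_nsr(noise, signal):
    """Noise-to-signal ratio of Eq. (4), in percent.

    noise, signal: sequences of the vectors zeta_k and X_k, k = 1..M.
    """
    num = sum(sum(z ** 2 for z in zk) for zk in noise)
    den = sum(sum(x ** 2 for x in xk) for xk in signal)
    return math.sqrt(num / den) * 100.0

def f_eef(theta_hat, theta):
    """Error evaluator factor of Eq. (5), in percent:
    2-norm of the element-wise relative parameter errors."""
    return math.sqrt(sum(((th - t) / t) ** 2
                         for th, t in zip(theta_hat, theta))) * 100.0
```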

3 Oppositional seeker optimization

An overview of the original seeker optimization algorithm (SOA) is presented in Sect. 3.1, and a brief introduction to the opposition-based learning scheme is given in Sect. 3.2. The proposed oppositional seeker optimization algorithm is presented in Sect. 3.3.

3.1 Seeker optimization algorithm

SOA [17, 24] is a relatively new population-based heuristic search algorithm. It simulates the act of human searching, in which a seeker population looks for an optimal solution. The whole population is randomly divided into three subpopulations, which search over different domains of the search space. All seekers in the same subpopulation constitute a neighborhood, which represents the social component for social sharing of information.

(1) Implementation of SOA. In SOA, for each seeker i, the position update on each variable j is given by the following equation:

$$ x_{ij}(t + 1) = x_{ij}(t) + \alpha_{ij}(t) \times d_{ij}(t), $$
(6)

where x ij (t+1) and x ij (t) are the positions of seeker i on the variable j at time steps t+1 and t, respectively; d ij (t) and α ij (t) are search direction and step length of seeker i on the variable j at time step t, where α ij (t)≥0 and d ij (t)∈{−1,0,1}. Here, i represents the population number and j the variable number to be optimized.

Moreover, seekers in the same subpopulation search for the optimal solution using their own information. To prevent the subpopulations from converging to local optima, the position of the worst seeker in each subpopulation is combined with the best one in each of the other subpopulations using the binomial crossover operator as follows:

$$ x_{k_{n}j,\mathrm{worst}} = \left\{ \begin{array}{l@{\quad }l} x_{lj,\mathrm{best}} & \mbox{if}\ \operatorname{rand}(0,1) \le0.5,\\ x_{k_{n}j,\mathrm{worst}} & \mbox{else}, \end{array} \right. $$
(7)

where \(\operatorname{rand}(0,1)\) is a uniformly distributed random real number within [0,1], \(x_{k_{n}j,\mathrm{worst}}\) denotes the jth variable of the nth worst position in the kth subpopulation, and x lj,best is the jth variable of the best position in the lth subpopulation. Here, n,k,l=1,2,…,K−1 and k≠l. Thus, the diversity of the population is increased by sharing good information among subpopulations.
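The crossover of Eq. (7) amounts to a per-variable coin flip; a sketch in Python (variable names are ours) could be:

```python
import random

def intersubpop_crossover(worst, best, rng=random):
    """Binomial crossover of Eq. (7).

    Each variable of a subpopulation's worst position is replaced by
    the corresponding variable of another subpopulation's best
    position with probability 0.5; otherwise it is kept unchanged.
    """
    return [bj if rng.random() <= 0.5 else wj
            for wj, bj in zip(worst, best)]
```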

(2) Search direction. In SOA, each seeker selects his search direction based on several empirical directions by comparing the current or historical positions of himself or his neighbors. For seeker i, the empirical directions involved are:

$$\begin{aligned} & \boldsymbol {d}_{i,\mathrm{ego}}(t) = \operatorname{sign}\bigl( \boldsymbol {p}_{i,\mathrm{best}}(t) - \boldsymbol {x}_{i}(t)\bigr), \end{aligned}$$
(8)
$$\begin{aligned} & \boldsymbol {d}_{i,\mathrm{alt}1}(t) = \operatorname{sign}\bigl(\boldsymbol {g}_{\mathrm{best}}(t) - \mathbf {x}_{i}(t)\bigr), \end{aligned}$$
(9)
$$\begin{aligned} & \boldsymbol {d}_{i,\mathrm{alt}2}(t) = \operatorname{sign}\bigl(\boldsymbol {l}_{\mathrm{best}}(t) - \boldsymbol {x}_{i}(t)\bigr), \end{aligned}$$
(10)

where d i,ego(t) is the egotistic direction, d i,alt1(t) and d i,alt2(t) are altruistic directions, and p i,best(t), g best(t), and l best(t) represent the personal historical best position and the neighbors’ historical and current best positions, respectively; the function \(\operatorname{sign}( \cdot)\) is a signum function applied to each variable of the input vector. In addition, each seeker i, as an agent, has the property of pro-activeness and exhibits goal-directed behavior [25], which means he may change his search direction in advance according to his past behavior. This behavior is modeled as an empirical direction called the pro-activeness direction, as given in Eq. (11):

$$ \boldsymbol {d}_{i,\mathrm{pro}}(t) = \operatorname{sign}\bigl( \boldsymbol {x}_{i}(t_{1}) - \boldsymbol {x}_{i}(t_{2}) \bigr), $$
(11)

where t 1,t 2∈{t,t−1,t−2} and x i (t 1) is better than x i (t 2).

Every variable j of d i (t) is selected by applying the following proportional selection rule, as given in Eq. (12):

$$ d_{ij} = \left\{ \begin{array}{l@{\quad }l} 0, & \mbox{if}\ r_{j} < p_{j}^{(0)},\\ + 1, & \mbox{if}\ p_{j}^{(0)} \le r_{j} \le p_{j}^{(0)} + p_{j}^{( + 1)},\\ - 1, & \mbox{if}\ p_{j}^{(0)} + p_{j}^{( + 1)} < r_{j} \le1, \end{array} \right. $$
(12)

where r j is a uniform random number in [0,1] and \(p_{j}^{(m)}\) (m∈{0,+1,−1}) is the proportion of occurrences of m in the set {d ij,ego,d ij,alt1,d ij,alt2,d ij,pro} on variable j over all four empirical directions, i.e., \(p_{j}^{(m)} = (\mathrm{the\ number\ of}\ m) / 4\).
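For one variable, the selection rule of Eq. (12) might be implemented as follows (an illustrative sketch; the function name is ours):

```python
import random

def select_direction(d_ego, d_alt1, d_alt2, d_pro, rng=random):
    """Proportional selection rule of Eq. (12) for one variable j.

    The four arguments are the empirical directions, each in
    {-1, 0, +1}; each outcome is chosen with probability equal to
    its share of the four votes.
    """
    votes = [d_ego, d_alt1, d_alt2, d_pro]
    p0 = votes.count(0) / 4.0       # p_j^(0)
    p_plus = votes.count(1) / 4.0   # p_j^(+1)
    r = rng.random()                # r_j, uniform on [0, 1)
    if r < p0:
        return 0
    if r <= p0 + p_plus:
        return 1
    return -1
```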

(3) Step length. In SOA, a fuzzy system is adopted to represent the understanding and linguistic behavioral pattern of the human searching tendency. The fitness values of all the seekers are sorted in descending order and turned into sequence numbers from 1 to S as the inputs of fuzzy reasoning, where S is the population size. This design makes the fuzzy system applicable to a wide range of optimization problems. The membership degree used in the step-length computation is given by

$$ \mu_{i} = \mu_{\max} - \frac{S - I_{i}}{S - 1}( \mu_{\max} - \mu_{\min} ), $$
(13)

where I i is the sequence number of x i (t) after sorting the fitness values, and μ max is the maximum membership degree value, which is equal to or slightly less than 1. The Bell membership function \(\mu(x) = e^{ - x^{2}/2\delta^{2}}\) is used in the action part of the fuzzy reasoning, and the parameter δ in the function is determined as follows:

$$ \boldsymbol{\delta} = \omega\times \mathrm{abs}( \boldsymbol {x}_{\mathrm{best}} - \boldsymbol {x}_{\mathrm{rand}}), $$
(14)

where abs(⋅) returns the element-wise absolute value of the input vector, and ω is a weight that decreases with the time step so as to reduce the step length and gradually improve the search precision. Here, x best and x rand are the best seeker and a randomly selected seeker, respectively, from the subpopulation to which the ith seeker belongs, with x rand different from x best. The step length α ij (t) for every variable j is given as

$$ \alpha_{ij}(t) = \delta_{j}\sqrt{ - \ln\bigl( \operatorname{rand}(\mu_{i},1)\bigr)}, $$
(15)

where \(\operatorname{rand}(\mu_{i},1)\) returns a uniformly random real number within [μ i ,1].
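Combining Eqs. (13) and (15), the step length for one variable can be sketched as follows (the argument names and the interpretation of the ranking are our reading of the text):

```python
import math
import random

def step_length(rank, S, delta_j, mu_max=0.99, mu_min=0.0111, rng=random):
    """Step length alpha_ij of Eqs. (13) and (15).

    rank: sequence number I_i of x_i(t) after sorting the fitness values;
    delta_j: Bell-function parameter for variable j from Eq. (14).
    """
    mu_i = mu_max - (S - rank) / (S - 1) * (mu_max - mu_min)  # Eq. (13)
    u = rng.uniform(mu_i, 1.0)                                # rand(mu_i, 1)
    return delta_j * math.sqrt(-math.log(u))                  # Eq. (15)
```

Since u ≥ μ i , the step length shrinks as the membership degree μ i approaches μ max.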

3.2 Opposition-based learning

Opposition-based learning (OBL) has been used by EAs to improve the convergence [22, 2628]. The main idea of OBL is to consider an estimate as well as its corresponding opposite estimate to achieve a better approximation of the current candidate solutions.

We assume that X=(x 1,x 2,…,x n ) is a point in an n-dimensional space, where \(x_{j} \in \mathbb{R}\) and x j ∈[a j ,b j ] ∀j∈{1,2,…,n}; then the opposite point \(\boldsymbol {X}' = (x'_{1},x'_{2}, \ldots,x'_{n})\) is defined element-wise as \(x'_{j} = a_{j} + b_{j} - x_{j}\).

Now, by employing the opposite point denotation, the OBL can be used in optimization as:

  • Generate a point X=(x 1,x 2,…,x n ) and its opposite point \(\boldsymbol {X}' = (x'_{1},x'_{2}, \ldots,x'_{n})\) in an n-dimensional search space.

  • Evaluate the fitness of both points f(X) and f(X′).

  • If f(X′)≤f(X) (for a minimization problem; the inequality is reversed for maximization), then replace X with X′; otherwise, continue with X.

Thus, we see that the point and its opposite point are evaluated simultaneously in order to obtain the fitter one.
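These three steps translate directly into code; a minimal sketch (function names are illustrative) is:

```python
def opposite_point(x, a, b):
    """Opposite point with elements x'_j = a_j + b_j - x_j."""
    return [aj + bj - xj for xj, aj, bj in zip(x, a, b)]

def obl_select(x, a, b, f):
    """Evaluate a point and its opposite, keep the fitter one
    (minimization, as in the objective of Eq. (3))."""
    x_opp = opposite_point(x, a, b)
    return x_opp if f(x_opp) <= f(x) else x
```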

3.3 OSOA approach

As an extension of the original SOA, an oppositional seeker optimization algorithm (OSOA) is proposed in this paper, in which the OBL idea is embedded in SOA to improve its performance. On the one hand, the OBL approach is utilized to increase the diversity of the initial population. On the other hand, new populations are also generated with OBL during the evolutionary process to accelerate convergence. By fusing the operators of SOA with the OBL technique, the exploration and exploitation capabilities can be enhanced and well balanced, resulting in a more effective algorithm for global optimization. The main procedure of OSOA is given in Table 1 and described as follows.

Table 1 The main procedure of OSOA

(1) Population initialization. In OSOA, the initial population is generated via OBL. A population of 2S positions is formed from S uniformly random positions together with their opposite solutions, as shown in

$$\begin{aligned} & x_{ij} \leftarrow a_{j} + (b_{j} - a_{j}) \times \operatorname{rand}(0,1), \end{aligned}$$
(16)
$$\begin{aligned} &x_{(S + i)j} \leftarrow a_{j} + b_{j} - x_{ij}, \end{aligned}$$
(17)

where a j and b j are the lower and upper bounds of jth variable, respectively; x ij and x (S+i)j are the positions of seeker i and S+i on the jth variable, out of which S best positions are selected.

(2) Generation jumping. Unlike OBL-based initialization, generation jumping during the evolutionary process calculates the opposite population dynamically. Instead of using the variables’ predefined interval boundaries [a j ,b j ], generation jumping calculates the opposite of each variable based on the minimum and maximum values of that variable in the current population:

$$ x_{(S + i)j} \leftarrow x_{\min}^{P}(j) + x_{\max}^{P}(j) - x_{ij}, $$
(18)

where \(x_{\min}^{P}(j)\) and \(x_{\max}^{P}(j)\) are the minimum and maximum values of the jth variable in current population P, respectively.
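Generation jumping per Eq. (18) can be sketched as follows (an illustrative pure-Python version):

```python
def generation_jump(population):
    """Opposite population of Eq. (18).

    Opposites are taken with respect to the current per-variable
    minimum and maximum, not the static initialization bounds.
    """
    n = len(population[0])
    lo = [min(p[j] for p in population) for j in range(n)]
    hi = [max(p[j] for p in population) for j in range(n)]
    return [[lo[j] + hi[j] - p[j] for j in range(n)] for p in population]
```

As in the initialization step, the S fittest of the combined 2S positions would then be retained.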

4 Simulations and comparisons

To verify the effectiveness of the proposed scheme, two typical chaotic systems are employed as testbeds in this section. The simulations were performed using MATLAB 7.1 on a Core 2 Duo processor at 2.26 GHz with 2 GB RAM. The parameters used in SOA and OSOA are set as follows: the population size S is 40, and the maximum and minimum membership degrees μ max and μ min are set to 0.99 and 0.0111, respectively.

4.1 On the Lorenz system

As a typical chaotic system, the Lorenz system is employed as the first example; it is described as:

$$ \left\{ \begin{array}{l} \dot{x} = \delta(y - x),\\ \dot{y} = \gamma x - xz - y,\\ \dot{z} = xy - bz, \end{array} \right. $$
(19)

where δ=10, γ=28, and b=8/3 are the original settings of the system parameters.
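For reference, one integration step of system (19) in Python looks as follows. The paper does not state its integration scheme, so forward Euler is used here only as a sketch; a higher-order method such as RK4 would be preferable in practice:

```python
def lorenz_step(state, params, dt):
    """Advance the Lorenz system (19) by one forward-Euler step of size dt.

    state: (x, y, z); params: (delta, gamma, b).
    """
    x, y, z = state
    delta, gamma, b = params
    dx = delta * (y - x)
    dy = gamma * x - x * z - y
    dz = x * y - b * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)
```

Sampling M such states from the original parameters θ=(10, 28, 8/3) and from a candidate \(\hat{\boldsymbol{\theta}}\) yields the trajectories compared in Eq. (3).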

The system is initialized with a state X 0 randomly selected from its evolution process. The length of the sampled data M is set to 300. Moreover, the proposed scheme is compared with the original SOA, PSO [11], and HBBO [13]. For a fair comparison, the maximum number of generations is set to 100, as used for PSO and HBBO in the literature [11, 13]. Table 2 lists the statistical results of the best, mean, and worst estimated parameters for the Lorenz system over 20 independent runs. It can be seen that the best fitness values obtained by OSOA are better than those of SOA, PSO, and HBBO, as are the mean and worst values in the table. Even the worst results of OSOA still outperform the best results of the other three algorithms. The averaged evolution of the estimated parameters and fitness values obtained by SOA and OSOA is shown in Fig. 2. The figure shows that OSOA converges to the optimal solution rapidly, with a faster convergence speed than SOA. Therefore, OSOA demonstrates better effectiveness and robustness than SOA, PSO, and HBBO for parameter estimation of the well-known Lorenz system.

Fig. 2
figure 2

Evolving process of parameter estimation for Lorenz system by SOA and OSOA: (a) searching process for δ, (b) searching process for γ, (c) searching process for b, (d) convergence trajectories of fitness value

Table 2 Statistical results of different methods for Lorenz system

To better simulate the real world, white Gaussian noise is added to each state variable, with the running parameters kept unchanged. The averaged results of OSOA over 20 independent runs are given in Table 3. It is clear that OSOA can still achieve good results in the presence of noise; in particular, if the value of R NSR is small enough, say 1 %, an ideal outcome can be obtained with F EEF=0.9 %. In addition, if R NSR is substantially less than 25 %, an overall satisfactory outcome, with F EEF<5 %, can be obtained. However, the result is relatively poor, with F EEF=13.89 %, under noise of R NSR=50 %.

Table 3 Statistical results obtained by OSOA for Lorenz system with different noise

4.2 On the Chen system

To further evaluate the proposed scheme, Chen’s chaotic system is adopted; it is described as

$$ \left\{ \begin{array}{l} \dot{x} = \delta(y - x),\\ \dot{y} = (b - \delta)x - xz + by,\\ \dot{z} = xy - \gamma z, \end{array} \right. $$
(20)

with δ=35, γ=3 and b=28 as the original parameter setting.

The statistical results of the estimated parameters of the Chen system are summarized in Table 4, where the results of DEA given in Ref. [29] are used for comparison. To make a fair comparison, the maximum generation number is set to 45. In addition, a total of 5 runs is made for each algorithm, and the length of the sampled data M is set to 100. From Table 4 it can be seen that OSOA produces more accurate results than SOA and DEA. Figure 3 depicts the convergence profiles of the estimated parameters and fitness values of OSOA and SOA; clearly, OSOA converges to the optimal solution more rapidly than SOA. Table 5 presents the results under the influence of noise with different R NSR values. It can be observed that the results are satisfactory in the cases with R NSR smaller than 10 %; in particular, a fairly good outcome can be obtained when R NSR is as low as 1 %.

Fig. 3
figure 3

Evolving process of parameter estimation for the Chen system by SOA and OSOA: (a) searching process for δ, (b) searching process for γ, (c) searching process for b, (d) convergence trajectories of fitness value

Table 4 Statistical results of different methods for Chen system
Table 5 Statistical results of OSOA for the Chen system with different noise

It can be concluded that OSOA is significantly better and statistically more robust than the other listed algorithms in terms of search performance.

5 Conclusions

In this paper, parameter estimation of chaotic systems is formulated as a multi-dimensional optimization problem. An OSOA scheme based on the OBL method is proposed and applied to solve this optimization problem. Numerical simulations on the Lorenz and Chen chaotic systems, together with comparisons with several typical existing approaches, demonstrate the effectiveness and robustness of the proposed scheme. Future research will apply this scheme to other systems, such as high-dimensional, dynamical, and uncertain chaotic systems.