1 Introduction

Optimization problems are of increasing importance in modern science and engineering, especially global continuous optimization problems. During the past decade, these problems have become more complicated and diversified, commensurate with the unceasing progress of science and technology [41]. The major challenge in global continuous optimization is that the problem to be optimized may have many local optima, an issue that is particularly acute when the dimension is high [15]. Numerous optimization techniques have therefore been developed to handle such problems [2, 20]. Currently, the most popular approaches are meta-heuristics, such as Genetic Algorithms (GAs) [6], Evolution Strategies (ES) [42], Particle Swarm Optimization (PSO) [21], Differential Evolution (DE) [40], Ant Colony Optimization (ACO) [9] and Biogeography-based Optimization (BBO) [36].

Biogeography-based Optimization (BBO), proposed by Dan Simon in 2008, is an Evolutionary Algorithm (EA) for global optimization based on the equilibrium theory of island biogeography. Unlike other population-based algorithms, in BBO poor solutions can improve their quality by accepting new features from good ones [19]. Concretely, a candidate solution in BBO is called a habitat (or island), and a group of habitats constitutes the ecosystem. The habitat suitability index (HSI) represents the habitability of an island and is analogous to the fitness value used to evaluate solutions. The HSI is determined by a set of independent decision variables called Suitability Index Variables (SIVs), corresponding to real-world features such as temperature, disease and earthquakes. The two characteristic operators of BBO are migration and mutation. Migration, comprising the immigration and emigration processes, conditionally shares SIV information among the habitats of an ecosystem. Mutation is a probabilistic operation that randomly modifies a habitat's SIVs based on the habitat's a priori probability.
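For reference, in Simon's original model [36] the immigration rate \( \lambda \) and emigration rate \( \mu \) of a habitat are linear functions of its species count; the following restates that standard linear model (our summary of [36], not an element of this paper's contribution):

$$ \lambda_{k} = I\left( {1 - \frac{k}{n}} \right), \qquad \mu_{k} = E\,\frac{k}{n} $$

where \( k \) is the species count of the habitat, \( n \) is the maximum species count, and \( I \) and \( E \) are the maximum immigration and emigration rates. High-HSI habitats thus tend to emigrate their features (high \( \mu_{k} \)), while low-HSI habitats tend to accept features from others (high \( \lambda_{k} \)).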

Like other EAs, the BBO algorithm also has certain weaknesses. Its probabilistic migration lets solutions share information with each other, which yields good exploitation ability [14]. However, its direct-copying-based migration and random mutation operators leave BBO with insufficient exploration ability and do little to improve population diversity [15, 41]. Although its convergence is relatively fast at the beginning of the evolutionary process, it easily falls into local optima. To mitigate these weaknesses, an improved BBO variant (called imBBO) is proposed in this paper.

The contributions of this paper are summarized as follows:

  • An imBBO algorithm is proposed to mitigate some of the weaknesses of BBO on global optimization problems; it is composed of a hybrid migration operation and a scalable direction mutation operation.

  • A hybrid migration operation is designed by combining the DE theory with a scalable method, which helps imBBO improve population diversity and enhance its exploration ability.

  • We conduct a performance comparison among our imBBO, the original BBO and three BBO variants. Empirical results demonstrate that our algorithm outperforms the competitors on 23 out of 30 CEC'2017 benchmarks. Moreover, imBBO exhibits a faster convergence speed.

The rest of this paper is organized as follows. Related works are discussed in Sect. 2. Section 3 defines the problem formulation. Section 4 proposes our imBBO algorithm. We describe our experimental setup in Sect. 5 and present the results in Sect. 6. Section 7 discusses our experimental results by answering two research questions. Section 8 concludes the paper.

2 Related Work

For global optimization problems, many studies have shown that meta-heuristic algorithms [30], which include nature-inspired algorithms, physics-based algorithms and swarm-based methods, have become popular for solving such problems and various engineering applications [28]. Typical algorithms include, in the first category, Genetic Algorithms (GAs) [6], Genetic Programming (GP) [29] and Biogeography-based Optimization (BBO) [36]; in the second category, Simulated Annealing (SA) [5], the Gravitational Search Algorithm (GSA) [33] and the Artificial Chemical Reaction Optimization Algorithm (ACROA) [1]; and in the third category, Particle Swarm Optimization (PSO) [21], Ant Colony Optimization (ACO) [9] and the Artificial Bee Colony Algorithm (ABC) [18].

This paper focuses on Biogeography-based Optimization [36], which shows excellent performance on various unconstrained and constrained benchmarks [7]. During the last decade, many BBO variants have been proposed in the literature [8, 24, 37]. On the one hand, Mehmet et al. proposed an oppositional Biogeography-based Optimization [11] in 2009, which combines opposition-based learning with the migration rates of the original BBO. To enhance the mutation operation, a real-coded BBO algorithm [15] was proposed in 2010. Several further BBO variants likewise modify the migration operation, e.g., [13, 17, 19, 20, 22, 41]. Moreover, Haiping Ma analyzed the equilibrium of migration models [23]. Haiping Ma and Dan Simon discussed migration models using Markov theory [25] and proposed the blended BBO [26] for constrained optimization. Considering the random initial generation of the population, Simon et al. focused on re-initialization and local search in Linearized BBO [38].

On the other hand, Wenyin et al. proposed DE/BBO [14], which combines the DE algorithm with BBO to improve the search capability. Moreover, a hybrid BBO [27] was proposed in 2014, which combines various EAs with BBO in different ways; that is, two types of hybridization, named iteration-level hybridization and algorithm-level hybridization, are used.

Naturally, BBO has been widely applied to real-world and engineering problems, such as sensor selection [36], power system optimization [32], economic load dispatch [4] and antenna design [12, 39].

3 Problem Definition

In many areas of engineering and scientific application, optimization problems must be solved to achieve approximately optimal solutions. However, different problems may have special constraints and conflicting objectives, so an effective approach is to design a global search algorithm that finds optimal or near-optimal solutions.

Without loss of generality, an unconstrained continuous global minimization problem can be formalized as a pair \( (S,f) \), where \( S \subseteq R^{D} \) is a bounded set on \( R^{D} \) and \( f:S \to R \) is a D-dimensional real-valued function. The goal is to find a point \( X^{*} \in S \) [15], where \( X^{*} \) is a D-dimensional \( \left( {D \in \{ 1,2,3, \cdots \} } \right) \) vector, such that \( f(X^{*} ) \) is the global minimum on \( S \) [1]. More specifically, it is required to find an \( X^{*} \in S \) satisfying formula 1:

$$ \forall X \in S:f(X^{*} ) \le f(X) $$
(1)

Note that the function \( f \) need not be continuous, but it must be bounded. Moreover, in real-world constrained optimization problems different variables may have different bounds; in this paper we only consider unconstrained continuous function optimization.
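As an illustration only, the pair \( (S,f) \) can be represented programmatically. The following minimal sketch assumes a box-bounded \( S \) and uses the sphere function, whose global minimum is known, as a stand-in objective:

```python
import numpy as np

# Minimal sketch of the pair (S, f): S is the box [lb, ub]^D, f maps S to R.
D = 10                            # problem dimension (illustrative choice)
lb, ub = -100.0, 100.0            # bounds of S (assumed box constraints)

def f(x):
    """Stand-in objective: the sphere function, whose minimum is at x* = 0."""
    return float(np.sum(x ** 2))

x_star = np.zeros(D)              # global minimizer X* of the sphere function
x = np.random.uniform(lb, ub, D)  # an arbitrary feasible point X in S
assert f(x_star) <= f(x)          # formula 1: f(X*) <= f(X) for all X in S
```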

As mentioned above, the major challenge in global continuous optimization is that the problems to be solved may easily trap EAs in local optima. This issue is particularly challenging when the problem dimension is high. Hence, one effective strategy is for EAs to adopt search and modification methods tailored to different optimization problems.

4 Our Approach

In this section, we first introduce the implementation of our algorithm in Algorithm 1. Then, the hybrid migration operation is discussed in detail.

[Algorithm 1: the framework of the imBBO algorithm]

4.1 The Structure of imBBO

Our algorithm's structure is shown in Algorithm 1; it is mainly composed of a hybrid migration operation, introduced in Sect. 4.2, and a scalable direction mutation operation derived from our previous work [34].

Concretely, the hybrid migration operation combines the DE theory with a scalable method to improve population diversity, exploit population information and enhance the algorithm's exploration ability. Moreover, given the excellent performance of iCPBBOCO [34], we adopt the same mutation operation in our imBBO algorithm.

To make a fair comparison, all parameter values of our algorithm are set by referring to [14, 34, 36], as shown in Table 1. The modified migration and mutation operations appear at Lines 11 and 12 of Algorithm 1; a code sketch of the overall loop follows.
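Since the pseudocode figure does not reproduce here, the following is a minimal sketch of the overall loop as we describe it in Algorithm 1. The function names hybrid_migration and scalable_direction_mutation and their stub bodies are placeholders of ours, not the paper's identifiers, and the population size NP is an assumed value:

```python
import numpy as np

def hybrid_migration(pop, fit):
    """Placeholder; a concrete sketch of this operation is given in Sect. 4.2."""
    return pop

def scalable_direction_mutation(pop):
    """Placeholder for the mutation operation derived from prior work [34]."""
    return pop

def imbbo(f, lb, ub, D, NP=50, max_fes=10_000 * 10, f_star=0.0, tol=1e-8):
    """Sketch of the imBBO loop: initialize, evaluate, migrate, mutate, repeat."""
    pop = np.random.uniform(lb, ub, (NP, D))           # uniform random initialization
    fit = np.apply_along_axis(f, 1, pop)
    fes = NP
    while fes < max_fes and fit.min() - f_star > tol:  # stop at maxFEs or error < 1e-8
        pop = hybrid_migration(pop, fit)               # Line 11: hybrid migration (Sect. 4.2)
        pop = scalable_direction_mutation(pop)         # Line 12: scalable direction mutation
        pop = np.clip(pop, lb, ub)                     # keep candidate solutions inside S
        fit = np.apply_along_axis(f, 1, pop)
        fes += NP
    best = int(fit.argmin())
    return pop[best], fit[best]
```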

Table 1. Overview of the parameter settings of imBBO and the BBO variants.

4.2 Hybrid Migration Operation

A hybrid migration operation is deployed in our imBBO algorithm; it combines the DE theory [14, 31] with a component of the migration operation from our previous iCPBBOCO algorithm [34]. The core idea of this method is that good solutions are less likely to be destroyed, while poor solutions can accept many new features from good solutions. The implementation is given in formula 2, which dispatches between the DE-based rule in formula 3 and a scalable method in formula 4. To enhance diversity, we define a parameter \( \xi \) that guides the choice of migration operation.

$$ Migration = \left\{ {\begin{array}{*{20}l} {\text{Formula 3}} \hfill & {rand \le \xi } \hfill \\ {\text{Formula 4}} \hfill & {rand > \xi } \hfill \\ \end{array} } \right. $$
(2)

For the DE part, the main mathematical principle is illustrated in formula 3. The DE algorithm has recently been applied successfully to global optimization problems [14]. It is good at exploring the search space and locating the region of the global minimum, using distance and direction information from the current population together with the characteristics of the problem to guide the further search. We therefore adopt it in our migration operation.

$$ \begin{aligned} & U_{i} (j) = \left\{ {\begin{array}{*{20}l} {U_{i} (j) + f_{1} *d_{1} + f_{2} *d_{2} } \hfill & {rand \le \kappa } \hfill \\ {U_{i} (j) - f_{1} *d_{3} - f_{2} *d_{2} } \hfill & {rand > \kappa } \hfill \\ \end{array} } \right. \\ & d_{1} = U_{1} (j) - U_{{r_{1} }} (j),d_{2} = U_{{r_{2} }} (j) - U_{{r_{3} }} (j) \\ & d_{3} = U_{NP} (j) - U_{{r_{1} }} (j) \\ \end{aligned} $$
(3)

To inherit its good performance, we use the standard parameter settings of the DE theory in our algorithm. In formula 3, \( U_{i} (j) \) denotes the \( j\text{-th} \) decision variable of the \( i\text{-th} \) solution. Following the principle of the DE theory [14, 16, 31], we randomly select three additional individuals \( r_{1} \), \( r_{2} \) and \( r_{3} \), where \( r_{1} \ne r_{2} \ne r_{3} \). The variables \( d_{1} \), \( d_{2} \) and \( d_{3} \) are the differences between the corresponding individuals. The two scale factors \( f_{1} \) and \( f_{2} \) are related to the lower/upper bounds of the per-individual immigration probability and indicate how much the differential variation [10] influences the target. Similarly, we define the guide parameter \( \kappa \).
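A sketch of formula 3 in code follows. Our reading assumes the population is sorted by fitness so that \( U_{1} \) is the best solution and \( U_{NP} \) the worst (the usual BBO convention); the default values of \( f_{1} \), \( f_{2} \) and \( \kappa \), and the requirement that the indices also differ from \( i \), are assumptions of ours:

```python
import numpy as np

def de_migration(pop, i, j, f1=0.5, f2=0.5, kappa=0.5, rng=np.random):
    """Sketch of formula 3: DE-style update of variable j of solution i.

    Assumes pop is sorted by fitness: pop[0] is U_1 (best), pop[-1] is U_NP (worst).
    """
    NP = len(pop)
    # Randomly pick three mutually distinct individuals r1, r2, r3 (also != i).
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
    d1 = pop[0, j] - pop[r1, j]    # difference toward the best solution
    d2 = pop[r2, j] - pop[r3, j]   # random differential variation
    d3 = pop[-1, j] - pop[r1, j]   # difference involving the worst solution
    if rng.random() <= kappa:
        return pop[i, j] + f1 * d1 + f2 * d2
    return pop[i, j] - f1 * d3 - f2 * d2
```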

$$ U_{i} (j) = \left\{ {\begin{array}{*{20}l} {\partial *U_{i} (j) + (1 - \partial )*U_{j} (j)} \hfill & {rand \le \varphi } \hfill \\ {(1 - \partial )*U_{i} (j) + \partial *U_{j} (j)} \hfill & {rand > \varphi } \hfill \\ \end{array} } \right. $$
(4)

Formula 4 describes the second migration method. The new offspring solution comes from a combination of a source solution \( U_{i} \) and a target solution \( U_{j} \). We define a scalable factor \( \partial \) that decides the migration size of individuals. To maintain stability, the scalable coefficient \( \partial \) is exchanged between the two cases; the purpose is to apply asymmetric migration, further enhancing potential population diversity and exploiting population information.
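The sketch below implements formula 4 and the \( \xi \)-based dispatch of formula 2, reusing the de_migration sketch above. Since the paper indexes the target solution with \( j \), which also names the decision variable, we use a separate index k for the target habitat; that choice, how the target is selected, and the default \( \partial \), \( \varphi \) and \( \xi \) values are interpretations of ours:

```python
import numpy as np

def scalable_migration(pop, i, k, j, alpha=0.5, phi=0.5, rng=np.random):
    """Sketch of formula 4: blend variable j of source U_i with target U_k,
    exchanging the scalable coefficient alpha between the two cases."""
    if rng.random() <= phi:
        return alpha * pop[i, j] + (1 - alpha) * pop[k, j]
    return (1 - alpha) * pop[i, j] + alpha * pop[k, j]

def hybrid_migration_step(pop, i, k, j, xi=0.5, rng=np.random):
    """Sketch of formula 2: dispatch between formula 3 and formula 4 by xi."""
    if rng.random() <= xi:
        return de_migration(pop, i, j, rng=rng)          # formula 3
    return scalable_migration(pop, i, k, j, rng=rng)     # formula 4
```

Exchanging \( \partial \) between the two branches is what makes the blend asymmetric: with probability \( \varphi \) the offspring stays close to the source, otherwise it moves toward the target.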

5 Experimental Setup

In this section, we describe the settings of the experiments conducted to evaluate our algorithm's performance. Concretely, we detail the benchmarks, the performance criteria and the algorithm settings.

5.1 Benchmark Functions

To evaluate the performance of the proposed algorithm, we employ the 30 benchmarks of CEC'2017 [3], the latest benchmark set, in our experiments. These benchmarks are divided into four categories: unimodal functions (F01-F03), simple multimodal functions (F04-F10), hybrid functions (F11-F20) and composition functions (F21-F30). The more complex the benchmarks used in the evaluation, the more clearly performance differences between competitors are exposed.

The CEC'2017 benchmarks [3] target real-parameter single-objective optimization without making use of the exact equations of the test functions. Some benchmarks introduce novel features, such as new basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkage, rotated trap problems, and so on.

5.2 Performance Metrics

We evaluate the performance of our algorithm in terms of two aspects [14, 34], described in detail as follows.

  • Error: The error value of a solution \( X \) is defined as \( f(X) - f(X^{*} ) \), where \( X^{*} \) is the known global optimum of the objective. The minimum average error is recorded when the maximum number of fitness function evaluations \( \left( {maxFEs} \right) \) is reached in 30 independent runs. Moreover, we also record the median values.

  • Convergence graphs: We present the convergence speed of our algorithm compared to the competitors. To make differences easier to observe, we transform all error values with the logarithm (log) function. Both criteria can be computed as in the sketch below.
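For concreteness, the following sketch computes both criteria; f_star denotes the known optimum of a benchmark, and the 1e-8 floor (our choice, matching the termination threshold in Sect. 5.3) keeps the logarithm defined:

```python
import numpy as np

def error_stats(best_values, f_star):
    """Error criterion: f(X) - f(X*) at maxFEs, aggregated over independent runs."""
    errors = np.asarray(best_values) - f_star
    return float(np.mean(errors)), float(np.median(errors))

def log_error_curve(best_so_far, f_star, floor=1e-8):
    """Convergence criterion: log10 of the best error at each recorded point."""
    err = np.maximum(np.asarray(best_so_far) - f_star, floor)
    return np.log10(err)
```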

5.3 Algorithm Settings

Table 1 lists the values of the key parameters of imBBO. To enable a fair comparison with the competitors, we use the same settings as those reported in [14, 15, 19, 20, 34, 36]. All algorithms are implemented in Matlab. We generate the initial population by uniform random initialization within the search space. According to [3], we set the problem dimension \( D = 100 \), and the search range is \( [ - 100,100] \) in each dimension. Moreover, we set \( maxFEs \) to 10000 * 10. All algorithms terminate upon reaching \( maxFEs \) or when the error value is smaller than \( 10^{ - 8} \).
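The uniform random initialization within the search space amounts to one line, sketched here (NP is an assumed population size; the actual value is given in Table 1):

```python
import numpy as np

NP, D = 50, 100                    # assumed population size; dimension D = 100
lb, ub = -100.0, 100.0             # CEC'2017 search range in each dimension
pop = lb + np.random.rand(NP, D) * (ub - lb)   # uniform init within [lb, ub]^D
```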

The competitors in our experiments include the original BBO [36] and three BBO variants: MOBBO [20], PBBO [19] and RCBBO [15].

5.4 Measurement Settings

All measurements of each algorithm are performed on a Windows 7 (64-bit) machine with an Intel Core i5-4690K CPU at 3.5 GHz and 16 GB RAM. Each algorithm variant is compute-bound and not memory-intensive.

To reduce measurement fluctuations caused by randomness (e.g., the randomness of the migration and mutation operations), we independently execute each algorithm 30 times and take both the median and mean values of the measurements for analysis.

6 Results

In this section, we report the experimental results in detail, aiming to answer the following two research questions.

  • RQ1: What is the performance of our imBBO, compared to the BBO and three BBO variants? (Section 6.1)

  • RQ2: How fast is the convergence speed of our imBBO? (Section 6.2)

6.1 Performance Results

Table 2 records the results of the comparison between our imBBO algorithm and the competitors on the 30 CEC'2017 benchmarks upon reaching \( maxFEs \). The columns BBO, imBBO, MOBBO, PBBO and RCBBO list the measured results of each algorithm; we report the median and mean values over 30 executions of each algorithm. Each row records the measured details for one benchmark. The best median and mean values for each benchmark are highlighted.

Table 2. Experimental results of the imBBO algorithm and four competitors (BBO, MOBBO, PBBO and RCBBO) on the CEC'2017 continuous optimization benchmarks. Numbers in gray indicate a better case than the others.

Our experimental results reveal that imBBO outperforms the competitors on 23 out of 30 CEC'2017 benchmarks, the exceptions being F1, F3, F4, F7, F12, F17 and F24. Analyzing the results, our algorithm achieves better performance on the hybrid and composition functions than on the unimodal and simple multimodal functions. Hence, we conjecture that the components of imBBO are effective for complex search.

6.2 Convergence Speed

It is interesting to examine the convergence speed of our approach compared to the others. Convergence is an important metric that indicates whether or not an algorithm has reached a steady state.

Our experimental results demonstrate that the convergence speed of imBBO is much faster than that of the other competitors; a representative subset of the curves is shown in Fig. 1 (blue line).

Fig. 1. The convergence curves of the imBBO algorithm and four competitors (BBO, MOBBO, PBBO and RCBBO) on the F02, F10, F15, F18, F19, F21, F28 and F30 benchmarks. The X-axis shows the number of fitness function evaluations (NFFEs); the Y-axis shows the log-error at each iteration. (Color figure online)

7 Discussion

In this section, we answer the research questions based on the above experimental results and then analyze the results further.

7.1 Research Questions

Regarding RQ1, we compare the performance of our imBBO with the original BBO and three BBO variants. The statistical results show that our algorithm works better than the competitors on 23 out of 30 CEC'2017 benchmarks.

Regarding RQ2, Fig. 1 presents the convergence speed of our imBBO and the competitors on a subset of the functions as the evolution reaches \( maxFEs \). The experimental results indicate that our algorithm quickly converges to a relatively stable state (the blue lines in Fig. 1).

7.2 Results Analysis

Although our imBBO algorithm outperforms the other competitors, the final results still differ from the standard optimal values of CEC'2017. We analyze the reasons as follows: (1) The CEC'2017 functions are the latest test benchmarks and introduce much more complex functions, especially the hybrid and composition functions; these conditions can influence algorithm performance. (2) Our goal is to verify that imBBO mitigates some of the issues of exploration ability and diversity; we focus on performance relative to other popular BBO variants, especially on the latest CEC'2017 benchmark set. (3) We maintain the view that different exploration methods should be built into the algorithm for different objectives; that is, the characteristics of the objectives should be analyzed at the beginning of the evolutionary process. Since this is our first test on CEC'2017, imBBO includes no objective-specific components; more components should be developed in the future.

8 Conclusion

In this paper, an improved BBO variant called imBBO is proposed to solve global optimization problems. Concretely, a hybrid migration operation is designed to improve the algorithm's exploration ability and exploit population information; it conditionally combines the DE theory with the scalable method of formula 4 to increase population diversity. Moreover, the mutation operation in imBBO derives from our previous work, where its performance was demonstrated.

To evaluate the performance of imBBO, we compare our algorithm with the original BBO and three BBO variants on the 30 CEC'2017 benchmarks [3], which have different characteristics. Empirical results demonstrate that our algorithm outperforms the competitors on 23 out of 30 benchmarks.

In future work, the influence of population size, the tuning of other parameters and the problem dimension will be further studied. Additionally, this research focuses only on unconstrained global numerical optimization problems; further work will extend imBBO to constrained, real-world optimization problems, such as virtual machine consolidation [35, 43].