Abstract
A number of benchmark generators have been proposed for dynamic single-objective optimization problems. The moving peaks benchmark and the GDBG benchmark are widely used to test the performance of an evolutionary algorithm. The two benchmarks construct a fitness landscape with a number of peaks that can change in height, width, and location. They are simple and easy to understand. However, they suffer from two major issues: (1) the time complexity of evaluating a solution is high and (2) peaks may become invisible when changes occur. To address these two issues, this paper proposes an efficient generator with enriched features. The generator applies the k-d tree to partition the search space and sets a simple unimodal function in each sub-space. The properties of the proposed benchmark are discussed and verified with a set of evolutionary algorithms.
1 Introduction
In recent years, there has been a growing interest in developing evolutionary algorithms for dynamic environments. To comprehensively evaluate the performance of an evolutionary algorithm (EA), an important task is to develop a good benchmark generator. Over the years, a number of benchmark generators for dynamic optimization problems (DOPs) have been proposed. Generally speaking, these benchmark generators can be classified into the following three classes in terms of the way they construct problems. Note that this paper focuses only on dynamic continuous unconstrained single-objective optimization problems.
The first class of generators switch the environment between several stationary problems or several states of a problem. Early generators normally belong to this class. A generator based on two static landscapes A and B was proposed in [7]. Changes can occur in three ways: (1) linear translation of peaks in landscape A; (2) only the global optimum randomly moves in landscape A; (3) switching landscapes between A and B. In [15], the environment oscillates among a set of fixed landscapes.
Like the first class of generators, the second class of generators also consists of a number of basic functions. However, the environment normally takes the form \(f(\mathbf {x})= \max \{g_i(\mathbf {x})\}, i=1,\ldots , N\) to construct the fitness landscape, and it does not switch among the basic functions. Environmental changes are caused by changes of every \(g(\mathbf {x})\). Many generators fall into this class. The moving peaks benchmark (MPB) [15] is one of the most widely used generators. The MPB consists of a number of peaks. Each peak is constructed by a simple unimodal function that can change in height, width, and location. The DF1 generator [15] and the rotation dynamic benchmark generator (RDBG) [10] construct the fitness landscape in a similar way. The DF1 generator uses the logistic function to change the height, width, and location of a peak, while the RDBG rotates the fitness landscape to generate changes. A new generator based on the framework of the DF1 was proposed in [20], where the basic function used to construct a peak in the DF1 was replaced by two traditional functions. A fitness landscape consisting of a number of Gaussian peaks was introduced in [8], where a peak changes in its center, amplitude, and width. A challenging dynamic landscape, called the composition dynamic benchmark generator (CDBG), was proposed in [10], where a set of composition functions are shifted in the fitness landscape. The CDBG introduces several change types, e.g., small step changes, large step changes, random changes, chaotic changes, recurrent changes, and noisy environments.
The third class of generators divides the search space into subspaces and sets a simple unimodal function in each subspace. A disjoint generator was proposed in [21], where each dimension of the search space is evenly divided into w segments. The total number of subspaces is \(w^D\) (D denotes the number of dimensions). In each subspace, a peak function is defined whose global optimum is at the center of the subspace.
The first and the third classes of generators lack the ability to manipulate a single peak. In the literature of EAs for DOPs, the second class of generators is mostly used for experimental studies. This class of generators is flexible and makes it easy to manipulate the characteristics of a change for every peak. However, it has two disadvantages. Firstly, to evaluate a solution \(\mathbf {x}\), we need to compute the objective value of \(\mathbf {x}\) for each basic function (\(g(\mathbf {x})\)) in O(D) time and then take the maximum value as the fitness value of \(\mathbf {x}\). Therefore, the time complexity of evaluating a solution is at least O(ND). Secondly, a peak may become invisible when a change occurs, and hence the total number of peaks will be less than the predefined value.
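The O(ND) evaluation cost of the max-of-peaks construction can be sketched as follows. The cone-shaped peak \(g_i(\mathbf{x}) = h_i - w_i\,\lVert \mathbf{x}-\mathbf{X}_i\rVert\) is one common MPB form; the concrete numbers below are purely illustrative.

```python
import math

def mpb_style_eval(x, peaks):
    """Evaluate a max-of-peaks landscape in the style of the MPB
    (cone peaks: g_i(x) = h_i - w_i * ||x - X_i||).

    Every one of the N peaks must be visited, so a single evaluation
    costs O(N * D): N Euclidean distances in D dimensions plus a max.
    """
    best = -math.inf
    for h, w, center in peaks:
        d = math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, center)))
        best = max(best, h - w * d)
    return best

# two hypothetical peaks in 2-D: (height, width, center)
peaks = [(50.0, 1.0, (0.0, 0.0)), (40.0, 1.0, (5.0, 5.0))]
print(mpb_style_eval((0.0, 0.0), peaks))  # 50.0
```

Note how the loop cannot be skipped: even a point sitting on one peak must be checked against all the others, which is exactly the inefficiency the FPs avoids.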
Besides the way the fitness landscape is constructed, researchers have also been interested in developing characteristics of changes to simulate real-world applications, such as predictability (whether changes follow a regular, predictable pattern), time-linkage (whether future changes depend on the current/previous solutions found by optimizers [4, 17]), detectability (whether changes are detectable), severity (the magnitude of a change), and change factors (objective functions, the number of dimensions, constraints, the domain of the search space, and function parameters).
A good benchmark generator should have the following characteristics [16]: (1) flexibility: the generator should be configurable in different aspects, e.g., the number of peaks, change severity, and change features; (2) simplicity and efficiency: the generator should be simple to implement and analyze, and computationally efficient; (3) resemblance to real-world problems to some extent. Most real-world problems are very hard and complex, with nonlinearities and discontinuities [14].
Based on the above considerations, this paper aims to propose a novel generator, which is able to (1) address the two issues of the currently mainstream generators mentioned above and (2) provide enriched characteristics of changes. To achieve these aims, this paper uses the idea of space partitioning and chooses a peak function for every sub-space from a predefined function set. A new benchmark generator, called Free Peaks (FPs), is proposed. The k-d tree [1] is used to partition the solution space. Each peak in a sub-space can be freely manipulated regarding its height, location, shape, and basin of attraction. A set of characteristics of changes is also introduced in this paper.
The rest of this paper is organized as follows. Section 2 introduces the construction of the FPs in detail, including the partition process of the k-d tree, a set of basic peak functions, and the setup of subspaces. Section 3 gives the construction of different types of changes. Section 4 presents the results of the experimental studies. Finally, conclusions are given in Sect. 5.
2 Free Peaks
This section introduces the basic elements of the free peaks (FPs) benchmark generator. Without loss of generality, maximization problems are assumed in this paper. Before introducing the generator, we first need to prepare a set of simple shape functions.
2.1 One Peak Function
In this paper, eight simple symmetrical unimodal functions are defined as follows:
where \({d(\mathbf {x})=\sqrt{\sum _i^D{(x_i-X_i)^2}}}\) is the Euclidean distance from \(\mathbf {x}\) to the peak, which is located at a user-defined location \(\mathbf {X}^{s_v}\) with a height of \(h>0\) (\(v =1,\ldots ,8\)); r is a parameter taking a value in \([0, \sqrt{\sum _i^D{(u_i^{s_v}-l_i^{s_v})^2}}]\) (\(u_i^{s_v}=100,l_i^{s_v}=-100\)); and m and \(\eta \) determine the number of segments and the gap between two neighboring segments of \(s_8\), respectively. The default values \(\mathbf {X}^{s_v}\)=\(\mathbf {0}\), \(h=100\), \(r = 50\), \(m=3\), and \(\eta =5.5\) are used in this paper.
Figure 1 shows the shapes of the eight functions. Among them, \(s_1\) is a linear function, \(s_2\)-\(s_4\) are convex functions, \(s_5\) and \(s_6\) are concave functions, \(s_7\) is partially convex, partially concave, and partially linear, and \(s_8\) is a disconnected function. Each function has a single peak located at \(\mathbf {X}\) and is monotonic moving away from the peak.
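The exact definitions of the eight shape functions are given by the equations above. As a rough sketch only (the forms below are illustrative assumptions, not the paper's definitions), a linear, a convex, and a concave profile over the peak distance d could look like:

```python
def s1(d, h=100.0, r=50.0):
    # linear profile: drops from h at the peak (d = 0) to 0 at distance r
    return h * (1.0 - d / r)

def s_convex(d, h=100.0, r=50.0):
    # convex decreasing profile: drops quickly near the peak
    return h * (1.0 - (d / r) ** 0.5)

def s_concave(d, h=100.0, r=50.0):
    # concave decreasing profile: stays flat near the peak
    return h * (1.0 - (d / r) ** 2)
```

All three share the peak height h at d = 0 and differ only in curvature, which is the property Fig. 1 illustrates.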
2.2 Partition the Search Space
The k-d tree [1] is a binary tree where each node is a k-dimensional point. Every non-leaf node can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts. Points to the left of this hyperplane are represented by the left subtree of that node and points to the right of the hyperplane are represented by the right subtree. Every leaf node denotes a sub-space of the solution space. Fig. 2 shows a k-d tree (Fig. 2-left) for the decomposition of a 2-D solution space (Fig. 2-right) with six points.
To construct a balanced tree, the canonical method [1] is used, where a median point is selected along the cutting axis. Algorithms 1 and 2 present the space partition process and the inquiry of a sub-space, respectively. In this paper, the solution space is divided by default into N subspaces of random sizes.
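Since Algorithms 1 and 2 are not reproduced here, the sketch below is one possible reading of them: a recursive split with random cut positions (giving sub-spaces of random sizes) and an O(log N) descent to locate the sub-space of a point. The names and the exact splitting policy are assumptions.

```python
import random

class Node:
    def __init__(self, cut_dim=None, cut_val=None, left=None, right=None, box=None):
        self.cut_dim, self.cut_val = cut_dim, cut_val
        self.left, self.right = left, right
        self.box = box  # set only on leaves: (lower, upper) of the sub-space

def partition(lower, upper, n_leaves, depth=0):
    """Split [lower, upper] into n_leaves sub-spaces of random sizes,
    cycling the cutting axis with the tree depth (a sketch of Algorithm 1)."""
    if n_leaves == 1:
        return Node(box=(list(lower), list(upper)))
    c = depth % len(lower)                    # cutting dimension
    cut = random.uniform(lower[c], upper[c])  # random cut -> random sizes
    up_left = list(upper); up_left[c] = cut
    lo_right = list(lower); lo_right[c] = cut
    n_left = n_leaves // 2
    return Node(c, cut,
                partition(lower, up_left, n_left, depth + 1),
                partition(lo_right, upper, n_leaves - n_left, depth + 1))

def inquire(node, x):
    """Descend in O(log N) to the leaf whose sub-space contains x
    (a sketch of Algorithm 2)."""
    while node.box is None:
        node = node.left if x[node.cut_dim] < node.cut_val else node.right
    return node
```

For example, `partition([-100.0, -100.0], [100.0, 100.0], 8)` yields a tree with eight leaves, and `inquire` then locates the sub-space of any point in at most three comparisons.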
2.3 Setup of the Sub-space
A function f(x) constructed based on the FPs can be defined by
where each subspace \(b_k\) contains a basic function \(f^{b_k}\) associated with a shape function \(s_v (k=1,2,\ldots N, v =1,2,\ldots ,8)\). The whole search space of \(f ([\mathbf {l},\mathbf {u}])\) is divided into N subspaces: \([\mathbf {l}^{b_1},\mathbf {u}^{b_1}],\ldots , [\mathbf {l}^{b_N},\mathbf {u}^{b_N}]\), i.e., \([\mathbf {l},\mathbf {u}]\)=\(\{[\mathbf {l}^{b_k},\mathbf {u}^{b_k}],\ldots \},\) \(k = 1, 2,\ldots N\). To compute the objective of \(\mathbf {x}\) (\(f(\mathbf {x})\)), we need to find the subspace \(b_k\) where \(\mathbf {x}\) is (i.e., \(\mathbf {l}^{b_k} \le \mathbf {x} < \mathbf {u}^{b_k}\)) by Algorithm 2, then map \(\mathbf {x}\) to a solution \(\mathbf {x^{s_v}}\) in the search space of \(s_v\) associated with \(f^{b_k}\) in subspace \(b_k\) by
where the mapping is linear from a solution in the subspace \(b_k\) of \(\mathbf {f} ([\mathbf {l}^{b_k},\mathbf {u}^{b_k}])\) to a solution in the search space of \(s_v\) (\([\mathbf {l}^{s_v},\mathbf {u}^{s_v}]\)). Eventually, we set the objective \(f(\mathbf {x})\) by
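One consistent reading of this linear mapping is \(x^{s_v}_i = l^{s_v}_i + (x_i - l^{b_k}_i)\,(u^{s_v}_i - l^{s_v}_i)/(u^{b_k}_i - l^{b_k}_i)\), which can be sketched as:

```python
def map_to_shape_space(x, sub_lo, sub_up, s_lo=-100.0, s_up=100.0):
    """Linearly map a point x in the sub-space [sub_lo, sub_up] onto the
    shape function's own search space [s_lo, s_up]^D (a reading of the
    mapping equation; the bounds +/-100 follow the defaults of Sect. 2.1)."""
    return [s_lo + (xi - lo) * (s_up - s_lo) / (up - lo)
            for xi, lo, up in zip(x, sub_lo, sub_up)]

# the centre of a sub-space maps to the centre of the shape space
print(map_to_shape_space([5.0, 25.0], [0.0, 0.0], [10.0, 50.0]))  # [0.0, 0.0]
```

Because the map is affine and dimension-wise, it preserves the relative position of \(\mathbf{x}\) inside its sub-space and runs in O(D).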
2.4 Time Complexity
According to the above description of the FPs, evaluating a solution \(\mathbf {x}\) requires three steps: (1) find the sub-space (\(b_k\)) where \(\mathbf {x}\) is located; (2) map \(\mathbf {x}\) to a location (\(\mathbf {x}^{s_v}\)) in the search space of \(f^{b_k}\); (3) compute the objective of \(\mathbf {x}^{s_v}\) by one of the eight shape functions. Identifying which sub-space a solution lies in takes \(O(\log N)\) time by Algorithm 2. Both the second and the third steps run in O(D). Therefore, the total time complexity of evaluating a solution in the FPs is \(O(\log N) + 2O(D)\), i.e., \(O(\log N + D)\).
3 Constructing Dynamic Optimization Problems
This section introduces two types of changes: physical changes and non-physical changes. Physical changes are observable changes, including changes in peak location, peak shape, peak height, the size of the basin of attraction, and the number of peaks. Non-physical changes are characteristics of physical changes, including detectability, predictability, time-linkage, and noise. The physical changes are described as follows.
3.1 The Change in a Peak’s Location Within the Peak’s Basin
To change a peak’s location \((\mathbf {X}^{b_k}(t))\) within its basin \(b_k\) at time t, we change its mapping location \(\mathbf {X}^{s_v}(t)\) (see Eq. (3)) in the search space of the associated component function \(s_v\) by
where \(\mathbf {\nu }\) is a normalized vector with a random direction; \(\mathrm {N}(0,\sigma ^{s_v})\) returns a random number from the normal distribution with mean 0 and variance \(\sigma ^{s_v}\) (the shift severity, with a default value of 1); and \(\lambda \in [0,1]\) is a parameter that determines the correlation between the direction of the current movement and the previous movement. \(\lambda =1\) indicates that the direction of a peak’s movement is predictable, and \(\lambda =0\) indicates that the peak moves in a completely random direction. The ith dimension of \(\mathbf {X}^{s_v}(t)\) will be re-mapped to a valid location if it moves out of the range of the component function as follows:
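A minimal sketch of this correlated movement, under the assumption that the new direction is the normalized blend \(\lambda\) of the previous direction with a fresh random unit vector:

```python
import math
import random

def move_peak(X, prev_dir, lam=0.5, sigma=1.0):
    """Shift a peak's mapped location with movement correlated to the
    previous step (a sketch of the location-change rule: lam=1 repeats
    the old direction, lam=0 picks a fully random one)."""
    v = [random.gauss(0.0, 1.0) for _ in X]
    norm = math.sqrt(sum(c * c for c in v)) or 1.0
    v = [c / norm for c in v]                     # random unit direction
    d = [lam * p + (1.0 - lam) * c for p, c in zip(prev_dir, v)]
    norm = math.sqrt(sum(c * c for c in d)) or 1.0
    d = [c / norm for c in d]                     # blended, re-normalized
    step = abs(random.gauss(0.0, sigma))          # shift severity N(0, sigma)
    return [xi + step * di for xi, di in zip(X, d)], d
```

With `lam=1.0` the peak keeps travelling along `prev_dir`, which makes its trajectory predictable for an anticipating optimizer.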
3.2 The Change in the Size of a Peak’s Basin of Attraction
To vary the size of the basin of attraction of a peak, we just need to change the value of the cutting hyper-plane constructed with the dimension c of a division point \(\mathbf {dp}\) (point \(\mathbf {dp}\) should be a parent node of a leaf node in the kd-tree, e.g., node (2,3) in Fig. 2) as follows.
where \(\sigma _c =0.01\) is the severity; \(b_{k,c}^l\) and \(b_{k+1,c}^u\) are the lower and upper boundaries of the two neighbouring subspaces \(b_{k}\) and \(b_{k+1}\), respectively, which are generated by cutting the cth dimension of the hyper-rectangle during the generation of sub-spaces \(b_{k}\) and \(b_{k+1}\). Note that two neighbouring subspaces will change if we change the value of a cutting dimension.
3.3 The Change in a Peak’s Height
The height of a peak at time t is changed as follows:
where \(\delta _{h_i} = \mathrm {N}(0,\sigma _{h_i})\) and \(\sigma _{h_i}\), the height severity of peak \(p_i\), is set to a random value in [0, 7]; \(H_{min}\) and \(H_{max}\) are the minimum and maximum heights, which are set to 0 and 100, respectively, in this paper.
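A one-line sketch of the height update; keeping the result inside \([H_{min}, H_{max}]\) by clamping is an assumption here, the paper's exact boundary rule is given in its height-change equation.

```python
import random

def change_height(h, sigma_h, h_min=0.0, h_max=100.0):
    """Perturb a peak's height by delta ~ N(0, sigma_h) and keep it in
    [h_min, h_max] (clamping is an assumed boundary handling)."""
    return min(h_max, max(h_min, h + random.gauss(0.0, sigma_h)))
```

Because \(\sigma_{h_i}\) is drawn per peak from [0, 7], some peaks fluctuate strongly in height while others remain nearly static, which varies the difficulty of tracking the global optimum.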
3.4 The Change in the Number of Peaks
The number of peaks follows a recurrent change as follows:
where N(t) is the number of peaks at time t (N(0) is the initial number of peaks); \(\sigma _N=2\) is the change step; \(T=25\) is the time period; and \(N_{min}=1\) is the minimum number of peaks. If the number of peaks increases, \(\sigma _N\) random division points are added to the division set; otherwise, \(\sigma _N\) points are randomly removed from the division set.
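One plausible reading of such a recurrent schedule (the paper's exact recurrence is given by the equation above; the alternating-phase rule below is an assumption) is that the peak count grows by \(\sigma_N\) per change for T changes and then shrinks for the next T changes:

```python
def next_num_peaks(n, t, sigma_n=2, T=25, n_min=1):
    """A sketch of a recurrent peak-count schedule: add sigma_n peaks
    while the phase t // T is even, remove them while it is odd,
    never dropping below n_min (assumed rule, for illustration)."""
    direction = 1 if (t // T) % 2 == 0 else -1
    return max(n_min, n + direction * sigma_n)
```

In the FPs, increasing the count amounts to inserting \(\sigma_N\) random division points into the k-d tree's division set, and decreasing it removes \(\sigma_N\) points, merging their sub-spaces back.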
In addition to the predictable change in a peak’s location and the recurrent change in the number of peaks, three other non-physical features are introduced: a time-linkage change, a partial change, and noisy environments. In the time-linkage change, a peak changes only when it is found by an optimizer. In the partial change, only a subset of the peaks changes when an environmental change occurs. In the noisy environment, noise is added to a solution when it is evaluated by
where \(i=1,\ldots ,D\); \(b_k=inquire(\mathbf {x})\); \(\sigma _{noi}=0.01\) is the noise severity; and \(BR_{b_k}\) is the basin ratio of the subspace \(b_k\) where \(\mathbf {x}\) is located.
Table 2 summarizes the feature comparison between the FPs and four other popular benchmarks. From the table, the FPs provides many more features than the other four benchmarks.
4 Experimental Studies
In this section, two groups of experiments are carried out. The first group of experiments aims to investigate the performance of the FPs, and the second group aims to compare the performance of a set of existing EAs on the FPs.
4.1 Comparison of Computing Efficiency
In this subsection, an experiment is carried out to compare the performance of the FPs with two peer benchmark generators (the MPB [5] and the rotation DBG [10]) in terms of two different aspects. The first comparison concerns computational efficiency. Figure 3 shows the time cost of evaluating one million random points for the three benchmarks in different scenarios. The left graph of Fig. 3 shows the comparison on problems with different numbers of peaks and 100 dimensions, and the right graph shows the comparison on problems with different numbers of dimensions and 1,000 peaks. From the results, it can be seen that the time spent by the FPs is significantly smaller than that of the other two benchmarks. In both cases, the time spent by the FPs remains almost constant as the number of dimensions/peaks increases, in contrast to the other two benchmarks.
The second comparison concerns the number of invisible peaks. Table 3 shows the average number of invisible peaks for the three benchmarks over 1,000 changes. From the results, the issue is more serious for the rotation DBG than for the MPB in all test cases. Moreover, the number of invisible peaks increases with the number of peaks for the MPB and the rotation DBG. This is because, in the two peer benchmarks, a peak can be hidden by a higher peak with a broader basin of attraction. This issue does not exist in the FPs, as each peak occupies a different subspace.
4.2 The Performance of Existing Algorithms
In this subsection, 11 peer algorithms are selected: mQSO [3], SAMO [2], SPSO [18], AMSO [12], CPSO [22], CPSOR [11], FTMPSO [24], DynDE [13], DynPopDE [19], mNAFSA [23], and AMP/PSO [9]. For the parameters of all the peer algorithms, the default values suggested in their original proposals are used. Note that these parameter settings may not be optimal. The stopping criterion is 100 changes. All the results are averaged over 30 independent runs of an algorithm on each problem. The offline error (\(E_{O}\)) [6] and the best-before-change error (\(E_{BBC}\)) are used. The offline error used in this paper is the average of the best error found every two objective evaluations, and the best-before-change error is the average of the best error achieved at the fitness evaluation just before a change occurs.
A two-tailed t-test with 58 degrees of freedom at a 0.05 level of significance was conducted for each pair of algorithms on \(E_O\) and \(E_{BBC}\). The t-test results are given with the letters “w”, “l”, or “t”, which denote that the performance of an algorithm is significantly better than, significantly worse than, or statistically equivalent to that of its peer algorithms, respectively.
Tables 4, 5 and 6 show the comparison between the chosen algorithms on the FPs with different numbers of peaks, different changing ratios, and the time-linkage feature, respectively. From the results, it can be seen that different features have very different impacts on the performance of a particular algorithm. For example, the partial change feature causes difficulties for algorithms (e.g., CPSO) that rely on the detection of changes. In this case, changes are hard to detect, as only a part of the fitness landscape is allowed to change. Detection by monitoring the fitness changes of a set of solutions will fail if these solutions are in unchanged areas. As a result, the mechanisms for handling changes will not be triggered. Therefore, the performance of this type of algorithm is poor in this case. The time-linkage feature is hard for most algorithms, whose performance gets worse when this feature is enabled. Among all the algorithms, AMP/PSO, which was recently proposed, performs best in terms of both performance metrics in most test cases (Table 5).
5 Conclusions
This paper proposes an efficient benchmark generator, named free peaks, for constructing dynamic optimization problems. The framework is simple, feature-enriched, and computationally efficient. The properties and difficulties of a constructed problem can be analyzed without the assistance of fitness landscape visualization. The approach uses a building-blocks strategy to construct a problem; therefore, users can construct a problem with the desired features. Users can replace the component functions used in this paper with their own functions. To test an algorithm’s performance on a problem with a certain feature, users just need to switch that particular feature on or off instead of switching to another problem with a quite different structure.
References
Bentley, J.L.: Multidimensional binary search trees used for associative searching. Commun. ACM 18(9), 509–517 (1975)
Blackwell, T.: Particle swarm optimization in dynamic environments. In: Yang, S., Ong, Y.-S., Jin, Y. (eds.) Evolutionary Computation in Dynamic and Uncertain Environments. Studies in Computational Intelligence, vol. 51, pp. 29–49. Springer, Heidelberg (2007). doi:10.1007/978-3-540-49774-5_2
Blackwell, T., Branke, J.: Multiswarms, exclusion, and anti-convergence in dynamic environments. IEEE Trans. Evol. Comput. 10(4), 459–472 (2006)
Bosman, P.A.N.: Learning, anticipation and time-deception in evolutionary online dynamic optimization. In: Proceedings of 2005 Genetic and Evolutionary Computation Conference, pp. 39–47. ACM (2005)
Branke, J.: Memory enhanced evolutionary algorithms for changing optimization problems. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation, vol. 3, pp. 1875–1882 (1999)
Song, T., Liu, X., Zhao, Y., Zhang, X.: Spiking neural P systems with white hole neurons. IEEE Trans. Nanobiosci. (2016). doi:10.1109/TNB.2016.2598879
Cobb, H.G., Grefenstette, J.J.: Genetic algorithms for tracking changing environments. In: 5th International Conference on Genetic Algorithms, pp. 523–530 (1993)
Grefenstette, J.J.: Evolvability in dynamic fitness landscapes: a genetic algorithm approach. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation, vol. 3, p. 2038 (1999)
Li, C., Nguyen, T.T., Yang, M., Mavrovouniotis, M., Yang, S.: An adaptive multi-population framework for locating and tracking multiple optima. IEEE Trans. Evol. Comput. 99, 1 (2015)
Li, C., Yang, S.: A generalized approach to construct benchmark problems for dynamic optimization. In: 7th International Conference on Simulated Evolution and Learning, pp. 391–400 (2008)
Li, C., Yang, S.: A general framework of multipopulation methods with clustering in undetectable dynamic environments. IEEE Trans. Evol. Comput. 16(4), 556–577 (2012)
Li, C., Yang, S., Yang, M.: An adaptive multi-swarm optimizer for dynamic optimization problems. Evol. Comput. 22(4), 559–594 (2014)
Mendes, R., Mohais, A.S.: DynDE: a differential evolution for dynamic optimization problems. In: Proceedings of 2005 IEEE Congress on Evolutionary Computation, pp. 2808–2815 (2005)
Michalewicz, Z.: The emperor is naked: evolutionary algorithms for real-world applications. ACM Ubiquity 2012, 1–13 (2012)
Morrison, R.W., De Jong, K.A.: A test problem generator for non-stationary environments. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation, pp. 2047–2053 (1999)
Nguyen, T.T., Yang, S., Branke, J.: Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol. Comput. 6, 1–24 (2012)
Nguyen, T.T., Yao, X.: Dynamic time-linkage problems revisited. In: Giacobini, M., et al. (eds.) EvoWorkshops 2009. LNCS, vol. 5484, pp. 735–744. Springer, Heidelberg (2009). doi:10.1007/978-3-642-01129-0_83
Parrott, D., Li, X.: Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans. Evol. Comput. 10(4), 440–458 (2006)
du Plessis, M.C., Engelbrecht, A.P.: Differential evolution for dynamic environments with unknown numbers of optima. J. Glob. Optim. 55, 1–27 (2012)
Tfaili, W., Siarry, P.: Fitting of an ant colony approach to dynamic optimization through a new set of test functions. Int. J. Comput. Intell. Res. 3, 203–216 (2007)
Trojanowski, K., Michalewicz, Z.: Searching for optima in non-stationary environments. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation, vol. 3, p. 1850 (1999)
Song, T., Pan, Z., Dennis, M.W., Wang, X.: Design of logic gates using spiking neural P systems with homogeneous neurons and astrocytes-like control. Inf. Sci. 372, 380–391 (2016)
Yazdani, D., Nasiri, B., Sepas-Moghaddam, A., Meybodi, M., Akbarzadeh-Totonchi, M.: mNAFSA: a novel approach for optimization in dynamic environments with global changes. Swarm Evol. Comput. 18, 38–53 (2014)
Yazdani, D., Nasiri, B., Sepas-Moghaddam, A., Meybodi, M.R.: A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization. Appl. Soft Comput. 13(4), 2144–2158 (2013)
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant 61673355.
© 2016 Springer Nature Singapore Pte Ltd.
Li, C. (2016). An Efficient Benchmark Generator for Dynamic Optimization Problems. In: Gong, M., Pan, L., Song, T., Zhang, G. (eds) Bio-inspired Computing – Theories and Applications. BIC-TA 2016. Communications in Computer and Information Science, vol 682. Springer, Singapore. https://doi.org/10.1007/978-981-10-3614-9_8