Abstract
A very general class of EDAs is defined, for which universal results on the rate of diversity loss can be derived. This class, denoted SML-EDA, requires two restrictions: (1) in each generation, the new probability model is built using only data sampled from the current probability model; and (2) maximum likelihood is used to set the model parameters. The class is very general; it includes simple forms of many well-known EDAs, e.g., BOA, MIMIC, FDA, and UMDA. To study diversity loss in SML-EDAs, the trace of the empirical covariance matrix is proposed as the statistic. Two simple results are derived. Let N be the number of data vectors evaluated in each generation. It is shown that on a flat landscape, the expected value of the statistic decreases by a factor of 1 − 1/N in each generation. This result is used to show that on the Needle problem, the algorithm will, with high probability, never find the optimum unless the population size grows exponentially in the number of search variables.
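The flat-landscape decay is easy to observe empirically. Below is a minimal sketch, assuming a UMDA-style member of the SML-EDA class over binary variables: sample N vectors from the current product distribution, apply no selection (flat fitness), and refit each marginal by its maximum-likelihood estimate, the sample frequency. All names and parameter values (n_vars, pop_size, n_gens) are illustrative, not from the paper; only the 1 − 1/N decay of the expected trace is the paper's result.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vars = 20    # number of binary search variables (illustrative)
pop_size = 50  # N: data vectors evaluated per generation
n_gens = 30

# Start from the uniform distribution: p_i = 1/2 for every variable.
p = np.full(n_vars, 0.5)

def trace_empirical_cov(pop):
    """Diversity statistic: trace of the empirical covariance matrix.

    For binary variables this reduces to sum_i p_hat_i * (1 - p_hat_i),
    where p_hat_i is the sample frequency of 1s in column i (the ML
    estimate, normalised by N rather than N - 1).
    """
    p_hat = pop.mean(axis=0)
    return np.sum(p_hat * (1.0 - p_hat))

for t in range(n_gens):
    # 1) Sample N vectors from the current (product) probability model.
    pop = (rng.random((pop_size, n_vars)) < p).astype(int)
    # 2) Flat landscape: all vectors have equal fitness, so the whole
    #    sample survives. The ML update sets each marginal to its
    #    sample frequency -- the "SML" restriction.
    v = trace_empirical_cov(pop)
    print(f"gen {t:2d}  trace = {v:.4f}")
    p = pop.mean(axis=0)

print(f"expected per-generation decay factor: {1 - 1/pop_size:.4f}")
```

A single run is noisy, but averaging the trace over many independent runs recovers the geometric decay by 1 − 1/N per generation; as the trace approaches zero, every marginal fixes at 0 or 1 and all diversity is lost.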
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Shapiro, J.L. (2006). Diversity Loss in General Estimation of Distribution Algorithms. In: Runarsson, T.P., Beyer, HG., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds) Parallel Problem Solving from Nature - PPSN IX. PPSN 2006. Lecture Notes in Computer Science, vol 4193. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11844297_10
Print ISBN: 978-3-540-38990-3
Online ISBN: 978-3-540-38991-0