Abstract
When evolutionary algorithms are used for function optimization, they perform a heuristic search that is influenced by many parameters. Here, the choice of the mutation probability is investigated. It is shown for a non-trivial example function that the most commonly recommended choice of mutation probability, 1/n, is far from optimal: it leads to a superpolynomial expected running time, whereas a different choice of the mutation probability yields a search algorithm with expected polynomial running time. Furthermore, a simple evolutionary algorithm with an extremely simple dynamic mutation probability scheme is suggested to overcome the difficulty of finding a proper setting for the mutation probability.
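The algorithms discussed in the abstract can be sketched as follows. This is a minimal illustration, not code from the paper: the (1+1) EA keeps a single bit string, flips each bit independently with mutation probability p, and accepts the offspring if it is at least as fit. The dynamic variant sketched here (doubling p each generation and resetting it to 1/n once it would exceed 1/2) is one plausible reading of the "extremely simple dynamic mutation probability scheme" mentioned above; the fitness function OneMax and all parameter values are illustrative assumptions.

```python
import random

def mutate(x, p):
    # Flip each bit of x independently with probability p.
    return [1 - b if random.random() < p else b for b in x]

def one_plus_one_ea(f, n, steps, dynamic=False, seed=None):
    """(1+1) EA maximizing f over bit strings of length n.

    Static variant: mutation probability fixed at 1/n.
    Dynamic variant (illustrative sketch, not the paper's exact scheme):
    double p each generation; reset to 1/n once p would exceed 1/2.
    """
    if seed is not None:
        random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    p = 1.0 / n
    for _ in range(steps):
        y = mutate(x, p)
        fy = f(y)
        if fy >= fx:  # accept offspring if not worse
            x, fx = y, fy
        if dynamic:
            p *= 2
            if p > 0.5:
                p = 1.0 / n
    return x, fx

# Illustration on OneMax (number of ones); n and steps chosen arbitrarily.
best, value = one_plus_one_ea(sum, n=20, steps=2000, dynamic=True, seed=1)
```

The theoretical results concern expected running time on a specific example function; the sketch only shows the mechanics of the search.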
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) as part of the Collaborative Research Center "Computational Intelligence" (531).
© 2000 Springer-Verlag Berlin Heidelberg
Jansen, T., Wegener, I. (2000). On the Choice of the Mutation Probability for the (1+1) EA. In: Schoenauer, M., et al. Parallel Problem Solving from Nature PPSN VI. PPSN 2000. Lecture Notes in Computer Science, vol 1917. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45356-3_9
Print ISBN: 978-3-540-41056-0
Online ISBN: 978-3-540-45356-7